Search Results

Search found 5671 results on 227 pages for 'final'.


  • Building extensions for Expression Blend 4 using MEF

    - by Timmy Kokke
    Introduction
    Although it was possible to write extensions for Expression Blend and Expression Design, it wasn't very easy, and out of the box only one add-in could be used. With Expression Blend 4 it is possible to write extensions using MEF, the Managed Extensibility Framework. To date there is no documentation on how to build these extensions, so looking through the code with Reflector is something you'll have to do very often. Because Blend and Design are built using WPF, searching the visual tree with Snoop and Mole belongs to the set of tools you'll be using a lot while exploring the possibilities.
    Configuring the extension project
    Extensions are regular .NET class libraries. To create one, load up Visual Studio 2010 and start a new project. Because Blend is built using WPF, choose a WPF User Control Library from the Windows section and give it a name and location. I named mine DemoExtension1. Because Blend looks for add-ins named *.extension.dll, you'll have to tell Visual Studio to use that in the Assembly Name. To change the Assembly Name, right-click your project and go to Properties. On the Application tab, add .Extension to the name already in the Assembly name text field. To be able to debug this extension, I prefer to set the output path on the Build tab to the extensions folder of Expression Blend. This means that everything that used to go into the Debug folder is placed in the extensions folder, including all referenced assemblies that have the Copy Local property set to true.
    One last setting: to be able to debug your extension you could start Blend and attach the debugger by hand, but I like to be able to just hit F5. Go to the Debug tab and add the full path to Blend.exe in the "Start external program" text field.
    Extension Class
    Add a new class to the project. This class needs to implement the IPackage interface, which can be found in the Microsoft.Expression.Extensibility namespace. To get access to this namespace, add Microsoft.Expression.Extensibility.dll to your references. This file can be found in the same folder as the (Expression Blend 4 Beta) Blend.exe file. Make sure the Copy Local property is set to false on this reference. After implementing the interface the class would look something like:

        using Microsoft.Expression.Extensibility;

        namespace DemoExtension1
        {
            public class DemoExtension1 : IPackage
            {
                public void Load(IServices services) { }
                public void Unload() { }
            }
        }

    These two methods are called when your add-in is loaded and unloaded. The parameter passed to the Load method, IServices services, is your main entry point into Blend. The IServices interface exposes the GetService<T> method, and you will be using this method a lot: almost every part of Blend can be accessed through a service. For example, you can get to the commanding services of Blend by calling GetService<ICommandService>(), or to the windowing services by calling GetService<IWindowService>().
    To get Blend to load the extension we have to use MEF. (You can get up to speed on MEF on the community site or read the blog of Mr. MEF, Glenn Block.) In the case of Blend extensions, all that needs to be done is to mark the class with an Export attribute and pass it the type of IPackage. The Export attribute can be found in the System.ComponentModel.Composition namespace, which is part of the .NET 4 framework. You need to add this to your references.
        using System.ComponentModel.Composition;
        using Microsoft.Expression.Extensibility;

        namespace DemoExtension1
        {
            [Export(typeof(IPackage))]
            public class DemoExtension1 : IPackage
            {

    Blend is able to find your add-in now.
    Adding UI
    The add-in doesn't do very much at this point. The WPF User Control Library came with a UserControl, so let's use that in this example. I just dropped a Button and a TextBlock onto the surface of the control to have something to show in the demo. To get the UserControl to work in Blend it has to be registered with the window service. Call GetService<IWindowService>() on the IServices interface to get access to the windowing services. The UserControl will be used in Blend on a palette and has to be registered to enable it. This is done by calling RegisterPalette on the IWindowService interface and passing it an identifier, an instance of the UserControl, and a caption for the palette.

        public void Load(IServices services)
        {
            IWindowService windowService = services.GetService<IWindowService>();
            UserControl1 uc = new UserControl1();
            windowService.RegisterPalette("DemoExtension", uc, "Demo Extension");
        }

    After hitting F5 to start debugging, Expression Blend will start. You should now be able to find the add-in in the Window menu. Activating this window will show the "Demo Extension" palette with the UserControl, styled according to the settings of Blend.
    Now what?
    Because little is publicly known about how to access different parts of Blend, adding breakpoints in Debug mode and browsing through objects using the Quick Watch feature of Visual Studio is something you will have to do very often. This demo extension can be used for that purpose very easily. Add a click event handler to the button on the UserControl, change the constructor to take the IServices interface and store it in a field, and set a breakpoint in the button1_Click method.

        public partial class UserControl1 : UserControl
        {
            private readonly IServices _services;

            public UserControl1(IServices services)
            {
                _services = services;
                InitializeComponent();
            }

            private void button1_Click(object sender, RoutedEventArgs e)
            {
            }
        }

    Change the call to the constructor in the Load method and pass it the services parameter.

        public void Load(IServices services)
        {
            IWindowService service = services.GetService<IWindowService>();
            UserControl1 uc = new UserControl1(services);
            service.RegisterPalette("DemoExtension", uc, "Demo Extension");
        }

    Hit F5 to compile and start Blend. Go to the Window menu and show the add-in, then click the button to hit the breakpoint. Now place the caret on the _services field in the code window and hit Shift+F9 to show the Quick Watch window, and start exploring and discovering where to find everything you need.
    More Information
    There are no official resources available yet. Microsoft has released one extension for Expression Blend that is very useful as a reference: the Microsoft Expression Blend® Add-in Preview for Windows® Phone. This installs a .extension.dll file in the extension folder of Blend. You can load this file with Reflector and have a peek at how Microsoft builds its add-ins.
    Conclusion
    I hope this gives you something to get started building extensions for Expression Blend. Until Microsoft releases the final version, which hopefully includes more information about building extensions, we'll have to work on documenting it in the community.
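    To tie the pieces of this post together, here is a minimal end-to-end sketch of such an extension. It only assumes the members shown above (Export, IPackage, GetService<T>() and RegisterPalette); the InspectorPackage name and the palette caption are made up for the example and are not part of any official sample.

        using System.ComponentModel.Composition;
        using Microsoft.Expression.Extensibility;

        namespace DemoExtension1
        {
            [Export(typeof(IPackage))]
            public class InspectorPackage : IPackage
            {
                public void Load(IServices services)
                {
                    // Resolve the windowing service and register a palette that
                    // receives IServices, so it can be inspected with Quick Watch.
                    IWindowService windows = services.GetService<IWindowService>();
                    if (windows == null)
                    {
                        return; // nothing to register against
                    }

                    UserControl1 palette = new UserControl1(services);
                    windows.RegisterPalette("InspectorPalette", palette, "Service Inspector");
                }

                public void Unload()
                {
                    // Nothing to clean up in this sketch.
                }
            }
        }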

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially, and every organization with growing data is thinking about the next big innovation in the world of Big Data. Big Data is indeed in the future of every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is finding an ideal platform that supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while and their recent release is the best thus far. NuoDB is an elastically scalable SQL database that can run on localhost, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and have explored their Explorer and Admin sections. You can read about my experiences in these posts:
    SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
    SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
    SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database
    Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the questions I have frequently received from you is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database, it dumps data from the source database, and it loads that data into the target NuoDB database. Let's see how we can migrate our data from SQL Server to NuoDB using this simple three-step approach. But before we do that, we will create a sample database in MSSQL, which we will later migrate to NuoDB.
    Setup Step 1: Build sample data

        CREATE DATABASE [Test];
        CREATE TABLE [Department](
            [DepartmentID] [smallint] NOT NULL,
            [Name] VARCHAR(100) NOT NULL,
            [GroupName] VARCHAR(100) NOT NULL,
            [ModifiedDate] [datetime] NOT NULL,
            CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED ([DepartmentID] ASC)
        ) ON [PRIMARY];
        INSERT INTO Department
        SELECT * FROM AdventureWorks2012.HumanResources.Department;

    Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build this sample table any way you prefer.
    Setup Step 2: Install 64-bit Java
    Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer, because the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OS X, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure that the JAVA_HOME environment variable points to your Java installation directory, or the tool will not work. Here is how you can do it: go to My Computer >> right-click >> select Properties >> click Advanced System Settings >> click Environment Variables >> click New and enter the following values.
    Variable Name: JAVA_HOME
    Variable Value: C:\Program Files\Java\jre7
    Make sure you enter your own Java installation directory in the Variable Value field.
    Setup Step 3: Install a JDBC driver for SQL Server.
    There are two JDBC drivers available for SQL Server. Select the one you prefer to use by following one of the two links below: Microsoft JDBC Driver, jTDS JDBC Driver. In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB.
    Migration Step 1: NuoDB Schema Generation
    Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test', and my username and password are also 'test'. You can see that my SQL Server database is running on localhost on port 1433, and the schema of the table is 'dbo'.

        nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

    The above command generates a schema for all my SQL Server tables and puts it in the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how to execute the SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.
    Migration Step 2: Generate the Dump File of the Data
    Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file which will contain all the data from all the tables of the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a while, depending on your database size.

        nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

    Once the above command has executed successfully, you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually; the third and final step will take care of completing the migration process.
    Migration Step 3: Load the Data into NuoDB
    After building the schema and taking a dump of the data, the very next step is essential and crucial: it takes the CSV file and loads it into the NuoDB database.

        nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

    Please note that in the above command we are now targeting the NuoDB database, which we have already created with the name "MyTest". If the database does not exist, create it manually before executing the above command. I have kept the username and password as "test", but please make sure that you create a more secure password for your database for security reasons.
    Voila! You're Done
    That's it. You are done. It took 3 setup steps and 3 migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and building excellent, scale-out applications.
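    If you plan to repeat this migration regularly, the three commands above are easy to script. The following is a minimal C# sketch that simply shells out to nuodb-migrator.bat with the same arguments used in this tutorial; the install path, database names and credentials are the placeholder values from this example, so adjust them to your environment.

        using System.Diagnostics;

        class NuoDbMigration
        {
            const string MigratorBat =
                @"C:\Program Files\NuoDB\tools\migrator\bin\nuodb-migrator.bat";

            const string SourceArgs =
                "--source.driver=net.sourceforge.jtds.jdbc.Driver " +
                "--source.url=jdbc:jtds:sqlserver://localhost:1433/ " +
                "--source.username=test --source.password=test " +
                "--source.catalog=test --source.schema=dbo";

            static void Run(string arguments)
            {
                // Run one migrator command and block until it finishes.
                var startInfo = new ProcessStartInfo(MigratorBat, arguments)
                {
                    UseShellExecute = false
                };
                using (var process = Process.Start(startInfo))
                {
                    process.WaitForExit();
                }
            }

            static void Main()
            {
                Run("schema " + SourceArgs + " --output.path=/tmp/schema.sql");
                Run("dump " + SourceArgs + " --output.type=csv --output.path=/tmp/dump.cat");
                Run("load --target.url=jdbc:com.nuodb://localhost:48004/mytest " +
                    "--target.schema=dbo --target.username=test --target.password=test " +
                    "--input.path=/tmp/dump.cat");
            }
        }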
    In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB.
    Download NuoDB
    I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent blog posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own Transaction Engines (TEs) and Storage Managers (SMs), but all are managed within the Admin Console for that particular domain.
    http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
    http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 2

    - by AjarnMark
    In Part 1, I started talking about using Red-Gate’s newest version of SQL Source Control and how I really like it as a viable method to source control your database development.  It looks like this is going to turn into a little series where I will explain how we have done things in the past, and how life is different with SQL Source Control.  I will also explain some of my philosophy and methodology around deployment with these tools.  But for now, let’s talk about some of the good and the bad of the tool itself. More Kudos and Features I mentioned previously how impressed I was with the responsiveness of Red-Gate’s team.  I have been having an ongoing email conversation with Gyorgy Pocsi, and as I have run into problems or requested things behave a little differently, it has not been more than a day or two before a new Build is ready for me to download and test.  Quite impressive! I’m sure much of the requests I put in were already in the plans, so I can’t really take credit for them, but throughout this conversation, Red-Gate has implemented several features that were not in the first Early Access version.  Those include: Honoring the Fortress configuration option to require Work Item (Bug) IDs on check-ins. Adding the check-in comment text as a comment to the Work Item. Adding the list of checked-in files, along with the Fortress links for automatic History and DIFF view Updating the status of a Work Item on check-in (e.g. setting the item to Complete or, in our case “Dev-Complete”) Support for the Fortress 2.0 API, and not just the Vault Pro 5.1 API.  (See later notes regarding support for Fortress 2.0). These were all features that I felt we really needed to have in-place before I could honestly consider converting my team to using SQL Source Control on a regular basis.  Now that I have those, my only excuse is not wanting to switch boats on the team mid-stream.  So when we wrap up our current release in a few weeks, we will make the jump.  In the meantime, I will continue to bang on it to make sure it is stable.  It passed one test for stability when I did a test load of one of our larger database schemas into Fortress with SQL Source Control.  That database has about 150 tables, 200 User-Defined Functions and nearly 900 Stored Procedures.  The initial load to source control went smoothly and took just a brief amount of time. Warnings Remember that this IS still in pre-release stage and while I have not had any problems after that first hiccup I wrote about last time, you still need to treat it with a healthy respect.  As I understand it, the RTM is targeted for February.  There are a couple more features that I hope make it into the final release version, but if not, they’ll probably be coming soon thereafter.  Those are: A Browse feature to let me lookup the Work Item ID instead of having to remember it or look back in my Item details.  This is just a matter of convenience. I normally have my Work Item list open anyway, so I can easily look it up, but hey, why not make it even easier. A multi-line comment area.  The current space for writing check-in comments is a single-line text box.  I would like to have a multi-line space as I sometimes write lengthy commentary.  But I recognize that it is a struggle to get most developers to put in more than the word “fixed” as their comment, so this meets the need of the majority as-is, and it’s not a show-stopper for us. Merge.  SQL Source Control currently does not have a Merge feature.  
If two or more people make changes to the same database object, you will get a warning of the conflict and have to choose which one wins (and then manually edit to include the others’ changes).  I think it unlikely you will run into actual conflicts in Stored Procedures and Functions, but you might with Views or Tables.  This will be nice to have, but I’m not losing any sleep over it.  And I have multiple tools at my disposal to do merges manually, so really not a show-stopper for us. Automation has its limits.  As cool as this automation is, it has its limits and there are some changes that you will be better off scripting yourself.  For example, if you are refactoring table definitions, and want to change a column name, you can write that as a quick sp_rename command and preserve the data within that column.  But because this tool is looking just at a before and after picture, it cannot tell that you just renamed a column.  To the tool, it looks like you dropped one column and added another.  This is not a knock against Red-Gate.  All automated scripting tools have this issue, unless the are actively monitoring your every step to know exactly what you are doing.  This means that when you go to Deploy your changes, SQL Compare will script the change as a column drop and add, or will attempt to rebuild the entire table.  Unfortunately, neither of these approaches will preserve the existing data in that column the way an sp_rename will, and so you are better off scripting that change yourself.  Thankfully, SQL Compare will produce warnings about the potential loss of data before it does the actual synchronization and give you a chance to intercept the script and do it yourself. Also, please note that the current official word is that SQL Source Control supports Vault Professional 5.1 and later.  Vault Professional is the new name for what was previously known as Fortress.  (You can read about the name change on SourceGear’s site.)  The last version of Fortress was 2.x, and the API for Fortress 2.x is different from the API for Vault Pro.  At my company, we are currently running Fortress 2.0, with plans to upgrade to Vault Pro early next year.  Gyorgy was able to come up with a work-around for me to be able to use SQL Source Control with Fortress 2.0, even though it is not officially supported.  If you are using Fortress 2.0 and want to use SQL Source Control, be aware that this is not officially supported, but it is working for us, and you can probably get the work-around instructions from Red-Gate if you’re really, really nice to them. Upcoming Topics Some of the other topics I will likely cover in this series over the next few weeks are: How we used to do source control back in the old days (a few weeks ago) before SQL Source Control was available to Vault users What happens when you restore a database that is linked to source control Handling multiple development branches of source code Concurrent Development practices and handling Conflicts Deployment Tips and Best Practices A recap after using the tool for a while
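    To make the rename scenario above concrete, here is a minimal sketch of scripting the change yourself instead of letting the compare tool drop and re-add the column; the connection string, table and column names are made up for the example.

        using System.Data.SqlClient;

        class RenameColumnExample
        {
            static void Main()
            {
                // Placeholder connection string and object names.
                const string connectionString =
                    "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "EXEC sp_rename 'dbo.Customer.Surname', 'LastName', 'COLUMN';",
                    connection))
                {
                    connection.Open();
                    // sp_rename keeps the existing data in place, which the
                    // drop-and-add script generated by a compare tool would not.
                    command.ExecuteNonQuery();
                }
            }
        }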

    Read the article

  • When OneTug Just Isn't Enough…

    - by onefloridacoder
    I stole that from the back of a T-shirt I saw at the Orlando Code Camp 2010. This was my first code camp and my first time volunteering for an event like this as well. It was an awesome day. I cannot begin to count the "aaahh" and "I did not know I could do that" moments in the crowd, and for myself. I think it was a great day of learning for everyone at all levels. All of the presenters were different and provided great insights into the topics they were presenting. Here's a list of the sessions that I attended.
    KodeFuGuru, "Pirates vs. Ninjas"
    He touched on many good topics that loosen up some of the ways we think while writing code and still keep it looking good, readable, etc. As he pointed out in all of his examples, we might not always realize everything that's going on under the covers. He exposed a bug in his own code and verbalized the mental gymnastics he went through when he knew there was something wrong with one of his IEnumerable implementations. For me, it was great to hear that someone else labors over these gut reactions to code quickly snapped together, to the point that we rush to the refactor stage to fix what's bothering us – and learn. He had some content on extension methods that was very interesting. My "that is so cool" moment was when he swapped out the AddEntity method on an entity class and used a With extension method instead. Some of the LINQ scales fell off my eyes at that moment, and I realized my own code could be a lot more powerful (and readable) if I incorporate a few of these examples at the appropriate times. And he cautioned as well… "don't go crazy with this stuff"; there's a place and time for everything. One of the examples he demo'd toward the end of the talk, where he's chaining methods together, is on his site – cool stuff.
    Quotes I liked:
    "Extension Methods – Extension methods to put features back on the model type, without impacting the type."
    "Favor Declarative Code" – Check out the ? and ?? operators if you're not already using them.
    "Favor Fluent Code"
    "Avoid Pirate Ninja Zombies! If you see one, run!"
    I'm definitely going to be looking at "Extract Projection" when I get into VS2010.
    BDD 101 – Sean Chambers (http://github.com/schambers)
    This guy had a whole host of gremlins against him; final score: Sean 5, Gremlins 1. He ran the code samples from his GitHub repo in the GitHub code viewer, since the PC the school gave him to use didn't have VS installed. He did a great job of translating the grammar between BDD and TDD, and of showing how this style of development can be used in integration tests as well as in the different types of gated builds on a CI box – he didn't go into a discussion of CI, but we could infer that it would work. Like when we use WSSF, it does cause a class explosion, however the amount of code per class is limited to just covering the concern at hand – no more, no less. As in "When I as a <Role>, expect {something} to happen, because {}". This keeps us (the developers) from gold-plating our solutions and creating waste. He basically keeps the code that proves out the requirement down to two lines of code. Nice. He uses SpecUnit to merge this grammar into his .NET projects and gave an overview of how this ties into writing his own BDD tests.
    Some folks were familiar with Given / When / Then as story acceptance criteria, and here's how he mapped it: "Given <Context> When <Something Happens> Then <I expect...>". There are a few base classes and overrides in the SpecUnit framework that help with setting up the context for each test, which looked very handy.
    Successfully Running Your Own Coding Business
    The speaker ran through a list of items that sounded like common-sense stuff: LLC, banking, separating expenses, etc. He then moved into role-playing with business owners and an ISV. That was pretty good stuff: it pays to be a good listener all of the time, even if your client is sitting on the other side of the phone tearing your head off – but that's all it is, and get used to it, it's par for the course. Oh yeah, always answering the phone was one simple thing you can do to move your business forward. But like Cory Foy tweeted this week, "If you owe me a lot of money, don't have a message that says you're away for five weeks skiing in Colorado." Lots of food for thought that's on my list of "to-do's and to-don'ts".
    Speaker Idol
    Next, I had the pleasure of helping Russ Fustino tape this part of Code Camp as my primary volunteer opportunity that day. You remember Russ, "know the code", from the awesome Russ' Tool Shed series. He did a great job orchestrating and capturing the Speaker Idol finals. So I didn't actually miss any sessions, but was able to see three back to back in one sitting. The Idol finalists each gave a 10-minute talk on very deep subjects, but with different styles of presentation. No one walked away empty-handed for jobs very well done. Russ has details on his site. The pictures and video captured are supposed to be published on Channel 9 at a later date. It was also a valuable experience to see what makes technical speakers effective in their talks; I picked up quite a few speaking tips from the judges and contestants.
    Design For Developers – Diane Leeper
    If you are a great developer, you're probably a lousy designer. Diane didn't come to poke holes in what we think we can do with UI layout and design, but she provided some tools we can use to figure out metaphors for visualizing data. If you need help with that, check out Silverlight Pivot – that's what she was getting at. I was first introduced to her at one of John Papa's talks last year at a Lakeland User Group meeting, and she's very passionate about design. She was able to discuss the different elements of Pivot which, to a developer, just looked cool. I believe she was providing the deck from her talk to folks afterwards, so send her an email if you're interested. She says she can talk about design for hours and hours – we all left that session believing her.
    Rinse and Repeat
    Orlando Code Camp 2010 was awesome, and I would totally do it again. There were lots of folks from my shop there, and some that have left my shop to go elsewhere. So it was a reunion of sorts, and a great celebration of the simple fact that it's great to be a developer and there's a community that supports and recognizes it as well. The sponsors were generous and the organizers were very tired, namely Esteban Garcia and Will Strohl, who were responsible for making a lot of this magic happen. And if you don't believe me, check out the chatter on Twitter.
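    To make the "With" extension method idea from the first session concrete, here is a minimal, self-contained sketch of that fluent style; the Order type and the With name are hypothetical stand-ins, not the presenter's actual code.

        using System;
        using System.Collections.Generic;

        public class Order
        {
            private readonly List<string> _lines = new List<string>();
            public IList<string> Lines { get { return _lines; } }
        }

        public static class OrderExtensions
        {
            // Adds a line and returns the order itself so calls can be chained.
            public static Order With(this Order order, string line)
            {
                order.Lines.Add(line);
                return order;
            }
        }

        public static class Program
        {
            public static void Main()
            {
                Order order = new Order()
                    .With("Widget")
                    .With("Gadget");

                Console.WriteLine(order.Lines.Count); // prints 2
            }
        }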

    Read the article

  • Controlling Crystal Reports Authentication

    - by Jason Ulloa
    For those of us who have worked with Crystal Reports, it is no secret that when we try to connect our report directly to the database, we run straight into the authentication problem. That is, the moment the report starts loading it asks us to authenticate against the server, and if we don't, we simply won't see the report. Besides being tedious for users, this becomes a fairly serious security problem, which is why in most cases using a DataSet is recommended. However, for everyone who knows this and still does not want to use DataSets, but wants to connect their Crystal report directly, we will see how to implement a small class that helps with that task.
    Generally, when we work with a web application, our connection string is kept in web.config, and in many cases it also contains data such as the user and password for accessing the database. It is this connection string and these values that we will use to implement authentication in the report. The connection string usually looks like this:

        <connectionStrings>
          <remove name="LocalSqlServer"/>
          <add name="xxx" connectionString="Data Source=.\SqlExpress;Integrated Security=False;Initial Catalog=xxx;user id=myuser;password=mypass" providerName="System.Data.SqlClient"/>
        </connectionStrings>

    For our example, we will name our class CrystalRules (just a name I came up with on the spot).
    1. First step
    We create a SqlConnectionStringBuilder variable and assign it the connection string defined in web.config; we will later use it to obtain the user and password for the Crystal report.

        SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["xxx"].ConnectionString);

    2. Implementing the properties
    To keep things tidy, we create several private properties that will hold the database, password, user and server values.

        private string _dbName;
        private string _serverName;
        private string _userID;
        private string _passWord;

        private string dataBase { get { return _dbName; } set { _dbName = value; } }
        private string serverName { get { return _serverName; } set { _serverName = value; } }
        private string userName { get { return _userID; } set { _userID = value; } }
        private string dataBasePassword { get { return _passWord; } set { _passWord = value; } }

    3. Creating the method that applies the connection data
    Once we have the properties, we assign the variables the values collected by the SqlConnectionStringBuilder and create a ConnectionInfo variable to apply the connection data.

        internal void ApplyInfo(ReportDocument _oRpt)
        {
            dataBase = builder.InitialCatalog;
            serverName = builder.DataSource;
            userName = builder.UserID;
            dataBasePassword = builder.Password;

            Database oCRDb = _oRpt.Database;
            Tables oCRTables = oCRDb.Tables;
            TableLogOnInfo oCRTableLogonInfo = default(TableLogOnInfo);
            ConnectionInfo oCRConnectionInfo = new ConnectionInfo();

            oCRConnectionInfo.DatabaseName = _dbName;
            oCRConnectionInfo.ServerName = _serverName;
            oCRConnectionInfo.UserID = _userID;
            oCRConnectionInfo.Password = _passWord;

            foreach (Table oCRTable in oCRTables)
            {
                oCRTableLogonInfo = oCRTable.LogOnInfo;
                oCRTableLogonInfo.ConnectionInfo = oCRConnectionInfo;
                oCRTable.ApplyLogOnInfo(oCRTableLogonInfo);
            }
        }
    4. Creating the ReportDocument and applying the security
    Once the data has been collected and assigned, we create a ReportDocument element, assign it to the CrystalReportViewer, and apply the access data we obtained earlier.

        public void loadReport(string repName, CrystalReportViewer viewer)
        {
            // attach our report to the viewer and set the database login
            ReportDocument report = new ReportDocument();
            report.Load(HttpContext.Current.Server.MapPath("~/Reports/" + repName));
            ApplyInfo(report);
            viewer.ReportSource = report;
        }

    In the end, our complete class looks like this:

        public class CrystalRules
        {
            SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(ConfigurationManager.ConnectionStrings["Fatchoy.Data.Properties.Settings.FatchoyConnectionString"].ConnectionString);

            private string _dbName;
            private string _serverName;
            private string _userID;
            private string _passWord;

            private string dataBase { get { return _dbName; } set { _dbName = value; } }
            private string serverName { get { return _serverName; } set { _serverName = value; } }
            private string userName { get { return _userID; } set { _userID = value; } }
            private string dataBasePassword { get { return _passWord; } set { _passWord = value; } }

            internal void ApplyInfo(ReportDocument _oRpt)
            {
                dataBase = builder.InitialCatalog;
                serverName = builder.DataSource;
                userName = builder.UserID;
                dataBasePassword = builder.Password;

                Database oCRDb = _oRpt.Database;
                Tables oCRTables = oCRDb.Tables;
                TableLogOnInfo oCRTableLogonInfo = default(TableLogOnInfo);
                ConnectionInfo oCRConnectionInfo = new ConnectionInfo();

                oCRConnectionInfo.DatabaseName = _dbName;
                oCRConnectionInfo.ServerName = _serverName;
                oCRConnectionInfo.UserID = _userID;
                oCRConnectionInfo.Password = _passWord;

                foreach (Table oCRTable in oCRTables)
                {
                    oCRTableLogonInfo = oCRTable.LogOnInfo;
                    oCRTableLogonInfo.ConnectionInfo = oCRConnectionInfo;
                    oCRTable.ApplyLogOnInfo(oCRTableLogonInfo);
                }
            }

            public void loadReport(string repName, CrystalReportViewer viewer)
            {
                // attach our report to the viewer and set the database login
                ReportDocument report = new ReportDocument();
                report.Load(HttpContext.Current.Server.MapPath("~/Reports/" + repName));
                ApplyInfo(report);
                viewer.ReportSource = report;
            }

            #region instance

            private static CrystalRules m_instance;

            // Properties
            public static CrystalRules Instance
            {
                get
                {
                    if (m_instance == null)
                    {
                        m_instance = new CrystalRules();
                    }
                    return m_instance;
                }
            }

            public DataDataContext m_DataContext
            {
                get { return DataDataContext.Instance; }
            }

            #endregion instance
        }

    Granted, the solution is not robust and is not the most secure. But for scenarios like an intranet, and when we are working against the clock, it can be a great help.
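    As a usage sketch, from an ASP.NET WebForms page that hosts a CrystalReportViewer control, the singleton above could be called on first load like this; the crViewer control ID and the SalesReport.rpt file name are hypothetical, and the viewer is assumed to be declared in the page markup.

        using System;
        using CrystalDecisions.Web;

        public partial class SalesReportPage : System.Web.UI.Page
        {
            // Assumes the markup declares <CR:CrystalReportViewer ID="crViewer" runat="server" />.
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    // Load the report and apply the credentials taken from web.config.
                    CrystalRules.Instance.loadReport("SalesReport.rpt", crViewer);
                }
            }
        }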

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, how to enable the CDN for the static content of his website, which had just been published on Windows Azure. The approach would be: move the static content (the images, CSS files, etc.) into blob storage, enable the CDN on his storage account, and change the URLs of those static files to the CDN URL. I think these are the common steps when using a CDN. But this morning I found that the new Windows Azure SDK 1.4 and the new Windows Azure Developer Portal had just been published, announced at the Windows Azure Blog. One of the new features in this release concerns the CDN: we can now enable the CDN not only for a storage account but for a hosted service as well. With this new feature the steps I mentioned above become a lot simpler.
    Enable CDN for Hosted Service
    To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the "Hosted Services, Storage Accounts & CDN" item we will find a new menu on the left-hand side named "CDN", where we can manage the CDN for storage accounts and hosted services. As we can see, the hosted services and storage accounts are all listed under my subscriptions. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button on top. In this dialog we can select the subscription and the storage account or hosted service we want the CDN enabled for. If we select the hosted service, like I did in the image above, the "Source URL for the CDN endpoint" will be shown automatically. This means the Windows Azure platform will make all content under the "/cdn" folder CDN-enabled; we cannot change this value at the moment. The 3 checkboxes next to the URL are:
    Enable CDN: Enable or disable the CDN.
    HTTPS: If we need to use HTTPS connections, check it.
    Query String: If we are caching content from a hosted service and we are using query strings to specify the content to be retrieved, check it.
    Just click the "Create" button to let Windows Azure create the CDN for our hosted service. The CDN should be available within 60 minutes, as Microsoft mentions. My experience is that after about 15 minutes the CDN could be used, and we can find the CDN URL in the portal as well.
    Put the Content in CDN in Hosted Service
    Let's create a simple Windows Azure project in Visual Studio with an MVC 2 Web Role. When we created the CDN above, the source URL of the CDN endpoint was set to the "/cdn" folder, so in Visual Studio we create a folder under the website named "cdn" and put some static files there. All these files will then be cached by the CDN if we use the CDN endpoint. The CDN of a hosted service can also cache a kind of "dynamic" result when the Query String feature is enabled. We create a controller named CdnController and a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber, which can be CDN-ed as well since the URL says it's under the "/cdn" folder. In the GetNumber action we just put a number value, specified by a parameter, into the view model, so the URL could be like /Cdn/GetNumber?number=2.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace MvcWebRole1.Controllers
        {
            public class CdnController : Controller
            {
                //
                // GET: /Cdn/

                public ActionResult GetNumber(int number)
                {
                    return View(number);
                }
            }
        }

    And we add a view to display the number, which is super simple.

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            GetNumber
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>The number is: <%: Model.ToString() %></h2>
        </asp:Content>

    Since this action is under the CdnController, the URL is under the "/cdn" folder, which means it can be CDN-ed. And since we checked "Query String", the content of this dynamic page will be cached by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check whether there is any content cached with the key "GetNumber?number=2". If yes, the CDN returns the content directly; otherwise it connects to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, sends the result back to the browser, and caches it in the CDN. But notice that the query string is treated as a plain string when used as the key of the CDN element. This means the URLs below would be cached as 2 separate elements in the CDN:
    http://az25311.vo.msecnd.net/GetNumber?number=2&page=1
    http://az25311.vo.msecnd.net/GetNumber?page=1&number=2
    The final step is to upload the project to Azure.
    Test the Hosted Service CDN
    After publishing the project on Azure, we can use the CDN in the website. The CDN endpoint we created is az25311.vo.msecnd.net, so all files under the "/cdn" folder can be requested with it. Let's try it with the sample.htm and c_great_wall.jpg static files. We can also request the dynamic page GetNumber, with its query string, through the CDN endpoint. And if we refresh this page it will be shown very quickly, since the content comes from the CDN without any MVC server-side processing. The style of this page was missing, though. This is because the CSS file was not included in the "/cdn" folder, so the page cannot retrieve the CSS file from the CDN URL.
    Summary
    In this post I introduced the new CDN feature in Windows Azure that comes with the release of Windows Azure SDK 1.4 and the new Developer Portal. With the CDN for a hosted service we can just put the static resources under a "/cdn" folder so that the CDN can cache them automatically, with no need to put them into blob storage. It also supports caching dynamic content with the Query String feature, so we can cache some parts of a web page by combining user controls with the CDN. For example, we can cache the log-on user control in the master page so that the log-on part loads super-fast. There are some other new features within this release, which you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.
    Hope this helps,
    Shaun
    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
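    One thing the post does not cover is how long the CDN edge keeps a cached item. A common approach, sketched below on the assumption that the hosted-service CDN honors standard Cache-Control headers, is to set the cache policy inside the action itself; the 10-minute lifetime is just an illustrative value, not a recommendation from the original post.

        using System;
        using System.Web;
        using System.Web.Mvc;

        namespace MvcWebRole1.Controllers
        {
            public class CdnController : Controller
            {
                public ActionResult GetNumber(int number)
                {
                    // Mark the response as publicly cacheable so the CDN edge
                    // may keep it for ten minutes before re-fetching from the web role.
                    Response.Cache.SetCacheability(HttpCacheability.Public);
                    Response.Cache.SetMaxAge(TimeSpan.FromMinutes(10));

                    return View(number);
                }
            }
        }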

    Read the article

  • SQL SERVER – A Quick Look at Logging and Ideas around Logging

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, I personally get lots of ideas about it; I have always seen logging as a very generic term. Let me ask you this question before I continue writing about logging: what is the first thing that comes to your mind when you hear the word "Logging"? Now ask the same question to the person standing next to you. I am pretty confident that you will get a different answer from different people. I decided to try this and asked 5 SQL Server people the same question.
    Question: What is the first thing that comes to your mind when you hear the word "Logging"?
    Strangely enough, I got a different answer every single time. Let me list the answers I got from my friends and go over them one by one.
    Output Clause
    The very first person replied "output clause". A pretty interesting answer to start with, and I see exactly what he was thinking. SQL Server 2005 introduced the new OUTPUT clause. The OUTPUT clause has access to the inserted and deleted (virtual) tables, just like triggers. It can be used to return values to the client, and it can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements.
    Here are some references for the OUTPUT clause:
    OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE
    Reasons for Using Output Clause – Quiz
    Tips from the SQL Joes 2 Pros Development Series – Output Clause in Simple Examples
    Error Logs
    I was expecting someone to mention error logs when the subject is logging. The error log is the first place to look when there is any error, whether with the application or with the operating system. My policy is to check my server's error log every day. The reason is simple – often enough in my career I have found something in the error logs that I was not expecting. There are cases where I noticed errors in the error log and fixed them before the end user noticed. Another common practice I always tell my DBA friends to follow is that when any error happens, they should find the relevant entries in the error logs and document them. It is quite possible that they will see the same error in the error log again and will be able to fix it based on the knowledge base they have created. There are many different kinds of error logs in SQL Server as well – 1) SQL Server Error Logs 2) Windows Event Log 3) SQL Server Agent Log 4) SQL Server Profiler Log 5) SQL Server Setup Log, etc.
    Here are some references for Error Logs:
    Recycle Error Log – Create New Log file without Server Restart
    SQL Error Messages
    Change Data Capture
    I was surprised by this answer – more than the answer itself, I was surprised by the person who gave it. I always thought he was an expert in HTML and JavaScript, but I guess one should never make assumptions about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational 'change tables' rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made.
    Here are some references for Change Data Capture:
    Introduction to Change Data Capture (CDC) in SQL Server 2008
    Tuning the Performance of Change Data Capture in SQL Server 2008
    Download Script of Change Data Capture (CDC)
    CDC and TRUNCATE – Cannot truncate table because it is published for replication or enabled for Change Data Capture
    Dynamic Management View (DMV)
    I like this answer. If asked, I would not have come up with DMVs right away, but in the spirit of the original question, I think DMVs do log data. DMVs log, store, or record various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs – high availability status, query execution details, SQL Server resource status, etc.
    Here are some references for Dynamic Management Views (DMVs):
    SQL SERVER – Denali – DMV Enhancement – sys.dm_exec_query_stats – New Columns
    DMV – sys.dm_os_windows_info – Information about Operating System
    DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28
    DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module
    Transaction Log Impact Detection Using DMV – dm_tran_database_transactions
    Log Files
    I almost flipped at this final answer from my friend. This should probably have been the first answer. Yes, indeed, the log file logs SQL Server activities. One could write endlessly about log files. SQL Server uses a log file with the extension .ldf to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and that the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of a full backup the database can be brought to the desired state (point-in-time recovery is also possible). A SQL Server database has three recovery models – 1) Simple, 2) Full and 3) Bulk Logged – and each of these models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups), as one never knows when a log file backup will come into action and save the day!
    How to Stop Growing Log File Too Big
    Reduce the Virtual Log Files (VLFs) from LDF file
    Log File Growing for Model Database – model Database Log File Grew Too Big
    master Database Log File Grew Too Big
    SHRINKFILE and TRUNCATE Log File in SQL Server 2008
    Can I just say I loved this month's T-SQL Tuesday question? It really provoked very interesting conversations around me.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
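    To make the OUTPUT-clause idea above concrete, here is a minimal C# sketch that inserts a row and reads back the values the statement itself "logged" through OUTPUT INSERTED; the connection string is a placeholder, and the Department table is simply borrowed from the NuoDB migration post earlier on this page.

        using System;
        using System.Data.SqlClient;

        class OutputClauseDemo
        {
            static void Main()
            {
                // Placeholder connection string – point it at your own server.
                const string connectionString =
                    "Data Source=.;Initial Catalog=Test;Integrated Security=True";

                const string sql =
                    "INSERT INTO Department (DepartmentID, Name, GroupName, ModifiedDate) " +
                    "OUTPUT INSERTED.DepartmentID, INSERTED.Name " +
                    "VALUES (@id, @name, @group, GETDATE());";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    command.Parameters.AddWithValue("@id", (short)100);
                    command.Parameters.AddWithValue("@name", "Logging");
                    command.Parameters.AddWithValue("@group", "Demo");

                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Each row here was produced by the OUTPUT clause itself.
                            Console.WriteLine("Inserted {0} - {1}",
                                reader.GetInt16(0), reader.GetString(1));
                        }
                    }
                }
            }
        }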

    Read the article

  • CodePlex Daily Summary for Saturday, March 27, 2010

    CodePlex Daily Summary for Saturday, March 27, 2010New ProjectsAlter gear SQL index Management: SQL Index management displays a list of indexes available for the chosen database and allows you to select an individual / group of indexes to be r...ASP League Ladder System: An ASP ladder / league system for online gaming league or real life leagues also.Augmented Reality Strategy Simulator: Augmented Reality Strategy Simulator is a software suite to promote computer aided strategy planning. Sports team can visualize their strategy usin...Boo syntax highlighting for Visual Studio 2010: Simple syntax hightlighting VSX add-in for Boo language in Visual Studio 2010.easySan: easySan zur einfachen Mitgliedsverwaltung im BRKFsUnit: FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework.Laughing Dog XNA Framework: Laughing Dog is a simple to use, component based 2D framework for XNA game development. At present it is very early in development and as such is f...miniTodo: WPFでMVVMの練習にてきとうに作ったTODOアプリ 実用は無理です。My Common Library on .NET with CSharp: My Common Library on .NET with CSharp, it conclude database assecc, encrypt string, data caching, StringUtility, thank you for your view.Native code wrapping using c# : fsutil sparse commands: Ever thought about creating HUGE FILES for future use but felt bad for the wasted memory? Well, SPARSE FILES are the ANSWER! This FSUTIL SPARSE CO...Open SOA Platform: A centralized system for administering applications throught a SOA Enterprise Service Bus: Runtime environment (PROD, DEV, ...) , application and s...P-DBMS: Network and Database ProjectPraiseSight: PraiseSight is supposed to become a practical tool for churches to catalog an present their songs, lyrics and presentations on a beamer. The soluti...Pretty Good Frontend: Pretty Good Frontend is a sample frontend for ConfigMgr (SCCM) 2007 and MDT 2010 Zero Touch. S3Appender (Appender for Log4Net that Uses Amazon S3 For Storing Log Files): The S3Appender is a log4net appender that stores log events in either a MemoryStream or FileStream and sends them to S3 based on time intervals and...sEmit: sEmit (sms emitter) is an application written in C# which was built to send text messages. The project was founded in May 2009 by cansik. It works ...Silverlight RIA Tools: A tool set that generates a full RIA Solutions in Silverlightthommo cannon: Cannon for shooting down ThommosTianjin Polytechnic University Online Judge: Online Judge System Built on Microsoft technologies. Vision & Scope: A distributed OJ Solution on Windows and Cloud. Technologies used or planed...Tinare: Tinare is an byte encryption and decryption alogrithm. The input key is a string password.TinyPlug: Small Plugin Manager, written in C# Allows a project to define supported interfaces, and at runtime add plugins which support (inherit) these in...Utility niconv helps to convert text from one encoding to another: .NET implementation of GUN iconv console converter utility. The niconv program converts text from one encoding to another encoding. In the future r...WareFeed - Software Business Analytics: WareFeed is a simple but effective Software Business Analytics tool written in PHP and compatible others languages such as .NET, Java or Python. 
It...Y36API1: Semestralni projekt na Y36APINew ReleasesAlter gear SQL index Management: Setup 1.0.0: setup for first alpha releaseASP League Ladder System: ASPLeagueRelease_0_4_1: Release v 0.41Augmented Reality Strategy Simulator: Augmented Reality Strategy Simulator: Version 1.0 InstallerAutoAudit: AutoAudit 1.10e: Version 1.10e will be the final iteration of version 1 development. Version 2 will begin adding switches and options. Pleae email your suggestio...Boo syntax highlighting for Visual Studio 2010: Boo syntax VS 2010 - alpha: First release TODO: Multiline comments!Chargify.NET: Chargify.NET 0.6: Updated library, using Metered Components and updated Product information.Composer: V1.0.326.1000 Alpha: Initial Alpha release. Should be stable, with minor issues.CoNatural Components: CoNatural Components 1.6: Code fixes: Created helper classes to generate source code for type mapper/materializer. Fixed issue in optimized type materializer when loading ...CRM External View: 1.2: New Features in v1.2 release Password protected views. No more using Web Data Access role from v1. Filtering capabilities Caching for performan...Designit Video Embed Package: Release 1.1.0 beta1: You can now either have the video embeded directly in the template or have a preview in template that opens the video in a lightbox window.FsUnit: FsUnit 0.9.0 for NUnit: This release is for F# 2.0 and NUnit 2.5+.Laughing Dog XNA Framework: Laughing Dog 0.0.1: Laughing Dog - Alpla - v 0.0.1 First released version of the Laughing Dog framework.LiveUpload to Facebook: LiveUpload to Facebook 3.2: Version 3.2Become a fan on Facebook! Features Quickly and easily upload your photos and videos to Facebook, including any people tags added in Win...MapWindow6: MapWindow 6.0 msi March 26: This version adds the Join feature for creating a new "featureset" with attributes that are joined with attributes from a Excel data label named 'D...Mobile Broadband Logging Monitor: Mobile Broadband Logging Monitor 1.2.2: This edition supports: Newer and older editions of Birdstep Technology's EasyConnect HUAWEI Mobile Partner MWConn User defined location for s...Multiplayer Quiz: Release 1_6_351_0: A beta release of the next version. Please leave any errors in discussions or comments.Native code wrapping using c# : fsutil sparse commands: Fsutil sparse file native code - c sharp wrapper: Project Description A C# code wrapping a native code-Sparse files1 The code is about SPARSE files- the abillity to create huge files (for future us...Nice Libraries: 1.30 build 50325.01: Release 1.30 build 50325.01Pretty Good Frontend: Pretty Good Frontend binaries v1.0: This is the first public release of the Pretty Good Frontend binariesPylor: Pylor 0.1 alpha: This is the very first published version. I hope I can put a sample project soon.Quick Performance Monitor: Version 1.1 refresh: There was a typo or two in the sample batch file. Corrected now.Rapidshare Episode Downloader: RED v0.8.3: 0.8.1 introduced the ability to advance to the next episode. In 0.8.2 a bug was found that if episode number is less then 10, then the preceding 0...RapidWebDev - .NET Enterprise Software Development Infrastructure: RapidWebDev 1.52: RapidWebDev is an infrastructure helps to develop enterprise software solutions in Microsoft .NET easily and productively. 
This is the release vers...thommo cannon: game: gamethommo cannon: setup: setupthommo cannon: test: testTinare: Tinare DLL: Tinare DLL is a dynamic-link library written in C# which provides the functions to encrypt and decrypt a byte stream with tinare.WeatherBar: WeatherBar 2.1 [No Installation]: Minor changes to release 2.0 (http://weatherbar.codeplex.com/releases/view/42490). Fixed the bug that caused an exception to be thrown if the user...Most Popular ProjectsMetaSharpRawrWBFS ManagerASP.NET Ajax LibrarySilverlight ToolkitMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsRawrjQuery Library for SharePoint Web ServicesBlogEngine.NETMicrosoft Biology FoundationFarseer Physics Enginepatterns & practices: Composite WPF and SilverlightLINQ to TwitterTable2ClassFluent Ribbon Control SuiteNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Why Fusion Middleware matters to Oracle Applications and Fusion Applications customers?

    - by Harish Gaur
    Did you miss this general session on Monday morning presented by Amit Zavery, VP of Oracle Fusion Middleware Product Management? A recording will be made available shortly; in the meanwhile, here is a recap. Amit presented 5 strategies customers can leverage today to extend their applications.

Figure 1: 5 Oracle Fusion Middleware strategies to extend Oracle Applications & Oracle Fusion Apps

1. Engage Everyone – Provide an intuitive and social experience for application users using Oracle WebCenter
2. Extend Enterprise – Extend Oracle Applications to mobile devices using Oracle ADF Mobile
3. Orchestrate Processes – Automate key organizational processes across on-premise & cloud applications using Oracle BPM Suite & Oracle SOA Suite
4. Secure the Core – Provide single sign-on and self-service provisioning across multiple apps using Oracle Identity Management
5. Optimize Performance – Leverage the Exalogic stack to consolidate multiple instances and improve the performance of Oracle Applications

The session included 3 demonstrations to illustrate these strategies.

1. The first demo highlighted the significance of mobile applications for unlocking existing investment in applications such as EBS. Using a native iPhone application interacting with e-Business Suite, the demo showed how expense approval can be mobile enabled with enhanced visibility using BI dashboards.
2. The second demo showed how you can extend a banking process in Siebel and Oracle Policy Automation with Oracle BPM Suite. The process starts in Siebel with a customer requesting a loan, then jumps to OPA for loan recommendations and decision making, with loan processing and approvals handled in BPM Suite. Once approvals are completed, Siebel is updated to complete the process.
3. The final demo showcased FMW components inside Fusion Applications, specifically WebCenter.

Boeing, Underwriter Laboratories and Electronic Arts joined this quest and discussed 3 different approaches to leveraging the Fusion Middleware stack to maximize their investment in Oracle Applications and/or Fusion Applications technology. Let's briefly review what these customers shared during the session:

1. Extend Fusion Applications

We know that Oracle Fusion Middleware is the underlying technology infrastructure for Oracle Fusion Applications. Architecturally, Oracle Fusion Apps leverages several components of Oracle Fusion Middleware, from Oracle WebCenter for a rich collaborative interface and Oracle SOA Suite & Oracle BPM Suite for orchestrating key underlying processes, to Oracle BIEE for dashboarding and analytics. Boeing talked about how they are using Oracle BPM Suite 11g, a key component of Oracle Fusion Middleware, with Oracle Fusion Apps to transform their supply chain. Tim Murnin, Director of Supply Chain, talked about Boeing's 5-year supply chain transformation journey. Boeing's Integrated and Information Management division began with automation of the critical RFQ process using Oracle BPM Suite. This 1st phase resulted in a 38% reduction in labor costs for RFPs. As a next step in this effort, Boeing is now creating a platform to enable electronic Order Management. Fusion Apps are playing a significant role in this phase. Boeing has gone live with Oracle Fusion Product Hub and efforts are underway with Oracle Fusion Distributed Order Orchestration (DOO). So, where does Oracle BPM Suite 11g fit in this equation? Let me explain. Business processes within Fusion Apps are designed using 2 standards: Business Process Execution Language (BPEL) and Business Process Modeling Notation (BPMN). These processes can be easily configured using a declarative set of tools. Boeing leverages Oracle BPM Suite 11g (which supports BPMN 2.0) and Oracle SOA Suite (which supports BPEL) to "extend" these applications. Traditionally, customizations are done within an app using native technologies. But instead of making process changes within Fusion Apps, Boeing has taken the approach of building an "extensions" layer on top of the application.

Fig 2: Boeing's use of Oracle BPM Suite to orchestrate key supply chain processes across Fusion Apps

2. Maximize Oracle Applications investment

Fusion Middleware appeals not only to Fusion Apps customers, but is also leveraged significantly by Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards customers. Using Oracle BPM Suite and Oracle SOA Suite is the recommended extension strategy for Oracle Fusion Apps and Oracle Applications Unlimited customers. Electronic Arts, an E-Business Suite customer, spoke about their strategy to transform their order-to-cash process using Oracle SOA Suite, Oracle Foundation Packs and Oracle BAM. Udesh Naicker, Sr Director of IT at Electronic Arts (EA), discussed how the growth of social and digital gaming had started to put tremendous pressure on EA's existing IT infrastructure. He discussed the challenge of millions of micro-transactions coming from several sources – Microsoft Xbox, PayPal, and several service providers. EA found its Order-2-Cash processes stretched to their limits. They lacked visibility into these transactions across the entire value chain. EA began by consolidating their E-Business Suite R11 instances into a single E-Business Suite R12 instance. EA needed to cater to a variety of service requirements, connectivity methods, file formats, and information latency. Their integration strategy was tactical, i.e., using file uploads, TIBCO, and SQL scripts. After consolidating E-Business Suite, EA standardized their integration approach with Oracle SOA Suite and Oracle AIA Foundation Pack. Oracle SOA Suite is the platform used to extend E-Business Suite R12 and standardize 60+ interfaces across several heterogeneous systems including PeopleSoft, Demantra, SF.com, Workday, and managed EDI services spanning on-premise, hosted and cloud applications. EA believes that the Oracle SOA Suite 11g based extension strategy has helped significantly in the following ways:
- It helped them keep customizations out of E-Business Suite, thereby keeping EBS R12 vanilla and upgrade safe
- Developers are now proficient in technology which is also leveraged by Fusion Apps; this has helped them prepare for adoption of Fusion Apps in the future

Fig 3: Using Oracle SOA Suite & Oracle e-Business Suite, Electronic Arts built a new platform for order processing

3. Consolidate apps and improve scalability

Exalogic is an optimal platform for customers to consolidate their application deployments and enhance performance. Underwriter Laboratories talked about their strategy to run their mission-critical applications, including e-Business Suite, on Exalogic. Christian Anschuetz, CIO of Underwriter Laboratories (UL), shared how UL is on a growth path - $1B to $2.5B in 5 years - and planning a significant business transformation from a not-for-profit to a for-profit business. To support this growth, UL is planning to simplify its IT environment and the deployment complexity associated with ERP applications and the technology they run on. Their current applications were deployed on a variety of hardware platforms and lacked a comprehensive disaster recovery architecture. UL embarked on a mission to deploy E-Business Suite on Exalogic. UL's solution is unique because it is one of the first to deploy a large number of Oracle applications and related Fusion Middleware technologies (SOA, BI, Analytical Applications, AIA Foundation Pack and the AIA EBS to Siebel UCM prebuilt integration) on a combined Exalogic and Exadata environment. UL is planning to move to a virtualized architecture toward the end of 2012 to securely host external-facing applications like iStore.

Fig 4: Underwriter Labs deployed e-Business Suite on Exalogic to achieve performance gains

Key takeaways are:
- The Fusion Middleware platform is certified with the major Oracle Applications Unlimited offerings, and Fusion Middleware is the underlying technological infrastructure for Fusion Apps
- Customers choose Oracle Fusion Middleware to extend their applications (Apps Unlimited or Fusion Apps), to keep applications upgrade safe and to prepare for Fusion Apps
- Exalogic is an optimum platform to consolidate application deployments and enhance performance

TAGS: Fusion Apps, Exalogic, BPM Suite, SOA Suite, e-Business Suite Integration

    Read the article

  • ASP.NET MVC 3 Hosting :: Rolling with Razor in MVC v3 Preview

    - by mbridge
    Razor is an alternate view engine for asp.net MVC. It was introduced in the "WebMatrix" tool and has now been released as part of the asp.net MVC 3 preview 1. Basically, Razor allows us to replace the clunky <% %> syntax with a much cleaner coding model, which integrates very nicely with HTML. Additionally, it provides some really nice features for master page type scenarios, and you don't lose access to any of the features you are currently familiar with, such as HTML helper methods.

First, download and install the ASP.NET MVC Preview 1. You can find this at http://www.microsoft.com/downloads/details.aspx?FamilyID=cb42f741-8fb1-4f43-a5fa-812096f8d1e8&displaylang=en. Now, follow these steps to create your first asp.net mvc project using Razor:

1. Open Visual Studio 2010.
2. Create a new project. Select File->New->Project (Shift Control N).
3. You will see the list of project types, which should look similar to what's shown.
4. Select "ASP.NET MVC 3 Web Application (Razor)." Set the application name to RazorTest and the path to c:\projects\RazorTest for this tutorial. If you accidentally select ASPX, you will end up with the standard asp.net view engine and template, which isn't what you want.
5. For this tutorial, and ONLY for this tutorial, select "No, do not create a unit test project." In general, you should create and use a unit test project. Code without unit tests is kind of like diet ice cream. It just isn't very good.

Now, once we have this done, our brand new project will be created. In all likelihood, Visual Studio will leave you looking at the "HomeController.cs" class, as shown below. Immediately, you should notice one difference. The Index action used to look like:

public ActionResult Index() { ViewData["Message"] = "Welcome to ASP.Net MVC!"; return View(); }

While this will still compile and run just fine, ASP.Net MVC 3 has a much nicer way of doing this:

public ActionResult Index() { ViewModel.Message = "Welcome to ASP.Net MVC!"; return View(); }

Instead of using ViewData we are using the new ViewModel object, which uses the new dynamic typing of .Net 4.0 to allow us to express ourselves much more cleanly. This isn't a tutorial on ALL of MVC 3, but the ViewModel concept is one we will need as we dig into Razor.

What comes in the box? When we create a project using the ASP.Net MVC 3 template with Razor, we get a standard project setup, just like we did in ASP.NET MVC 2.0, but with some differences. Instead of seeing ".aspx" view files and ".ascx" files, we see files with the ".cshtml" extension, which is the default Razor extension. Before we discuss the details of a Razor file, one thing to keep in mind is that since this is an extremely early preview, IntelliSense is not currently enabled with the Razor view engine. This is promised as an update before the final release. Just like with the aspx view engine, the convention of the folder name for a set of views matching the controller name without the word "Controller" still stands. Similarly, each action in the controller will usually have a corresponding view file in the appropriate view directory. Remember, in asp.net MVC, convention over configuration is key to successful development!

The initial template organizes views in the following folders, located in the project under Views:
- Account – The default account management views used by the Account controller. Each file represents a distinct view.
- Home – Views corresponding to the appropriate actions within the home controller.
- Shared – This contains common view objects used by multiple views. Within here, master pages are stored, as well as partial page views (user controls). By convention, these partial views are named "_XXXPartial.cshtml" where XXX is the appropriate name, such as _LogonPartial.cshtml. Additionally, display templates are stored under here.

With this in mind, let us take a look at the index.cshtml file under the Home view directory. When you open up index.cshtml you should see:

1:  @inherits System.Web.Mvc.WebViewPage
2:  @{
3:      View.Title = "Home Page";
4:      LayoutPage = "~/Views/Shared/_Layout.cshtml";
5:  }
6:  <h2>@View.Message</h2>
7:  <p>
8:      To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC
9:      Website">http://asp.net/mvc</a>.
10: </p>

So looking through this, we observe the following facts:

Line 1 imports the base page that all views (using Razor) are based on, which is System.Web.Mvc.WebViewPage. Note that this is different than System.Web.Mvc.ViewPage, which is used by asp.net MVC 2.0. Also note that instead of the <% %> syntax, we use the very simple '@' sign. The view engine contains enough context-sensitive logic that it can even distinguish between @ in code and @ in an email address. It's a very clean markup.

Line 2 introduces the idea of a code block in Razor. A code block is a scoping mechanism just like it is in a normal C# class. It is designated by @{ … } and any C# code can be placed in between. Note that this is all server-side code, just like it is when using the aspx engine and <% %>.

Line 3 allows us to set the page title in the client page's file. This is a new feature which I'll talk more about when we get to master pages, but it is another of the nice things Razor brings to asp.net mvc development.

Line 4 is where we specify our "master" page, but as you can see, you can place it almost anywhere you want, because you tell it where it is located. A Layout Page is similar to a master page, but it gains a bit when it comes to flexibility. Again, we'll come back to this in a later installment.

Line 6 and beyond is where we display the contents of our view. No more using <%: %> intermixed with code. Instead, we get to use very clean syntax such as @View.Message. This is a lot easier to read than <%: View.Message %>, especially when intermixed with html. For example:

<p> My name is @View.Name and I live at @View.Address </p>

Compare this to the equivalent using the aspx view engine:

<p> My name is <%: View.Name %> and I live at <%: View.Address %> </p>

While not an earth-shaking simplification, it is easier on the eyes. As we explore other features, this clean markup will become more and more valuable.
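
To tie the Razor syntax described above together, here is a small sketch of a view that combines a code block, @-expressions and a loop. The beverages array, the View.Name property and the list markup are made up purely for illustration (they are not part of the preview template), so treat this as a sketch of the syntax rather than shipped code:

@inherits System.Web.Mvc.WebViewPage
@{
    // Hypothetical data, used here only to show the @{ } code block;
    // in a real action you would put this on ViewModel in the controller.
    var beverages = new[] { "Coffee", "Tea", "Water" };
}
<h2>@View.Message</h2>
<ul>
    @foreach (var drink in beverages) {
        <li>My name is @View.Name and I like @drink</li>
    }
</ul>

Notice how the parser flips between code and markup without any <% %> delimiters; the only hints it needs are the @ character and the opening HTML tag inside the loop.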

    Read the article

  • OpenCV install problems on Studio 12.04 - broken dependencies

    - by Will
    I'm trying to follow the Ubuntu OpenCV documentation at OpenCV. The provided script has a line which executed for some time, taking away more packages than I expected (such as ubuntu-studio video); sudo apt-get -qq remove ffmpeg x264 libx264-dev When the script gets to the line below, it bombs; sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils ffmpeg The error msg is; E: Unable to correct problems, you have held broken packages. I've since run Update-Manager, run sudo apt-get updates, rebooted, tried the above script line manually, and still no change. I've just run sudo apt-get install -f and nothing seemed to change. It did mention that some packages were no longer needed and could be removed by apt-get autoremove, so I ran that. It removed a number of packages, so I reran the install command above. Still same problem of held broken packages. I just ran sudo apt-get -u dist-upgrade Part of the response was; The following packages have been kept back: gstreamer0.10-ffmpeg I'm not sure what that means. I do know that it shows up in my Update-Manager and cannot be checked I then ran sudo dpkg --configure -a and then reran sudo apt-get -f install and the package was still not upgraded, though there was this very interesting comment; Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gstreamer0.10-ffmpeg : Depends: libavcodec53 (< 5:0) but it is not going to be installed or libavcodec-extra-53 (< 5:0) but 5:0.7.2-1ubuntu1+codecs1~oneiric2 is to be installed E: Unable to correct problems, you have held broken packages. Then I ran sudo apt-get -u dist-upgrade It showed I had one held package, so I ran; sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade It also exited without upgrading the package, so I ran; sudo apt-get remove --dry-run gstreamer0.10-ffmpeg:i386 And it gave me; *The following packages will be REMOVED: arista gstreamer0.10-ffmpeg 0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded. Remv arista [0.9.7-3ubuntu1] Remv gstreamer0.10-ffmpeg [0.10.12-1ubuntu1]* But when I reran sudo apt-get -u dist-upgrade It showed the package was still there. *The following packages have been kept back: gstreamer0.10-ffmpeg 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.* Update: Just went into Synaptic PM and completely removed gstreamer0.10-ffmpeg Reran sudo apt-get -u dist-upgrade And was told; 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. However, when I ran the original apt-get to install opencv (first code at the top of this question), it still gave me the same broken package errors. 
So I tried $ cat /etc/apt/sources.list # # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ precise universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu oneiric partner ## Uncomment the following two lines to add software from Ubuntu's ## 'extras' repository. ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # deb http://extras.ubuntu.com/ubuntu oneiric main # deb-src http://extras.ubuntu.com/ubuntu oneiric main # deb http://download.opensuse.org/repositories/home:/popinet/xUbuntu_11.04 ./ # disabled on upgrade to precise and then; $ cat /etc/apt/sources.list.d/* But I don't have enough reputation to post the results here (it says I need at least 10 reputation points to post more than 2 links), so I don't know how to provide the requested feedback. 
Then tried; $ sudo apt-get check [sudo] password for <abcd>: Reading package lists... Done Building dependency tree Reading state information... Done However, no resolution of the problem yet. What else do I need to do? Will an upgrade to Ubuntu Studio 13.xx solve this problem (or compound it)?

    Read the article

  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
    Lately we have seen an increase in projects that have failed to deliver either user-friendly content integration or satisfactory performance. Our intention is to mitigate any knowledge gap that our previous post might have left you with; therefore this post will repeat some recommendations or reference back to older useful posts. Moreover, this post will help you understand from the ground up how to design, architect and implement business-enabled, responsive and performing portals with complex requirements on business-centric information publishing.

Design the Information Model

The key to successful portal deployments is information modeling. It is a key task to understand the use case you are designing for, therefore I have designed a set of questions you need to ask yourself or your customer:

Question: Who will own the content, IT or Business? Answer: Business
Question: Who will publish the content, IT or Business? Answer: Business
Question: Will there be multiple publishers? Answer: Yes
Question: Are the publishers computer scientists? Answer: No
Question: How often does the information change - daily, weekly, monthly? Answer: Daily, weekly

If your answers to the questions match at least 2 of the above, we strongly recommend you design your content with the following principles:

- Divide your pages into logical sections, where each section is marked with its purpose
- Assign capabilities to each section: does it contain text, images and formatting, and/or is it static and populated through other contextual information?
- Select the editor/design element type:
  WYSIWYG - Rich Text
  Plain Text - non-formatted text
  Image - Image object
  Static List - static list of formatted information
  Dynamic Data List - assembled information from multiple data files through a CMIS query

The result of such a design map could look like the examples below. Based on the outcome of the required elements in the design (column 3 from the left), you will now simply design a data model in WebCenter Content - Site Studio by creating a Region Definition structure matching your design requirements. For more information on how to create a Region Definition see the following post: Region Definition Post - note: see instruction 7 for details. Each region definition can now be used to instantiate data files; a data file will hold the actual data for each element in the region definition. Another way to look at this is to see the region definition as an extension to the metadata model in WebCenter Content for each data file item.

Design Content Templates

With a solid, dependable information model we can now proceed to template creation and page design. This phase focuses on how to place the content sections from the region definition on the page via a Content Presenter template. Remember, by creating Content Presenter templates you will leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier since you already have the information model and design wire-frames to base the logic on; however, there are still a few considerations to pay attention to:

- Base the template on ADF and make only necessary exceptions to markup when required
- Leverage ADF design components for Tabs, Accordions and other similar components; this way the design in the content-published areas will comply with other design areas based on custom ADF taskflows
- There is no performance impact when using metadata or region definition based data
- All data, regardless of type (metadata or XML data), can be accessed via the Content Presenter - Node. See below for applied examples of how to access data:

Access a metadata property from the Document - #{node.propertyMap['myProp'].value}
(myProp in this example can be, for instance, dDocName, dDocTitle, xComments or any other available metadata)

Access element data from the data file XML - #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml}
(Region Definition Name is the expected region definition that the current data file is instantiating; Element name is the element value you would like to grab from the data file)

I recommend you read the following useful posts on the content template topic:
CMIS queries and template creation - note: see instruction 9 for details
Static List template rendering

For more information on templates:
Single Item Content Template
Multi Item Content Template
Expression Language

Internationalization Considerations

When integrating content assets via the Content Presenter, you by now probably understand that the content item/data file is wired to the page. What is also pretty common at this stage is that the content item/data file only supports one language, since it is not practical or business friendly to mix languages in one complex structure. Therefore you will be left with a very common dilemma: you would have to build a complete new portal for each locale, which is not a good option! However, with a little bit of information modeling and a clear naming convention this can be addressed. Basically, you simply make sure that all content items/data files are named with a predictable naming convention like "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple, non-complex customizations, you will be able to dynamically switch the actual content item/data file just before rendering. By following the proposed approach above you not only enable a simple mechanism for internationalized content, you also preserve the functionality in the Content Presenter to support business-accessible run-time publishing of information on existing and new pages. I recommend you read the following useful post on Internationalization topics:
Internationalize with Content Presenter

Integrate with Review & Approval processes

Today the review and approval functionality and configuration is based on WebCenter Content - Criteria Workflows. Criteria Workflows use the metadata of the checked-in document to evaluate whether the document is under any review/approval process. So, for instance, if a Criteria Workflow is configured to force any document with Version = "2" or "higher" and Content Type "Instructions" into review, any matching content item version on check-in will now enter the workflow before getting released for general access. A few things to consider when configuring Criteria Workflows:

- Make sure not to trigger on version one for Content Items that are Data Files - if you trigger on version 1 you will not only approve an empty document, you will also have a Content Presenter pointing to a non-existing document, since the document will only be available after successful completion of the workflow
- Approval workflows sometimes require more complex criteria; the recommendation in that case is that the metadata triggering such criteria is automatically populated, which can be achieved through many approaches including Content Profiles
- Criteria Workflows are configured and managed in the WebCenter Content Administration Applets, where you can configure one or more workflows

When you have configured Criteria Workflows, the Content Presenter will support the editors with the approval process directly inline in the "Contribution mode" of the portal. In addition to approve/reject and the details of the task, the Content Presenter natively supports viewing the current and future version of the change the user is approving. See below for an example:

Architectural recommendations

- To support review & approval processes - minimize the amount of data files per page
- Each CMIS query can consume significant time depending on the complexity of the query - minimize the amount of CMIS queries per page
- Use Content Presenter Templates based on ADF - this way you minimize the design considerations and optimize the usage of caching
- Implement the page in as few data files as possible - this simplifies the publishing process, increases performance and simplifies the release process
- A named data file (node) or list of named nodes, when integrated into pages, increases performance vs. querying for data
- A named data file (node) or list of named nodes, when integrated into pages, enables business-centric page creation and publishing and reduces the need for IT department interaction

Summary

Just because one architectural decision solves a business problem, it doesn't mean it's the right one; when designing portals, all architecture has to be in harmony and not impact each other. For instance, the most technically complex solution is not always the best, since it will most likely defeat business accessibility, performance or both. Therefore the best approach is to first design for simplicity that even a non-technical user can operate, then consider the performance impact, and finally look at the technology challenges these bring and work around them first with out-of-the-box features; after that, design and develop functions to complement the shortcomings.

    Read the article

  • Using Lightbox with _Screen

    Although I have to admit that I discovered Bernard Bout's ideas and concepts about implementing a lightbox in Visual FoxPro quite a while ago, there was no "spare" time in active projects that allowed me to have a closer look into his solution(s). Luckily, these days I received a demand to focus a little bit more on this. This article describes the steps for how to integrate and make use of Bernard's lightbox class in combination with _Screen in Visual FoxPro. The requirement in this project was to be able to visually lock the whole application (_Screen area) and guide the user to a piece of information that should not be easily ignored. Depending on its importance, any current user activity should be interrupted and focus put onto the notification.

Getting the "meat", eh, source code

Please check out Bernard's blog on Foxite directly in order to get the latest and greatest version. At the time of writing this article I use version 6.0 as described in this blog entry: The Fastest Lightbox Ever. The Lightbox class is sub-classed from the imgCanvas class from the GdiPlusX project on VFPx and therefore you need to have the source code of GdiPlusX as well, and integrate it into your development environment. The version I use is available here: Release GDIPlusX 1.20. As soon as you open the bbGdiLightbox class the first time, VFP might ask you to update the reference to the gdiplusx.vcx. As we have the sources, no problem, and you have access to Bernard's code. The class itself is pretty easy to understand: some properties that you do not need to change and three methods: Setup(), ShowLightbox() and BeforeDraw().

The challenge - _Screen or not?

Reading Bernard's article about the fastest lightbox ever, he states the following: "The class will only work on a form. It will not support any other containers" Really? And what about _Screen? Isn't that a form class, too? Yes, of course it is, but nonetheless trying to use _Screen directly will fail. Well, let's have a look at the code to see why:

WITH This
    .Left = 0
    .Top = 0
    .Height = ThisForm.Height
    .Width = ThisForm.Width
    .ZOrder(0)
    .Visible = .F.
ENDWITH

During the setup of the lightbox, as well as while capturing the image as a replacement for your forms and controls, the object reference ThisForm is used. Which is a little bit restrictive in my opinion, but let's continue. The second issue lies in the method ShowLightbox() and is introduced by the call to .Bitmap.FromScreen():

Lparameters tlVisiblilty
* tlVisiblilty - show or hide (T/F)
* grab a screen dump with controls
IF tlVisiblilty
    Local loCaptureBmp As xfcBitmap
    Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage
    lnTitleHeight = IIF(ThisForm.TitleBar = 1,Sysmetric(9),0)
    lnLeftBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(3))
    lnTopBorder = IIF(ThisForm.BorderStyle < 2,0,Sysmetric(4))
    With _Screen.System.Drawing
        loCaptureBmp = .Bitmap.FromScreen(ThisForm.HWnd,;
            lnLeftBorder,;
            lnTopBorder+lnTitleHeight,;
            ThisForm.Width ,;
            ThisForm.Height)
    ENDWITH
    * save it to a property
    This.capturebmp = loCaptureBmp
    ThisForm.SetAll("Visible",.F.)
    This.DraW()
    This.Visible = .T.
ELSE
    ThisForm.SetAll("Visible",.T.)
    This.Visible = .F.
ENDIF

My first trials in using the class ended in an exception - GdiPlusError:OutOfMemory - thrown by the Bitmap object. Frankly speaking, this happened mainly because of my lack of knowledge about GdiPlusX. After reading some documentation, especially about the FromScreen() method, I experimented a little bit.
Capturing the visible area of _Screen actually was not the real problem but the dimensions I specified for the bitmap. The modifications - step by step First of all, it is to get rid of restrictive object references on Thisform and to change them into either This.Parent or more generic into This.oForm (even better: This.oControl). The Lightbox.Setup() method now sets the necessary object reference like so: *====================================================================* Initial setup* Default value: This.oControl = "This.Parent"* Alternative: This.oControl = "_Screen"*====================================================================With This .oControl = Evaluate(.oControl) If Vartype(.oControl) == T_OBJECT .Anchor = 0 .Left = 0 .Top = 0 .Width = .oControl.Width .Height = .oControl.Height .Anchor = 15 .ZOrder(0) .Visible = .F. EndIfEndwith Also, based on other developers' comments in Bernard articles on his lightbox concept and evolution I found the source code to handle the differences between a form and _Screen and goes into Lightbox.ShowLightbox() like this: *====================================================================* tlVisibility - show or hide (T/F)* grab a screen dump with controls*====================================================================Lparameters tlVisibility Local loControl m.loControl = This.oControl If m.tlVisibility Local loCaptureBmp As xfcBitmap Local lnTitleHeight, lnLeftBorder, lnTopBorder, lcImage, loImage lnTitleHeight = Iif(m.loControl.TitleBar = 1,Sysmetric(9),0) lnLeftBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(3)) lnTopBorder = Iif(m.loControl.BorderStyle < 2,0,Sysmetric(4)) With _Screen.System.Drawing If Upper(m.loControl.Name) == Upper("Screen") loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd) Else loCaptureBmp = .Bitmap.FromScreen(m.loControl.HWnd,; lnLeftBorder,; lnTopBorder+lnTitleHeight,; m.loControl.Width ,; m.loControl.Height) EndIf Endwith * save it to a property This.CaptureBmp = loCaptureBmp m.loControl.SetAll("Visible",.F.) This.Draw() This.Visible = .T. Else This.CaptureBmp = .Null. m.loControl.SetAll("Visible",.T.) This.Visible = .F. Endif {loadposition content_adsense} Are we done? Almost... Although, Bernard says it clearly in his article: "Just drop the class on a form and call it as shown." It did not come clear to my mind in the first place with _Screen, but, yeah, he is right. Dropping the class on a form provides a permanent link between those two classes, it creates a valid This.Parent object reference. Bearing in mind that the lightbox class can not be "dropped" on the _Screen, we have to create the same type of binding during runtime execution like so: *====================================================================* Create global lightbox component*==================================================================== Local llOk, loException As Exception m.llOk = .F. m.loException = .Null. If Not Vartype(_Screen.Lightbox) == "O" Try _Screen.AddObject("Lightbox", "bbGdiLightbox") Catch To m.loException Assert .F. Message m.loException.Message EndTry EndIf m.llOk = (Vartype(_Screen.Lightbox) == "O")Return m.llOk Through runtime instantiation we create a valid binding to This.Parent in the lightbox object and the code works as expected with _Screen. Ease your life: Use properties instead of constants Having a closer look at the BeforeDraw() method might wet your appetite to simplify the code a little bit. Looking at the sample screenshots in Bernard's article you see several forms in different colors. 
This got me to modify the code like so: *====================================================================* Apply the actual lightbox effect on the captured bitmap.*====================================================================If Vartype(This.CaptureBmp) == T_OBJECT Local loGfx As xfcGraphics loGfx = This.oGfx With _Screen.System.Drawing loGfx.DrawImage(This.CaptureBmp,This.Rectangle,This.Rectangle,.GraphicsUnit.Pixel) * change the colours as needed here * possible colours are (220,128,0,0),(220,0,0,128) etc. loBrush = .SolidBrush.New(.Color.FromArgb( ; This.Opacity, .Color.FromRGB(This.BorderColor))) loGfx.FillRectangle(loBrush,This.Rectangle) EndwithEndif Create an additional property Opacity to specify the grade of translucency you would like to have without the need to change the code in each instance of the class. This way you only need to change the values of Opacity and BorderColor to tweak the appearance of your lightbox. This could be quite helpful to signalize different levels of importance (ie. green, yellow, orange, red, etc...) of notifications to the users of the application. Final thoughts Using the lightbox concept in combination with _Screen instead of forms is possible. Already Jim Wiggins comments in Bernard's article to loop through the _Screen.Forms collection in order to cascade the lightbox visibility to all active forms. Good idea. But honestly, I believe that instead of looping all forms one could use _Screen.SetAll("ShowLightbox", .T./.F., "Form") with Form.ShowLightbox_Access method to gain more speed. The modifications described above might provide even more features to your applications while consuming less resources and performance. Additionally, the restrictions to capture only forms does not exist anymore. Using _Screen you are able to capture and cover anything. The captured area of _Screen does not include any toolbars, docked windows, or menus. Therefore, it is advised to take this concept on a higher level and to combine it with additional classes that handle the state of toolbars, docked windows and menus. Which I did for the customer's project.

    Read the article

  • Enabling Service Availability in WCF Services

    - by cibrax
    It is very important for the enterprise to know which services are operational at any given point. There are many factors that can affect the availability of the services, some of them external, like a database not responding or a dependent service not working. However, in some cases you only want to know whether a service is up or down, so a simple heart-beat mechanism with "Ping" messages would do the trick. Unfortunately, WCF does not provide a built-in mechanism to support this functionality, and you probably don't want to implement a "Ping" operation in every service that you have out there. For solving this in a generic way, there is a WCF extensibility point that comes to help us, the "Operation Invokers". In a nutshell, an operation invoker is the class responsible for invoking the service method with a set of parameters and generating the output parameters and the return value. What I am going to do here is implement a custom operation invoker that intercepts any call to the service and detects whether a "Ping" header was attached to the message. If the "Ping" header is detected, the operation invoker returns a new header to tell the client that the service is alive, and the real operation execution is omitted. In that way, we have a simple heart-beat mechanism based on the messages that include a "Ping" header, so the client application can determine at any point whether the service is up or down. My operation invoker wraps the default implementation attached by default to any operation by WCF. internal class PingOperationInvoker : IOperationInvoker { IOperationInvoker innerInvoker; object[] outputs = null; object returnValue = null; public const string PingHeaderName = "Ping"; public const string PingHeaderNamespace = "http://tellago.serviceModel"; public PingOperationInvoker(IOperationInvoker innerInvoker, OperationDescription description) { this.innerInvoker = innerInvoker; outputs = description.SyncMethod.GetParameters() .Where(p => p.IsOut) .Select(p => DefaultForType(p.ParameterType)).ToArray(); var returnValue = DefaultForType(description.SyncMethod.ReturnType); } private static object DefaultForType(Type targetType) { return targetType.IsValueType ?
Activator.CreateInstance(targetType) : null; } public object Invoke(object instance, object[] inputs, out object[] outputs) { object returnValue; if (Invoke(out returnValue, out outputs)) { return returnValue; } else { return this.innerInvoker.Invoke(instance, inputs, out outputs); } } private bool Invoke(out object returnValue, out object[] outputs) { object untypedProperty = null; if (OperationContext.Current .IncomingMessageProperties.TryGetValue(HttpRequestMessageProperty.Name, out untypedProperty)) { var httpRequestProperty = untypedProperty as HttpRequestMessageProperty; if (httpRequestProperty != null) { if (httpRequestProperty.Headers[PingHeaderName] != null) { outputs = this.outputs; if (OperationContext.Current .IncomingMessageProperties.TryGetValue(HttpRequestMessageProperty.Name, out untypedProperty)) { var httpResponseProperty = untypedProperty as HttpResponseMessageProperty; httpResponseProperty.Headers.Add(PingHeaderName, "Ok"); } returnValue = this.returnValue; return true; } } } var headers = OperationContext.Current.IncomingMessageHeaders; if (headers.FindHeader(PingHeaderName, PingHeaderNamespace) > -1) { outputs = this.outputs; MessageHeader<string> header = new MessageHeader<string>("Ok"); var untyped = header.GetUntypedHeader(PingHeaderName, PingHeaderNamespace); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); returnValue = this.returnValue; return true; } returnValue = null; outputs = null; return false; } } The implementation above looks for the “Ping” header either in the Http Request or the Soap message. The next step is to implement a behavior for attaching this operation invoker to the services we want to monitor. [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = false, Inherited = true)] public class PingBehavior : Attribute, IServiceBehavior, IOperationBehavior { public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters) { } public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { } public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase) { foreach (var endpoint in serviceDescription.Endpoints) { foreach (var operation in endpoint.Contract.Operations) { if (operation.Behaviors.Find<PingBehavior>() == null) operation.Behaviors.Add(this); } } } public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation) { } public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation) { dispatchOperation.Invoker = new PingOperationInvoker(dispatchOperation.Invoker, operationDescription); } public void Validate(OperationDescription operationDescription) { } } As an operation invoker can only be added in an “operation behavior”, a trick I learned in the past is that you can implement a service behavior as well and use the “Validate” method to inject it in all the operations, so the final configuration is much easier and cleaner. You only need to decorate the service with a simple attribute to enable the “Ping” functionality. 
[PingBehavior] public class HelloWorldService : IHelloWorld { public string Hello(string name) { return "Hello " + name; } } On the other hand, the client application needs to send a dummy message with a "Ping" header to detect whether the service is available or not. In order to simplify this task, I created an extension method on the WCF client channel to do this work. public static class ClientChannelExtensions { const string PingNamespace = "http://tellago.serviceModel"; const string PingName = "Ping"; public static bool IsAvailable<TChannel>(this IClientChannel channel, Action<TChannel> operation) { try { using (OperationContextScope scope = new OperationContextScope(channel)) { MessageHeader<string> header = new MessageHeader<string>(PingName); var untyped = header.GetUntypedHeader(PingName, PingNamespace); OperationContext.Current.OutgoingMessageHeaders.Add(untyped); try { operation((TChannel)channel); var headers = OperationContext.Current.IncomingMessageHeaders; if (headers.Any(h => h.Name == PingName && h.Namespace == PingNamespace)) { return true; } else { return false; } } catch (CommunicationException) { return false; } } } catch (Exception) { return false; } } } This extension method basically adds a "Ping" header to the request message, executes the operation passed as an argument (Action<TChannel> operation), and looks for the corresponding "Ping" header in the response to see the results. The client application can use this extension with a single line of code: var client = new ServiceReference.HelloWorldClient(); var isAvailable = client.InnerChannel.IsAvailable<IHelloWorld>((c) => c.Hello(null)); The "isAvailable" variable will tell the client application whether the service is available or not. You can download the complete implementation from this location.
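
To show how this extension method might be used in practice, here is a small sketch of a console-style monitor that polls the sample HelloWorldService every 30 seconds. The HelloWorldClient proxy and IHelloWorld contract are the ones generated for the sample above, and the code assumes the appropriate using directives for the ServiceReference and ClientChannelExtensions namespaces; the polling interval, the Program class and the console logging are my own assumptions, not part of the original implementation.

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        while (true)
        {
            // New proxy per probe so a faulted channel never blocks later checks.
            var client = new ServiceReference.HelloWorldClient();
            bool isAvailable = client.InnerChannel
                .IsAvailable<IHelloWorld>(c => c.Hello(null));
            Console.WriteLine("{0:T} HelloWorldService is {1}",
                DateTime.Now, isAvailable ? "up" : "down");
            client.Abort(); // discard the probe channel regardless of its state
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}

The same IsAvailable call could just as easily feed a load balancer probe or a monitoring dashboard instead of the console.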

    Read the article

  • Metro: Introduction to the WinJS ListView Control

    - by Stephen.Walther
    The goal of this blog entry is to provide a quick introduction to the ListView control – just the bare minimum that you need to know to start using the control. When building Metro style applications using JavaScript, the ListView control is the primary control that you use for displaying lists of items. For example, if you are building a product catalog app, then you can use the ListView control to display the list of products. The ListView control supports several advanced features that I plan to discuss in future blog entries. For example, you can group the items in a ListView, you can create master/details views with a ListView, and you can efficiently work with large sets of items with a ListView. In this blog entry, we’ll keep things simple and focus on displaying a list of products. There are three things that you need to do in order to display a list of items with a ListView: Create a data source Create an Item Template Declare the ListView Creating the ListView Data Source The first step is to create (or retrieve) the data that you want to display with the ListView. In most scenarios, you will want to bind a ListView to a WinJS.Binding.List object. The nice thing about the WinJS.Binding.List object is that it enables you to take a standard JavaScript array and convert the array into something that can be bound to the ListView. It doesn’t matter where the JavaScript array comes from. It could be a static array that you declare or you could retrieve the array as the result of an Ajax call to a remote server. The following JavaScript file – named products.js – contains a list of products which can be bound to a ListView. (function () { "use strict"; var products = new WinJS.Binding.List([ { name: "Milk", price: 2.44 }, { name: "Oranges", price: 1.99 }, { name: "Wine", price: 8.55 }, { name: "Apples", price: 2.44 }, { name: "Steak", price: 1.99 }, { name: "Eggs", price: 2.44 }, { name: "Mushrooms", price: 1.99 }, { name: "Yogurt", price: 2.44 }, { name: "Soup", price: 1.99 }, { name: "Cereal", price: 2.44 }, { name: "Pepsi", price: 1.99 } ]); WinJS.Namespace.define("ListViewDemos", { products: products }); })(); The products variable represents a WinJS.Binding.List object. This object is initialized with a plain-old JavaScript array which represents an array of products. To avoid polluting the global namespace, the code above uses the module pattern and exposes the products using a namespace. The list of products is exposed to the world as ListViewDemos.products. To learn more about the module pattern and namespaces in WinJS, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/22/metro-namespaces-and-modules.aspx Creating the ListView Item Template The ListView control does not know how to render anything. It doesn’t know how you want each list item to appear. To get the ListView control to render something useful, you must create an Item Template. Here’s what our template for rendering an individual product looks like: <div id="productTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> This template displays the product name and price from the data source. Normally, you will declare your template in the same file as you declare the ListView control. In our case, both the template and ListView are declared in the default.html file. 
To learn more about templates, see my earlier blog entry: http://stephenwalther.com/blog/archive/2012/02/27/metro-using-templates.aspx Declaring the ListView The final step is to declare the ListView control in a page. Here’s the markup for declaring a ListView: <div data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate') }"> </div> You declare a ListView by adding the data-win-control to an HTML DIV tag. The data-win-options attribute is used to set two properties of the ListView. The ListView is associated with its data source with the itemDataSource property. Notice that the data source is ListViewDemos.products.dataSource and not just ListViewDemos.products. You need to associate the ListView with the dataSoure property. The ListView is associated with its item template with the help of the itemTemplate property. The ID of the item template — #productTemplate – is used to select the template from the page. Here’s what the complete version of the default.html page looks like: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>ListViewDemos</title> <!-- WinJS references --> <link href="//Microsoft.WinJS.0.6/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.0.6/js/base.js"></script> <script src="//Microsoft.WinJS.0.6/js/ui.js"></script> <!-- ListViewDemos references --> <link href="/css/default.css" rel="stylesheet"> <script src="/js/default.js"></script> <script src="/js/products.js" type="text/javascript"></script> <style type="text/css"> .product { width: 200px; height: 100px; border: white solid 1px; } </style> </head> <body> <div id="productTemplate" data-win-control="WinJS.Binding.Template"> <div class="product"> <span data-win-bind="innerText:name"></span> <span data-win-bind="innerText:price"></span> </div> </div> <div data-win-control="WinJS.UI.ListView" data-win-options="{ itemDataSource:ListViewDemos.products.dataSource, itemTemplate:select('#productTemplate') }"> </div> </body> </html> Notice that the page above includes a reference to the products.js file: <script src=”/js/products.js” type=”text/javascript”></script> The page above also contains a Template control which contains the ListView item template. Finally, the page includes the declaration of the ListView control. Summary The goal of this blog entry was to describe the minimal set of steps which you must complete to use the WinJS ListView control to display a simple list of items. You learned how to create a data source, declare an item template, and declare a ListView control.

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk Environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor they discovered that the format of the data extracted from SAP was not identical to the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs. Both these structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, due to the current workload, was not even an option. I opted for a rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.

    Note the WCF-SAP data formats (set via the ‘ReceiveIdocFormat’ field on the binding tab of the configuration dialog box):

    Typed: Returns an XML document with the hierarchy represented in XML and all fields being represented by XML tags.
    RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format.
    String: This returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top level XML tags. The files also contained some strange characters at the end of each line.

    I started with the invoice document and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was due to the addition of Namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an xPath query with a Namespace Manager. This had to be done in custom code (a sketch of this kind of namespace-aware lookup appears after the pipeline component code below). I now tried to repeat the process with the delivery document. Unfortunately when we tried to get sample typed data from SAP an exception was thrown. The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.

    Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP. For some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy. When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags. http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx

    Reproduction of Mustansir's blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place.
If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like: <ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc> (I have trimmed part of the control record so that it fits cleanly here on one line). Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and:

(a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData']
(b) Choose "String" for the Node Encoding.

What we've done is use an XPath to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.

This was potentially a much easier solution than adding the static maps to the orchestrations and overcame the issue with ‘Typed’ delivery documents. Not quite so fast… Note: when I followed Mustansir’s blog, the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I was receiving exceptions. There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.".

Although the new flat file looked the same as the old one there was a difference. In the original file all lines in the document were exactly 1064 characters long. In the new file all lines were truncated to the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode node of the custom delivery and invoice flat file disassembler pipelines.
    Execute method of the custom pipeline component:

    public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
    {
        // Convert the incoming stream to a string
        Stream s = null;
        IBaseMessagePart bodyPart = inmsg.BodyPart;

        // NOTE: inmsg.BodyPart.Data is implemented only as a setter in the http adapter API and as a
        // getter and setter for the file adapter. Use GetOriginalDataStream to get the data instead.
        if (bodyPart != null)
            s = bodyPart.GetOriginalDataStream();

        string newMsg = string.Empty;
        string strLine;
        try
        {
            StreamReader sr = new StreamReader(s);
            strLine = sr.ReadLine();
            while (strLine != null)
            {
                // Pad each line out to the fixed iDoc record length of 1064 characters
                strLine = strLine.PadRight(1064, ' ') + "\r\n";
                newMsg += strLine;
                strLine = sr.ReadLine();
            }
            sr.Close();
        }
        catch (IOException ex)
        {
            throw new Exception("Error occurred trying to pad the message to 1064 characters", ex);
        }

        // Convert back to a stream and set it on the Data property
        inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
        // Reset the position of the stream to zero
        inmsg.BodyPart.Data.Position = 0;
        return inmsg;
    }
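    For completeness, here is the kind of namespace-aware XPath lookup referred to earlier for reading RECPRN out of the EDI_DC40 segment of the typed iDoc. This is a sketch only: the prefix and namespace URI below are placeholders and must be replaced with whatever the generated typed-iDoc schema actually uses.

    // Sketch only - requires: using System.Xml;
    // "ns0" and its URI are placeholders for the namespace of the typed iDoc schema.
    XmlDocument doc = new XmlDocument();
    doc.LoadXml(idocXml);   // idocXml holds the typed iDoc message body as a string

    XmlNamespaceManager nsMgr = new XmlNamespaceManager(doc.NameTable);
    nsMgr.AddNamespace("ns0", "http://placeholder/typed/idoc/namespace");

    XmlNode recprn = doc.SelectSingleNode("//ns0:EDI_DC40/ns0:RECPRN", nsMgr);
    string partnerId = (recprn != null) ? recprn.InnerText : string.Empty;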

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (I)

    - by ccasares
    Adding attachments to a HumanTask is a feature that has existed in Oracle HWF (Human Workflow) since 10g. However, in 11g there have been many improvements to this feature and this entry will try to summarize them. Oracle BPM 11g 11.1.1.5.1 (aka PS4 Feature Pack or PS4FP) introduced two great features:

    Ability to link attachments at a Task scope or at a Process scope: "Task" attachments are only visible within the scope (lifetime) of a task. This means that, initially, any member of the assignment pattern of the Human Task will be able to handle (add, review or remove) attachments. However, once the task is completed, subsequent human tasks will not have access to them. This does not mean those attachments get lost. Once the human task is completed, attachments can be retrieved in order to, for example, check them in to a Content Server or inject them into a new and different human task. Aside note: a "re-initiated" human task will inherit comments and attachments, along with history and -optionally- payload. See here for more info. "Process" attachments are visible within the scope of the process. This means that subsequent human tasks in the same process instance will have access to them.

    Ability to use Oracle WebCenter Content (previously known as "Oracle UCM") as the backend for the attachments instead of using the HWF database backend. This feature adds all content server document lifecycle capabilities to HWF attachments (versioning, RBAC, metadata management, etc). As of today, only Oracle WCC is supported. However, Oracle BPM Suite does include a license of Oracle WCC for the sole usage of document management within the BPM scope.

    Here are some code samples that leverage the above features.

    Retrieving uploaded attachments -Non UCM-

    Non-UCM attachments (the default ones, or those that have existed since 10g and are stored "as-is" in the HWF database backend) can be retrieved after the completion of the Human Task. Firstly, we need to know whether any attachment has been effectively uploaded to the human task. There are two ways to find it out: through an XPath function, or by checking the execData/attachment[] structure. For example: Once we are sure one or more attachments were uploaded to the Human Task, we want to get them. In this example, by "get" I mean to get the attachment name and the payload of the file. Aside note: Oracle HWF lets you upload two kinds of [non-UCM] attachments: a desktop document and a Web URL. This example focuses just on the desktop document one. In order to "retrieve" an uploaded Web URL, you can get it directly from the execData/attachment[] structure. Attachment content (payload) is retrieved through the getTaskAttachmentContents() XPath function: This example shows how to retrieve as many attachments as were uploaded to the Human Task and write them to the server using the File Adapter service. The sample process excerpt is as follows:  A dummy UserTask using the "HumanTask1" Human Task followed by an Embedded Subprocess that will retrieve the attachments (we're assuming at least one attachment is uploaded): and once retrieved, we will write each of them back to a file on the server using a File Adapter service: In detail: We've defined an XSD structure that will hold the attachments (both name and payload): Then, we can create a BusinessObject based on such element (attachmentCollection) and create a variable (named attachmentsBPM) of such BusinessObject type. We will also need to keep a copy of the HumanTask output's execData structure.
    Therefore we need to create a variable of type TaskExecutionData... ...and copy the HumanTask output execData to it: Now we get into the embedded subprocess that will retrieve the attachments' payload. First, using an XSLT transformation, we feed the attachmentsBPM variable with the name of each attachment and set an empty value for the payload: Please note that we're using the XSLT for-each node to create as many target structures as necessary. Also note that we're setting an empty text value in the payload variable. The reason for this is to make sure the <payload></payload> tag gets created. This is needed when we map the payload to the XML variable later. Aside note: We are assuming that we're retrieving non-UCM attachments. However in real life you might want to check the type of attachment you're handling. The execData/attachment[]/storageType contains the values "UCM" for UCM type attachments, "TASK" for non-UCM ones or "URL" for Web URL ones. Those values are part of the "Ext.Com.Oracle.Xmlns.Bpel.Workflow.Task.StorageTypeEnum" enumeration. Once we have fed the attachmentsBPM structure so that it now contains the name of each of the attachments, it is time to iterate through it and get the payload. Therefore we will use a new embedded subprocess of type MultiInstance that will iterate over the attachmentsBPM/attachment[] element: In every iteration we will use a Script activity to map the corresponding payload element with the result of the XPath function getTaskAttachmentContents(). Please note how the target array element is indexed with the loopCounter predefined variable, so that we make sure we're feeding the right element during the array iteration:  The XPath function used looks as follows: hwf:getTaskAttachmentContents(bpmn:getDataObject('UserTask1LocalExecData')/ns1:systemAttributes/ns1:taskId, bpmn:getDataObject('attachmentsBPM')/ns:attachment[bpmn:getActivityInstanceAttribute('SUBPROCESS3067107484296', 'loopCounter')]/ns:fileName)

    where the input parameters are:
    - taskId of the just completed Human Task
    - attachment name we're retrieving the payload from
    - array index (loopCounter predefined variable)

    Aside note: The reason why we're iterating the execData/attachment[] structure through an embedded subprocess and not, for example, using XSLT and for-each nodes, is mostly because the getTaskAttachmentContents() XPath function is currently not available in XSLT mappings. So all of this example might be considered a workaround until this gets fixed/enhanced in future releases. Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsBPM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsBPM/attachment[] element: On each iteration we will use a Service activity that invokes a File Adapter write service. Here we have two important parameters to set. First, the payload itself. The file adapter expects binary data in base64 format (string). We have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target):  Second, we must set the target filename using the Service Properties dialog box:  Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration. Handling UCM attachments will be part of a different and upcoming blog entry.
    Once I finish with all posts on this matter, I will upload the whole sample project to java.net.
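    A footnote on the existence check mentioned at the start of this entry (inspecting the execData/attachment[] structure): expressed as an XPath condition it might look like the line below. The ns1 prefix binding is an assumption taken from the getTaskAttachmentContents() example above and must match whatever your own project generates.

    count(bpmn:getDataObject('UserTask1LocalExecData')/ns1:attachment) > 0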

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary

    The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.

    What’s New in MySQL Cluster 7.2

    MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key / Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the Figure below, and detailed in the MySQL Cluster New Features whitepaper. Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use. One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.

    Multi-Threaded Data Node Extensions

    The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:
    1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads.
    2) Transaction Coordinator threads (tc)
    3) Asynchronous Replication threads (rep)
    4) Schema Management threads (main)
    5) Network receiver threads (recv)
    6) Network send threads (send)
    7) IO threads

    Each of these thread types is discussed in more detail below. MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible, however, to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads. The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is also acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes. Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain.
    The schema management thread has little load, so it is implemented with a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO Thread is the final thread type and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.

    Benchmarking the Scalability Enhancements

    The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x. Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1. The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned! In a following blog post, I’ll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.

    Conclusion

    MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
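    To make the configuration side concrete, thread counts on the data nodes are driven from config.ini. The snippet below is a sketch only: MaxNoOfExecutionThreads is the simple knob, while newer 7.2 releases also accept a ThreadConfig string with per-type counts; the valid ranges and thread-type names vary by 7.2.x release, so check the reference manual for your version.

    [ndbd default]
    # Spread data node work across more execution threads (the allowed range depends on the release)
    MaxNoOfExecutionThreads = 16
    # Finer-grained alternative (sketch only), roughly the 4 LDM : 2 TC ratio discussed above:
    # ThreadConfig = ldm={count=4},tc={count=2},main={count=1},rep={count=1}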

    Read the article

  • ASP.NET MVC JavaScript Routing

    - by zowens
    Have you ever done this sort of thing in your ASP.NET MVC view? The weird thing about this isn’t the alert function, it’s the code block containing the Url formation using the ASP.NET MVC UrlHelper. The terrible thing about this experience is the obvious lack of IntelliSense and this ugly inline JavaScript code. Inline JavaScript isn’t portable to other pages beyond the current page of execution. It is generally considered bad practice to use inline JavaScript in your public-facing pages. How ludicrous would it be to copy and paste the entire jQuery code base into your pages…? Not something you’d ever consider doing. The problem is that your URLs have to be generated by ASP.NET at runtime and really can’t be copied to your JavaScript code without some trickery. How about this? Does the hard-coded URL bother you? It really bothers me. The typical solution to this whole routing in JavaScript issue is to just hard-code your URLs into your JavaScript files and call it done. But what if your URLs change? You have to now go and track down the places in JavaScript and manually replace them. What if you get the pattern wrong? Do you have tests around it? This isn’t something you should have to worry about.

    The Solution To Our Problems

    The solution is to port routing over to JavaScript. Does that sound daunting to you? It’s actually not very hard, but I decided to create my own generator that will do all the work for you. What I have created is a very basic port of the route formation feature of ASP.NET routing. It will generate the formatted URLs based on your routing patterns. Here’s how you’d do this: Does that feel familiar? It looks a lot like something you’d do inside of your ASP.NET MVC views… but this is inside of a JavaScript file… just a plain ol’ .js file.  Your first question might be why do you have to have that “.toUrl()” thing. The reason is that I wanted to make POST and GET requests dead simple. Here’s how you’d do a POST request (and the same would work with a GET request):   The first parameter is extra data passed to the post request and the second parameter is a function that handles the success of the POST request. If you’re familiar with jQuery’s Ajax goodness, you’ll know how to use it. (if not, check out http://api.jquery.com/jQuery.Post/ and the parameters are essentially the same). But we still haven’t gotten rid of the magic strings. We still have controller names and action names represented as strings. This is going to blow your mind… If you’ve seen T4MVC, this will look familiar. We’re essentially doing the same sort of thing with my JavaScript router, but we’re porting the concept to JavaScript. The good news is that parameters to the controllers are directly reflected in the action function, just like T4MVC. And the even better news… IntelliSense is easily transferred to the JavaScript version if you’re using Visual Studio as your JavaScript editor. The additional data parameter gives you the ability to pass extra routing data to the URL formatter.

    About the Magic

    You may be wondering how this all works. It’s actually quite simple. I’ve built a simple jQuery plugin (called routeManager) that hangs off the main jQuery namespace and routes all the URLs. Every time your solution builds, a routing file will be generated with this plugin, all your route and controller definitions along with your documentation. Then by the power of Visual Studio, you get some really slick IntelliSense that is hard to live without.
But there are a few steps you have to take before this whole thing is going to work. First and foremost, you need a reference to the JsRouting.Core.dll to your projects containing controllers or routes. Second, you have to specify your routes in a bit of a non-standard way. See, we can’t just pull routes out of your App_Start in your Global.asax. We force you to build a route source like this: The way we determine the routes is by pulling in all RouteSources and generating routes based upon the mapped routes. There are various reasons why we can’t use RouteCollection (different post for another day)… but in this case, you get the same route mapping experience. Converting the RouteSource to a RouteCollection is trivial (there’s an extension method for that). Next thing you have to do is generate a documentation XML file. This is done by going to the project settings, going to the build tab and clicking the checkbox. (this isn’t required, but nice to have). The final thing you need to do is hook up the generation mechanism. Pop open your project file and look for the AfterBuild step. Now change the build step task to look like this: The “PathToOutputExe” is the path to the JsRouting.Output.exe file. This will change based on where you put the EXE. The “PathToOutputJs” is a path to the output JavaScript file. The “DicrectoryOfAssemblies” is a path to the directory containing controller and routing DLLs. The JsRouting.Output.exe executable pulls in all these assemblies and scans them for controllers and route sources.   Now that wasn’t too bad, was it :)   The State of the Project This is definitely not complete… I have a lot of plans for this little project of mine. For starters, I need to look at the generation mechanism. Either I will be creating a utility that will do the project file manipulation or I will go a different direction. I’d like some feedback on this if you feel partial either way. Another thing I don’t support currently is areas. While this wouldn’t be too hard to support, I just don’t use areas and I wanted something up quickly (this is, after all, for a current project of mine). I’ll be adding support shortly. There are a few things that I haven’t covered in this post that I will most certainly be covering in another post, such as routing constraints and how these will be translated to JavaScript. I decided to open source this whole thing, since it’s a nice little utility I think others should really be using. Currently we’re using ASP.NET MVC 2, but it should work with MVC 3 as well. I’ll upgrade it as soon as MVC 3 is released. Along those same lines, I’m investigating how this could be put on the NuGet feed. Show me the Bits! OK, OK! The code is posted on my GitHub account. Go nuts. Tell me what you think. Tell me what you want. Tell me that you hate it. All feedback is welcome! https://github.com/zowens/ASP.NET-MVC-JavaScript-Routing
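    To make the underlying idea concrete, here is a generic, hand-rolled illustration of porting route formation to JavaScript. This is not the JsRouting API itself (its calls are shown in the screenshots above); the route names and patterns here are made up.

    // Generic sketch: a generated route table plus a tiny URL formatter
    var routes = {
        productDetails: "/Products/Details/{id}",   // mirrors the server-side route pattern
        search: "/Search/{term}"
    };

    function toUrl(routeName, params) {
        var url = routes[routeName];
        for (var key in params) {
            if (params.hasOwnProperty(key)) {
                url = url.replace("{" + key + "}", encodeURIComponent(params[key]));
            }
        }
        return url;
    }

    // toUrl("productDetails", { id: 42 }) === "/Products/Details/42"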

    Read the article

  • Tailoring the Oracle Fusion Applications User Interface with Oracle Composer

    - by mvaughan
    By Killian Evers, Oracle Applications User Experience

    Changing the user interface (UI) is one of the most common modifications customers perform to Oracle Fusion Applications. Typically, customers add or remove a field based on their needs. Oracle makes the process of tailoring easier for customers, and reduces the burden for their IT staff, which you can read about on the Usable Apps website or in an earlier VoX post. This is the first in a series of posts that will talk about the tools that Oracle has provided for tailoring with its family of composers. These tools are designed for business systems analysts, and they allow employees other than IT staff to make changes in an upgrade-safe and patch-friendly manner. Let’s take a deep dive into one of these composers, the Oracle Composer. Oracle Composer allows business users to modify existing UIs after they have been deployed and are in use. It is an integral component of our SaaS offering. Using Oracle Composer, users can control:

    •    Who sees the changes
    •    When the changes are made
    •    What changes are made

    Change for me, change for you, change for all of you

    One of the most powerful aspects of Oracle Composer is its flexibility. Oracle uses Oracle Composer to make changes for a user or group of users – those who see the changes. A user of Oracle Fusion Applications can make changes to the user interface at runtime via Oracle Composer, and these changes will remain every time they log into the system. For example, they can rearrange certain objects on a page, add and remove designated content, and save queries. Business systems analysts can make changes to Oracle Fusion Application UIs for groups of users or all users. Oracle’s Fusion Middleware Metadata Services (MDS) stores these changes and retrieves them at runtime, merging customizations with the base metadata and revealing the final experience to the end user. A tailored application can have multiple customization layers, and some layers can be specific to certain Fusion Applications. Some examples of customization layers are: site, organization, country, or role. Customization layers are applied in a specific order of precedence on top of the base application metadata. This image illustrates how customization layers are applied.

    What time is it?

    Users make changes to UIs at design time, runtime, and design time at runtime. Design time changes are typically made by application developers using an integrated development environment, or IDE, such as Oracle JDeveloper. Once made, these changes are then deployed to managed servers by application administrators. Oracle Composer covers the other two areas: Runtime changes and design time at runtime changes. When we say users are making changes at runtime, we mean that the changes are made within the running application and take effect immediately in the running application. A prime example of this ability is users who make changes to their running application that only affect the UIs they see. What is new with Oracle Composer is the last area: Design time at runtime.  A business systems analyst can make changes to the UIs at runtime but does not have to make those changes immediately to the application. These changes are stored as metadata, separate from the base application definitions. Customizations made at runtime can be saved in a sandbox so that the changes can be isolated and validated before being published into an environment, without the need to redeploy the application.
    What can I do?

    Oracle Composer can be run in one of two modes. Depending on which mode is chosen, you may have different capabilities available for changing the UIs. The first mode is view mode, the most common default mode for most pages. This is the mode that is used for personalizations or user customizations. Users can access this mode via the Personalization link (see below) in the global region on Oracle Fusion Applications pages. In this mode, you can rearrange components on a page with drag-and-drop, collapse or expand components, add approved external content, and change the overall layout of a page. However, all of the changes made this way are exclusive to that particular user. The second mode, edit mode, is typically made available to select users with access privileges to edit page content. We call these folks business systems analysts. This mode is used to make UI changes for groups of users. Users with appropriate privileges can access the edit mode of Oracle Composer via the Administration menu (see below) in the global region on Oracle Fusion Applications pages. In edit mode, users can also add components, delete components, and edit component properties. While in edit mode in Oracle Composer, there are two views that assist the business systems analyst with making UI changes: Design View and Source View (see below). Design View, the default view, is a WYSIWYG rendering of the page and its content. The business systems analyst can perform these actions:

    •    Add content – including custom content like a portlet displaying news or stock quotes, or predefined content delivered from Oracle Fusion Applications (including ADF components and task flows)
    •    Rearrange content – performed via drag-and-drop on the page or by using the actions menu of a component or portlet to move content around
    •    Edit component properties and parameters – for specific components, control the visual properties such as text or display labels, or parameters such as RSS feeds
    •    Hide or show components – hidden components can be re-shown
    •    Delete components
    •    Change page layout – users can select from eight pre-defined layouts
    •    Edit page properties – create or edit a page’s parameters and display properties
    •    Reset page customizations – remove edits made to the page in the current layer and/or reset the page to a previous state.

    Detailed information on each of these capabilities and the additional actions not covered in the list above can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter. This image shows what the screen looks like in Design View. Source View, the second option in the edit mode of Oracle Composer, provides a WYSIWYG and a hierarchical rendering of page components in a component navigator. In Source View, users can access and modify properties of components that are not otherwise selectable in Design View. For example, many ADF Faces components can be edited only in Source View. Users can also edit components within a task flow. This image shows what the screen looks like in Source View. Detailed information on Source View can be found in the Oracle® Fusion Middleware Developer's Guide for Oracle WebCenter. Oracle Composer enables any application or portal to be customized or personalized after it has been deployed and is in use. It is designed to be extremely easy to use so that both business systems analysts and users can edit Oracle Fusion Applications pages with a few clicks of the mouse.
    Oracle Composer runs in all modern browsers and provides a rich, dynamic way to edit JSF application and portal pages.

    From the editor: The next post in this series about composers will be on Data Composer. You can also catch Killian speaking about extensibility at OpenWorld 2012 and in her Faces of Fusion video.

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.  In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows: the Sort operator and the Hash Match operator. The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge: I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards is not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values has been seen before, but this time it needs more information than the mere existence of a new set of values; it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already.
As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
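    To tie the T-SQL side of this together, here is a minimal illustration of the choice described above; the table and column names are made up for the sake of the example.

    -- With an index that delivers rows already ordered by CustomerID, the optimizer can use a
    -- non-blocking Stream Aggregate for this GROUP BY; without ordered input it will typically
    -- choose a blocking Hash Match (Aggregate) instead.
    SELECT CustomerID, COUNT(*) AS OrderCount
    FROM dbo.Orders              -- hypothetical table
    GROUP BY CustomerID;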

    Read the article

  • top tweets WebLogic Partner Community – June 2013

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity. Please feel free to send us your news! Lucas Jellema ?Getting started with Java EE 7: The Tutorial http://docs.oracle.com/javaee/7/tutorial/doc/home.htm … Simon Haslam I'm looking forward to starting a "WLS on ODA" proof of concept - some ideas for testing: http://www.veriton.co.uk/roller/fmw/entry/virtualised_oda_proof_of_concept … Frank Munz ?It's not too late - I just submitted two presentations about #OracleWebLogic and #Coherence for the @DOAGeV conference in Nürnberg. Did you? Arun Gupta ?Tyrus 1.0 User Guide: https://tyrus.java.net/documentation/1.0/user-guide.html … #WebSocket #JavaEE7 #GlassFish Arun Gupta #JavaEE7 Launch Webinar Technical Breakout replays on Youtube: http://bit.ly/12uUicT JSON 1.0 , EJB .2, Batch 1.0 more coming! OracleBlogs ?FREE Virtual Developer Day: Java SE, Java EE, Java Emebedded on Jun 19th and 25th http://ow.ly/2xBkwV Markus Eisele #Oracle #JavaSE Critical Patch Update Pre-Release Announcement - June 2013 http://www.oracle.com/technetwork/topics/security/javacpujun2013-1899847.html … #security OracleSupport_WLS ?Simple Custom #JMX MBeans with #WebLogic 12c and #Spring http://pub.vitrue.com/3kEr Oracle Technet Building Java HTML5/WebSocket Applications with JSR 356 - 4pm - Grand Ballroom Salon A/B #qconnewyork WebLogic Community Oracle Fusion Middleware (OFM) 11g (11.1.1.7) Starter Kit available & Customizable Demos http://wp.me/p1LMIb-BK Oracle Technet #Java EE 7: Moving Java Forward for the Enterprise | @java http://pub.vitrue.com/tHiM OTNArchBeat ?Oracle Forms to ADF Modernization Reference - Convero (AMEC) Project | @AndrejusB http://pub.vitrue.com/lZPR WebLogic Community ?ExaLogic In Memory Applications & Whitepapers Building Large Scale E-Commerce Platforms & Rethink the Entire Application Lifecycle… WebLogic Community ?Coherence YouTube videos http://wp.me/p1LMIb-BG Arun Gupta ?WARNING: Next 2 days are going to be loaded with #JavaEE7 launch related tweets, and offline next week! JDeveloper & ADF Using Contextual Event in Oracle ADF http://dlvr.it/3Vpybr Oracle WebLogic Check out new blog on #hybrid_cloud & why choice is important http://bit.ly/1b1QGhL Andrejus Baranovskis Oracle Forms to ADF Modernization Reference - Convero (AMEC) Project http://fb.me/1M9iWNmAw WebLogic Community WebLogic on Oracle Database Appliance by Frances Zhao http://wp.me/p1LMIb-BE OTNArchBeat ?New: A-Team Chronicles >> A great resource for technical content covering Oracle Fusion Middleware / Fusion Apps http://pub.vitrue.com/qbzS Oracle for Partners ?Take Java To The Edge: Java Virtual Developer Day – June 19 & June 25 http://bit.ly/19fGlSX Adam Bien ?Looking forward to tomorrow's #javaee7 + #angularjs #html5 marriage at #jpoint. See you there: http://www.jpoint.nl/meetingpoint/editie-2013#sessie-1 … shay shmeltzer ?There is a new patch for the #Oracle #ADF Mobile extension - use help->check for updates to get it. Frank Munz ?Not using @OracleWebLogic 12c yet? Australia does! Reviews from my @AUSOUG workshops in Brisbane, Adelaide and Perth. http://goo.gl/BfVc4 Arun Gupta ?WebSocket, Server-Sent Events, #JavaEE7 sessions accepted at #jaxlondon ... that's gonna be at least third trip to London this year! 
WebLogic Community SPARC T5-8 Delivers Best Single System SPECjEnterprise2010 Benchmark running WebLogic 12c http://wp.me/p1LMIb-BC WebLogic Community The Ultimate Java EE Event - 16 Power Workshops mit allen wichtigen Java-EE-Themen http://wp.me/p1LMIb-BY Oracle WebLogic ?@OracleWebLogic 7 Jun New Blog Post: Using try-with-resources with JDBC objects http://ow.ly/2xryb5 JDeveloper & ADF Switching Lists of Values http://dlvr.it/3PbCkw WebLogic Community ?YouTube channel Learning Oracle's ADF http://wp.me/p1LMIb-zA Markus Eisele [GER] RT @heisedc: #Java-Entwicklung in #Oracles Public #Cloud http://heise.de/-1866388/ftw OracleBlogs ?Coherence Incubator & Community Source Code & Release Documentation http://ow.ly/2x2fXK chriscmuir ?New blog post: Migrating ADF Mobile apps from 1.0 to 1.1 https://blogs.oracle.com/onesizedoesntfitall/entry/migrating_adf_mobile_apps_from … JDeveloper & ADF ?ADF JavaScript Partitioning for Performance http://dlvr.it/3Trw15 WebLogic Community WebLogic Server Security Workshop June 27th 2013 Germany http://wp.me/p1LMIb-C7 WebLogic Community Oracle Optimized Solution for WebLogic Server 12c http://wp.me/p1LMIb-BA WebLogic Community Virtualize and Run Your Forms Applications in the Cloud - Now On Demand http://wp.me/p1LMIb-By Lucas Jellema Innteresting presentation on various aspects of end user assistance in Fusion Applications (ADF based): http://www.slideshare.net/uobroin/ouag-ireland-final2012slideshare … Adam Bien ?Summer Of JavaEE Workshops And Gigs: Free Hacking night:11.06.2013, Utrecht JavaEE 7 Meets HTML 5 and AngularJ... http://bit.ly/11XRjt4 WebLogic Community ?Real World ADF Design & Architecture Principles Trainings Germany, Poland & Portugal http://wp.me/p1LMIb-Bw Oracle for Partners ?JAVA Virtual Developer Day – June 19 & June 25 - Watch educational content and engage with Oracle experts online https://oracle.6connex.com/portal/java2013/login/?langR=en_US&mcc=OPNNSL … Markus Eisele ?[blog] Java EE 7 is final. Thoughts, Insights and further Pointers. http://dlvr.it/3SrxnB #javaee7 WebLogic Community Oracle takes the top spot for market share in the Application Server Market Segment for 2012 http://wp.me/p1LMIb-Bu OTNArchBeat ?Oracle ACE Director @LucasJellema is "very pleasantly surprised" with the new ADF Academy. 
http://pub.vitrue.com/8fad chriscmuir ?Sell out crowd for our ADF architecture course in Munich #adfarch pic.twitter.com/zhNtQJ25JV Markus Eisele ?[blog] New German Article: Java 7 Update 21 Security Improvements http://dlvr.it/3Sc8V9 #java #heise #security Markus Eisele ?[blog] New German Article: Oracle Java Cloud Service http://dlvr.it/3Sc20V #java #heise #OracleCloud OracleSupport_WLS ?Troubleshooting and Tuning with #WebLogic - Developer Webcast now available on #Youtube http://pub.vitrue.com/GSOy Andrejus Baranovskis New ADF Academy - Impressive Concept for ADF eLearning http://fb.me/2kYSMKKR5 OracleSupport_WLS ?Removing a #weblogic domain properly http://pub.vitrue.com/ZndM WebLogic Community WebLogic Partner Community Newsletter May 2013 http://wp.me/p1LMIb-Bp Oracle WebLogic ?Blog: Troubleshooting tools Part 3- Heap Dumps #Oracle #WebLogic Read the series http://bit.ly/14CQSD2 Oracle WebLogic ?Blog: #WebLogic_Server on #Oracle_Database_Appliance- How to conjure a WebLogic cluster- http://bit.ly/11fciHA Oracle WebLogic ?Check out new cool features in Oracle Traffic Director- http://bit.ly/11fbz9h WebLogic Community Additional new material WebLogic Community April 2013 http://wp.me/p1LMIb-zM WebLogic Community New WebLogic references - we want yours http://wp.me/p1LMIb-zK OracleSupport_WLS ?#Weblogic Session Replication jsession ID and F5 http://pub.vitrue.com/dWZp OracleBlogs ?top tweets WebLogic Partner Community May 2013 http://ow.ly/2xc8M5 WebLogic Community Welcome to the Spring edition of Oracle Scene http://wp.me/p1LMIb-zE Andreas Koop ?[blog post] ADF: Static Values View Object does not show any values (solved) http://bit.ly/14RDZ8p OracleBlogs ?ADF Mobile - accessing the SQLite database http://ow.ly/2x85r0 OracleSupport_WLS Youtube channel- Troubleshooting and Tuning with #WebLogic.#JRockit #SOAP #JRF http://pub.vitrue.com/qMxu Arun Gupta Next Java Magazine is all about #JavaEE7...productivity, HTML5, WebSocket, Batch & more. Subscribe http://ow.ly/lkD5D (@Oraclejavamag) Oracle WebLogic How to configure a #WebLogic cluster on #Oracle_Database_Appliance? It’s easy, read how. http://bit.ly/11fciHA Oracle WebLogic ?Blog: How to use Heap Dumps to troubleshooting memory leaks- #Oracle #WebLogic_Server http://bit.ly/14CQSD2 OracleBlogs ?Over 100 Images To Be Added to NetBeans Platform Showcase http://ow.ly/2x7Fvp Lucas Jellema A new release of the ADF EMG Task Flow Tester is now available for both JDeveloper 11 R1 and R2. https://java.net/projects/adf-task-flow-tester/pages/GettingStarted … WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • How to Export Multiple Contacts in Outlook 2013 to Multiple vCards or a Single vCard

    - by Lori Kaufman
We’ve shown you how to export a contact to and import a contact from a vCard (.vcf) file. However, what if you want to export multiple contacts at the same time to multiple vCard files or even a single vCard file? Outlook doesn’t allow you to directly export all your contacts as vCard files or as a single vCard file, but there is a way to accomplish both tasks.

Export Multiple Contacts to Multiple vCard Files

Outlook allows you to forward contact information as a vCard. You can also select multiple contacts and forward them all at once. This feature allows you to indirectly export multiple contacts at once to multiple vCard files.

Click the People tab to access your contacts. Select all the contacts you want to export using the Shift and Ctrl keys as needed, the same way you would select files in Windows Explorer. Click Forward Contact in the Share section on the Home tab and select As a Business Card from the drop-down menu. The selected contacts are attached to a new email message as .vcf files.

To select all the attached .vcf files, right-click in the Attached box and select Select All from the popup menu. Make sure the folder to which you want to export the contacts is open in Windows Explorer. Drag the selected .vcf files from the new email message to the open folder in Windows Explorer. A .vcf file is created for each contact you selected and dragged to the folder.

You can close the Message window by clicking the X in the upper-right corner of the window. NOTE: You can also close the Message window by clicking the File tab and then clicking the Close option on the left. Because you already have your .vcf files, you don’t need to save or send the message, so click No when asked if you want to save your changes. If a draft of your message was saved, a message displays asking about it; click No to delete the draft.

Export Multiple Contacts to a Single vCard (.vcf) File

If you would rather export your contacts to a single vCard (.vcf) file, there is a way to do this using Gmail. We’ll export the contacts from Outlook as a .csv file and then use Gmail to convert the .csv file to a .vcf file.

Select the contacts you want to export on the People page and click the File tab. On the Account Information screen, click Open & Export in the list on the left. On the Open screen, click Import/Export. The Import and Export Wizard displays. Select Export to a file from the Choose an action to perform list and click Next. In the Create a file of type box, select Comma Separated Values and click Next. Contacts should already be selected in the Select folder to export from box; if not, select it. Click Next.

Click Browse to the right of the Save exported file as box. Navigate to the folder to which you want to export the .csv file and enter a name for the file in the File name edit box, keeping the .csv extension. The path you selected is entered into the Save exported file as edit box. Click Next. The final screen of the Export to a File dialog box displays, listing the action to be performed. Click Finish to begin the export process. Once the export process is finished, you will see the .csv file in the folder in Windows Explorer.

Now, we will import the .csv file into Gmail. Go to Gmail and sign in to your account. Click Gmail in the upper-left corner of the main page and select Contacts from the drop-down menu. On the Contacts page, click More above your list of contacts and select Import from the drop-down menu.
Click Browse on the Import contacts dialog box that displays. Navigate to the folder in which you saved the .csv file, select the file, and click Open. Then click Import on the Import contacts dialog box.

A screen displays listing the contacts you imported but have not yet merged into your main Gmail contacts list. Select the contacts you imported. NOTE: The contacts you imported may be the only contacts in this list; if that’s the case, they should all be selected automatically.

Click More and select Export from the drop-down menu. On the Export contacts dialog box, select Selected contacts to indicate which contacts you want to export. NOTE: We could also have selected The group Imported 10/10/13, because it contains the same two contacts as Selected contacts. Select vCard format for the export format and click Export.

Gmail creates a contacts.vcf file containing the selected contacts and asks whether you want to open the file with Outlook or save it. To save the file, select the Save File option and click OK. Navigate to the folder in which you want to save the contacts.vcf file, change the name of the file in the File name edit box if desired, and click Save.

The .vcf file is saved to the selected directory and contains all the contacts you exported from Outlook. This could be used as a way to back up your contacts in one file. You could also back up the .csv file; however, if you have a lot of contacts, you will probably find that the .vcf file is smaller. We only exported two contacts, and our .csv file was 2 KB while the .vcf file was 1 KB. We will be showing you how to import multiple contacts from a single .vcf file into Outlook soon.
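If you prefer to avoid the Gmail round trip, the same .csv-to-.vcf conversion can also be scripted. The following C# console sketch is only an illustration of the idea and is not part of the steps above; it assumes the Outlook export uses the default English column headers ("First Name", "Last Name" and "E-mail Address"), which you should verify against the header row of your own .csv file, and it writes only a minimal subset of vCard properties.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;

// Minimal sketch: convert an Outlook "Comma Separated Values" contact export
// into a single .vcf file. The column names below are assumptions based on a
// typical English Outlook export; adjust them to match your own header row.
class CsvToVcf
{
    static void Main(string[] args)
    {
        string csvPath = args.Length > 0 ? args[0] : "contacts.csv";
        string vcfPath = args.Length > 1 ? args[1] : "contacts.vcf";

        string[] lines = File.ReadAllLines(csvPath);
        if (lines.Length < 2) return; // expect a header row plus at least one contact

        string[] header = SplitCsvLine(lines[0]);
        int first = Array.IndexOf(header, "First Name");
        int last  = Array.IndexOf(header, "Last Name");
        int email = Array.IndexOf(header, "E-mail Address");

        var vcf = new StringBuilder();
        foreach (string line in lines.Skip(1).Where(l => l.Trim().Length > 0))
        {
            string[] fields = SplitCsvLine(line);
            string firstName = Field(fields, first);
            string lastName  = Field(fields, last);
            string mail      = Field(fields, email);

            // One BEGIN/END block per contact, with only a few basic properties.
            vcf.AppendLine("BEGIN:VCARD");
            vcf.AppendLine("VERSION:3.0");
            vcf.AppendLine($"N:{lastName};{firstName};;;");
            vcf.AppendLine($"FN:{firstName} {lastName}".Trim());
            if (mail.Length > 0) vcf.AppendLine($"EMAIL;TYPE=INTERNET:{mail}");
            vcf.AppendLine("END:VCARD");
        }

        File.WriteAllText(vcfPath, vcf.ToString());
        Console.WriteLine($"Wrote {vcfPath}");
    }

    static string Field(string[] fields, int index) =>
        index >= 0 && index < fields.Length ? fields[index].Trim() : "";

    // Very small CSV splitter that understands double-quoted fields but not
    // embedded line breaks; good enough for simple contact exports.
    static string[] SplitCsvLine(string line)
    {
        var result = new List<string>();
        var current = new StringBuilder();
        bool inQuotes = false;
        foreach (char c in line)
        {
            if (c == '"') inQuotes = !inQuotes;
            else if (c == ',' && !inQuotes) { result.Add(current.ToString()); current.Clear(); }
            else current.Append(c);
        }
        result.Add(current.ToString());
        return result.ToArray();
    }
}

Compile this as a small console application and run it with the exported .csv file as the first argument; the resulting contacts.vcf can then be imported the same way as the file Gmail produces.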

    Read the article
