Search Results

Search found 21008 results on 841 pages for 'chuzein part ii'.


  • About Solaris 11 and UltraSPARC II/III/IV/IV+

    - by nospam(at)example.com (Joerg Moellenkamp)
    I know that I will get the usual amount of comments like "Oh, Jörg, you can't be negative about Oracle" for this article. However, as usual I want to explain the logic behind my reasoning. Yes, I know that there is a lot of UltraSPARC III, IV and IV+ gear out there. But there are some very basic questions: Does the application you are currently running on this gear stop running just because you can't run Solaris 11 on it? What is the need to upgrade a system already in production to Solaris 11? I have the impression that some people think the systems become useless the moment Oracle releases Solaris 11. I know that Sun sold UltraSPARC IV+ systems until 2009. The Sun SF490, introduced in 2004 for example, was a Sun SF480 with UltraSPARC IV and later with UltraSPARC IV+. And yes, Sun made some speedbumps. At that time the systems of the UltraSPARC III to IV+ generations were supported on Solaris 8, on Solaris 9 and on Solaris 10. However, from my perspective we sold them to customers who weren't able to migrate to Solaris 10 because they used applications not supported on Solaris 10, or who just didn't want to migrate to Solaris 10. Believe it or not, I personally know two customers that migrated core systems to Solaris 10 in, well, 2008/9. This was especially true when the M3000 was announced in 2008, when it closed the darned single-socket gap. It may be different at your site, however that's what I remember about that time from talking with customers. At first: just because there is no Solaris 11 for UltraSPARC III, IV and IV+, it doesn't mean that Solaris 10 will go away anytime soon. I just want to point you to "Expect Lifetime Support - Hardware and Operating Systems". It states about Premier Support: "Maintenance and software upgrades are included for Oracle operating systems and Oracle VM for a minimum of eight years from the general availability date." GA for Solaris 10 was in 2005. Plus 8 years: 2013, at minimum. Then you can still opt for 3 years of "Extended Support": 2016, at minimum. In 2016, systems purchased in 2009 are 7 years old, even systems purchased at the very end of the lifetime of that system generation. Those are the rules as written in the linked document. And I said minimum; the actual dates are even further in the future: Premier Support for Solaris 10 ends in 2015, Extended Support ends in 2018, and Sustaining Support is indefinite. You will find this in the document "Oracle Lifetime Support Policy: Oracle Hardware and Operating Systems". So I don't understand when some people write that Oracle is less protective about hardware investments than Sun. And for hardware it's the same as with Sun: service 5 years after EOL as part of Premier Support. I would like to write about a different perspective as well. I have to be a little cautious here, because this is going into the roadmap area, so I will only mention the public sources: John Fowler told us last year that we should expect at least 3x the single-thread performance of T3 for T4. We have 8 cores in T4, as stated by Rick Hetherington. Let's assume for a moment that a T4 core will have the performance of an UltraSPARC core (just to simplify the math and not to disclose anything about the performance; all existing SPARC cores are considered equal). So given these pieces of information, you could consolidate 8 V215, 4 or 8 V245, 2 full-blown V445, 2 full-blown V490, or 2 full-blown M3000 on a single T4 SPARC processor. The Fowler roadmap prezo talked about 4-socket systems with T4.
So: 32 V215, 16 to 8 V245, 8 full-blown V445, 8 full-blown V490, or 8 full-blown M3000 in a single system image. I think you get the idea. That said, most of the systems we are talking about have already been amortized, and perhaps it's just time to invest in new systems to yield other advantages: reduced space consumption, reduced power consumption, some of the neat features sun4v gives you, and, yes, a reduced number of processor licenses for Oracle and less money for Oracle HW/SW support. As much as I dislike it myself that my own UltraSPARC III and UltraSPARC II based systems won't run Solaris 11 (and I have quite a few of them in my personal lab), I really think that the impact on production environments will be much smaller than most people think now. By the way: the reason for this move is a quite significant new feature. I will tell you that it was this feature when it's out. I assume that telling you just one word more could lead to me having much more time to blog.

    Read the article

  • .NET Weak Event Handlers – Part II

    - by João Angelo
    In the first part of this article I showed two possible ways to create weak event handlers: one using reflection and the other using a delegate. For this performance analysis we will further differentiate between creating a delegate by providing the type of the listener at compile time (Explicit Delegate) and creating the delegate with the type of the listener only obtained at runtime (Implicit Delegate). As expected, the performance of the reflection and delegate approaches differs significantly. With the reflection-based approach, creating a weak event handler is just storing a MethodInfo reference, while with the delegate-based approach there is the need to create the delegate which will be invoked later. So at creation time reflection clearly wins, but what about when the handler is invoked? No surprises there: performing a call through reflection every time a handler is invoked is costly. In conclusion, if you want good performance when creating handlers that only sporadically get triggered, use reflection; otherwise use the delegate-based approach. The explicit delegate approach always wins against the implicit delegate, but I find the syntax for the latter much more intuitive.

    // Implicit delegate - the listener type is inferred at runtime from the handler parameter
    public static EventHandler WrapInDelegateCall(EventHandler handler);
    public static EventHandler<TArgs> WrapInDelegateCall<TArgs>(EventHandler<TArgs> handler) where TArgs : EventArgs;

    // Explicit delegate - TListener is the type that defines the handler
    public static EventHandler WrapInDelegateCall<TListener>(EventHandler handler);
    public static EventHandler<TArgs> WrapInDelegateCall<TArgs, TListener>(EventHandler<TArgs> handler) where TArgs : EventArgs;
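
    The excerpt shows only the wrapper signatures. As a rough sketch of the delegate-based technique it describes (the WeakEventHandler class below is my own illustrative shape, not the author's actual implementation), the wrapper holds a WeakReference to the listener plus an "open" delegate that takes the listener as a parameter, so the event source never keeps the listener alive:

        using System;

        // Illustrative sketch of a delegate-based weak event handler.
        public class WeakEventHandler<TListener> where TListener : class
        {
            private readonly WeakReference _listener;
            private readonly Action<TListener, object, EventArgs> _openHandler;

            public WeakEventHandler(TListener listener,
                                    Action<TListener, object, EventArgs> openHandler)
            {
                _listener = new WeakReference(listener);
                _openHandler = openHandler;
            }

            // This is the delegate that actually gets subscribed to the event.
            public void Handler(object sender, EventArgs e)
            {
                TListener target = (TListener)_listener.Target;
                if (target != null)
                {
                    // Listener still alive: forward the call through the open delegate.
                    _openHandler(target, sender, e);
                }
                // Otherwise the listener was collected; a full implementation
                // would also unsubscribe itself from the event here.
            }
        }

    Note that the open delegate receives the listener as a parameter (for example (l, s, e) => l.OnEvent(s, e)) instead of capturing it; a capturing lambda would root the listener and defeat the weak reference.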

    Read the article

  • Webcast Series Part II: Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model

    - by Melissa Centurio Lopes
    Register today for the second part of this webcast series on Thursday, November 29, 2012, 10:00 a.m. PT / 1:00 p.m. ET. Project Portfolio Management solutions have an immediate and lasting impact on both Providers' and Contractors' bottom lines by helping to manage the costs and risks of healthcare infrastructure projects from planning through handover and operation. During this webcast, Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model, Garrett Harley and Thomas Koulouris will continue their discussion on Healthcare infrastructure strategy changes and will cover the following topics: The shift in the Healthcare infrastructure strategy and how it will impact providers and contractors; the Integrated Infrastructure & Lifecycle Solutions for Capital Assets and how these solutions help your business; communication and integration between providers and contractors and why it is so important to your bottom line; and the new integrated delivery system in Healthcare infrastructure and how Project Portfolio Management is critical to the success of that system.

    Read the article

  • Html 5 clock, part ii - CSS marker classes and getElementsByClassName

    - by Norgean
    The clock I made in part i displays the time in "long" form - "It's a quarter to ten" (but in Norwegian). To save space, some letters are shared: "sevenineight" is four letters shorter than "seven nine eight". We only want to highlight the "correct" parts of, for example, "sevenineight". When I started programming the clock, each letter had its own unique ID, and my script would "get" each element individually and highlight / hide each element according to some obscure logic. I quickly realized, despite being in a post-surgery haze, that this is a stupid way to do it. And, to paraphrase NPH, if you find yourself doing something stupid, stop, and be awesome instead. We want an easy way to get all the items we want to highlight. Perhaps we can use the new getElementsByClassName function? The trick is to mark each element with a class name or two. So in "sevenineight": 's' is marked as 'h7', and the first 'n' is marked with both 'h7' and 'h9' (h for hour). <div class='h7 h9'>N</div><div class='h9'>I</div> getElementsByClassName('h9') will return the four letters of "nine". Notice that these classes are not backed by any CSS; they only appear directly in html (and are used in javascript). I have not seen classes used this way elsewhere, and have chosen to call them "marker classes" - similar to marker interfaces - until somebody comes up with a better name.

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about naming schemes and, once that is resolved, whether there is compatibility. I will gladly rework this question to be more general if there is not already an article helping with the confusion of SATA naming and standards. I see similar, but not identical, questions and will accept this as a duplicate if it is judged as such. The specifications on the eCommerce site for the NAS say "Controller Interface Type: Serial ATA-150"; the product home page for the manufacturer says "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say "Interface Type: Serial ATA-300"; the product home page for the manufacturer says "Interface: SATA 3.0 Gbps". Wikipedia says many things about different naming conventions, the closest being: "SATA II 3.0 Gbit/s, which was colloquially referred to as 'SATA 3G' [bps] or 'SATA 300' [MB/s] (since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both 'SATA 1.5G' [b/s] or 'SATA 150' [MB/s]). Therefore, they will operate with negligible differences between them." Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article

  • hide part of the content of an element

    - by user1843471
    Sorry, this may be a kind of weird problem: I have existing HTML code which I cannot directly edit or delete parts of. The problem is: inside a div element in this code, there is some text which I want to hide. There is also another element inside this div which I don't want to hide. It looks something like this: <div> ....Text I want to hide.... <table> ... Text I don't want to hide...</table> </div> My question: is it possible to hide the "....Text I want to hide...." while not hiding the "... Text I don't want to hide..."? (For example, using javascript?)

    Read the article

  • SQL Azure Security: DoS Part II

    - by Herve Roggero
    Ah! When you shoot yourself in the foot... a few times... it hurts! That's what I did on Sunday, to learn more about the behavior of the SQL Azure Denial of Service prevention feature. This article is a short follow-up to my last post on this feature. In this post, I will outline some of the lessons learned from testing the behavior of SQL Azure from two machines. From the standpoint of SQL Azure, they look like one machine since they are behind a NAT. All logins affected The first thing to note is that all logins are affected. If you lock yourself out of a specific database, none of the logins will work on that database. In fact the database size becomes "--" in the SQL Azure Portal. Less than 100 sessions I was able to see 50+ sessions being made in SQL Azure (by looking at sys.dm_exec_sessions) before being locked out. The DoS feature thus appears to be triggered in part by the number of open sessions. I could not determine, however, whether the lockout is also triggered by the speed at which connection requests are made. Other Databases Unaffected This was interesting... the DoS feature works at the database level. Other databases were available for me to use. Just Wait Initially I thought that going through the SQL Azure Portal and connecting from there would reset the database and allow me to connect again. Unfortunately this doesn't seem to be the case. You will have to wait. And the more you lock yourself out, the more you will have to wait... The first time the database became available again within 30 seconds or so; the second time within 2-3 minutes; and the third time... within 2-3 hours... Successful Logins The DoS feature appears to engage only for valid logins. If you have a login failure, it doesn't seem to count. I ran a test with over 100 login failures without being locked out.
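
    For anyone who wants to reproduce the session-count observation, a minimal sketch along these lines could be used (the connection string is a placeholder; sys.dm_exec_sessions is the same DMV mentioned above):

        using System;
        using System.Data.SqlClient;

        class SessionCounter
        {
            static void Main()
            {
                // Placeholder connection string - substitute your own server,
                // database and credentials.
                const string cs = "Server=tcp:yourserver.database.windows.net;" +
                                  "Database=yourdb;User ID=user@yourserver;" +
                                  "Password=secret;Encrypt=true;";
                using (SqlConnection con = new SqlConnection(cs))
                {
                    con.Open();
                    // Count the sessions currently visible for this database.
                    using (SqlCommand cmd = new SqlCommand(
                        "SELECT COUNT(*) FROM sys.dm_exec_sessions", con))
                    {
                        int sessions = (int)cmd.ExecuteScalar();
                        Console.WriteLine("Open sessions: {0}", sessions);
                    }
                }
            }
        }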

    Read the article

  • SATA vs SATA-II

    - by Rayne
    Hi all, I'm looking to replace my hard disk, which is a Seagate Barracuda 7200.11 ST3500320AS 500GB. My friend told me to get a SATA HDD, not a SATA-II. What is the difference between the two? And is my old HDD a SATA or SATA-II? The new HDD I'm looking at is a Seagate 7200.12 ST3500418AS, which the store assistant told me is a SATA-II. However, the Seagate website labels both as SATA only. I'm afraid that I'll buy a HDD which is incompatible with my system, especially since I'm going to install Windows 7 on it and I previously had the problem of Windows (Vista) setup not recognizing my hard drive. Would the new HDD be compatible? Thank you. Regards, Rayne

    Read the article

  • Using the Data Form Web Part (SharePoint 2010) Site Agnostically!

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/24/154465.aspx As a Developer who has worked closely with web designers (Power Users) in a SharePoint environment, I have come across the issue of making the Data Form Web Part reusable across the site collection! In SharePoint 2007 it was very easy, and this blog pointed the way to make it happen: Josh Gaffey's Blog. In SharePoint 2010 something changed! This method failed except when using a Data Form Web Part that pointed to a list in the Site Collection Root! I am making this discussion relative to a developer who creates a solution (WSP) with all the artifacts embedded, where the user shouldn't have any involvement in the process except to activate features. The Scenario: 1. A Power User creates a Data Form Web Part using SharePoint Designer 2010! It is a great web part that uses all the power of SharePoint Designer and XSLT (conditional formatting, etc.). 2. Other users in the site collection want to use that specific web part in sub sites in the site collection, pointing to a list with the same name, not at the site collection root! The Issues: 1. The Data Form Web Part data source uses a List ID (GUID) to point to the specific list, which means a list in a sub site will have a new GUID, different from the one of the list against which the web part was created with SharePoint Designer! Obviously, the list needs to be the same list (fields, content types, etc.) with different data. 2. How can we make this web part site agnostic, and dependent only on the list's name? I had this problem come up over and over and decided to put my solution forward! The Solution: 1. Use the XSL of the Data Form Web Part created by the Power User in SharePoint Designer! 2. Extend the OOTB Data Form Web Part to use this XSL and point to a list by name. This is a hybrid solution that requires some coding (Developer) and the XSL (Power User) artifacts put together in a Visual Studio SharePoint solution. Here are the solution steps in summary: 1. Create an empty SharePoint project in Visual Studio 2. Create a Module and Feature and put the XSL file created by the Power User into it a. Scope the feature to web 3. Create a Feature Receiver to create the list. The same list from which the Data Form Web Part was created by the Power User. a. Scope the feature to web 4. Create a web part extending the Data Form Web Part a. Point the Data Form Web Part to a list by name b. Point the Data Form Web Part XSL link to the XSL added using the Module feature c. Scope the feature to Site i. This is because all web parts are in the site collection web part gallery. So in a narrative summary: we are creating a list in code which has the same name and (site columns) as the list from which the Power User created the Data Form Web Part using SharePoint Designer, and we are creating a web part in code which extends the OOTB Data Form Web Part to point to a list by name and use the XSL created by the Power User. Okay! Here are the steps with images and code! At the end of this post I will provide a link to the code for a solution which works in any site! I want to TOOT the HORN for the power of this solution! It is the mantra I use with all my clients! What is the basic skill of a SharePoint developer? Creating an application that uses the data from a SharePoint list and makes that data visible to the user in a manner which meets requirements!
Create an Empty SharePoint 2010 Project Here I am naming my project DJ.DataFormWebPart Create a Code folder Copy and paste the Extension and Utilities classes (found in the solution provided at the end of this post) Change the namespace to match this project The list from which the Power User created the Data Form Web Part (and its XSL) in SharePoint Designer is now going to be created in code! If it is already in code, then all the better! Here I am going to create a list in the site collection root and add some data to it! For the purpose of this discussion I will actually create this list in code before using SharePoint Designer, for simplicity! So here I create the list and deploy it within this solution before I do anything else. I will use a list I created before for demo purposes. Footer List is used within the footer of my master page. Add a new Feature: Here I name the Feature FooterList and add a Feature Event Receiver: Here is the code for the Event Receiver: I have a previous blog post about adding lists in code, so I will not take time to narrate this code: using System; using System.Runtime.InteropServices; using System.Security.Permissions; using Microsoft.SharePoint; using DJ.DataFormWebPart.Code; namespace DJ.DataFormWebPart.Features.FooterList { /// <summary> /// This class handles events raised during feature activation, deactivation, installation, uninstallation, and upgrade. /// </summary> /// <remarks> /// The GUID attached to this class may be used during packaging and should not be modified. /// </remarks> [Guid("a58644fd-9209-41f4-aa16-67a53af7a9bf")] public class FooterListEventReceiver : SPFeatureReceiver { SPWeb currentWeb = null; SPSite currentSite = null; const string columnGroup = "DJ"; const string ctName = "FooterContentType"; // Uncomment the method below to handle the event raised after a feature has been activated.
public override void FeatureActivated(SPFeatureReceiverProperties properties) { using (SPWeb spWeb = properties.GetWeb() as SPWeb) { using (SPSite site = new SPSite(spWeb.Site.ID)) { using (SPWeb rootWeb = site.OpenWeb(site.RootWeb.ID)) { //add the fields addFields(rootWeb); //add content type SPContentType testCT = rootWeb.ContentTypes[ctName]; // we will not create the content type if it exists if (testCT == null) { //the content type does not exist, add it addContentType(rootWeb, ctName); } if ((spWeb.Lists.TryGetList("FooterList") == null)) { //create the list if it doesn't exist CreateFooterList(spWeb, site); } } } } } #region ContentType public void addFields(SPWeb spWeb) { Utilities.addField(spWeb, "Link", SPFieldType.URL, false, columnGroup); Utilities.addField(spWeb, "Information", SPFieldType.Text, false, columnGroup); } private static void addContentType(SPWeb spWeb, string name) { SPContentType myContentType = new SPContentType(spWeb.ContentTypes["Item"], spWeb.ContentTypes, name) { Group = columnGroup }; spWeb.ContentTypes.Add(myContentType); addContentTypeLinkages(spWeb, myContentType); myContentType.Update(); } public static void addContentTypeLinkages(SPWeb spWeb, SPContentType ct) { Utilities.addContentTypeLink(spWeb, "Link", ct); Utilities.addContentTypeLink(spWeb, "Information", ct); } private void CreateFooterList(SPWeb web, SPSite site) { Guid newListGuid = web.Lists.Add("FooterList", "Footer List", SPListTemplateType.GenericList); SPList newList = web.Lists[newListGuid]; newList.ContentTypesEnabled = true; var footer = site.RootWeb.ContentTypes[ctName]; newList.ContentTypes.Add(footer); newList.ContentTypes.Delete(newList.ContentTypes["Item"].Id); newList.Update(); var view = newList.DefaultView; //add all view fields here //view.ViewFields.Add("NewsTitle"); view.ViewFields.Add("Link"); view.ViewFields.Add("Information"); view.Update(); } #endregion } } Basically we created a content type with two site columns, Link and Information. I had to change some code as we are working at the SPWeb level and need Content Types at the SPSite level! I'll use a new Site Collection for this demo (a best practice, to keep old artifacts from impinging on development): Next we will add this list to the root of the site collection by deploying this solution, add some data, and then use SharePoint Designer to create a Data Form Web Part. The list has been added, now let's add some data: Okay, let's add a Data Form Web Part in SharePoint Designer. Create a new web part page in the site pages library: I will name it TestWP.aspx and edit it in advanced mode: Let's add an empty Data Form Web Part to the web part zone: Click on the web part to add a data source: Choose FooterList in the Data Source menu: Choose appropriate fields and select insert as multiple item view: Here is what it looks like after insertion: Let's add some conditional formatting if the Information field is not blank: Choose Create (right side), apply formatting: Choose the Information field and set the condition not null: Click Set Style: Here is the result: Okay! Not flashy, but simple enough for this demo. Remember this is the job of the Power User! All we want from this web part is the XSL style sheet out of SharePoint Designer. We are going to use it as the XSL for our web part, which we will be creating next. Let's add a web part to our project extending the OOTB Data Form Web Part.
Add a new item from the Visual Studio add menu: Choose Web Part: Change WebPart to DataFormWebPart (Oh well, my namespace needs some improvement, but it will sure make it readily identifiable as an extended web part!) Below is the code for this web part: using System; using System.ComponentModel; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls; using System.Text; namespace DJ.DataFormWebPart.DataFormWebPart { [ToolboxItemAttribute(false)] public class DataFormWebPart : Microsoft.SharePoint.WebPartPages.DataFormWebPart { protected override void OnInit(EventArgs e) { base.OnInit(e); this.ChromeType = PartChromeType.None; this.Title = "FooterListDF"; try { //SPSite site = SPContext.Current.Site; SPWeb web = SPContext.Current.Web; SPList list = web.Lists.TryGetList("FooterList"); if (list != null) { string queryList1 = "<Query><Where><IsNotNull><FieldRef Name='Title' /></IsNotNull></Where><OrderBy><FieldRef Name='Title' Ascending='True' /></OrderBy></Query>"; uint maximumRowList1 = 10; SPDataSource dataSourceList1 = GetDataSource(list.Title, web.Url, list, queryList1, maximumRowList1); this.DataSources.Add(dataSourceList1); this.XslLink = web.Url + "/Assests/Footer.xsl"; this.ParameterBindings = BuildDataFormParameters(); this.DataBind(); } } catch (Exception ex) { this.Controls.Add(new LiteralControl("ERROR: " + ex.Message)); } } private SPDataSource GetDataSource(string dataSourceId, string webUrl, SPList list, string query, uint maximumRow) { SPDataSource dataSource = new SPDataSource(); dataSource.UseInternalName = true; dataSource.ID = dataSourceId; dataSource.DataSourceMode = SPDataSourceMode.List; dataSource.List = list; dataSource.SelectCommand = "<View>" + query + "</View>"; Parameter listIdParam = new Parameter("ListID"); listIdParam.DefaultValue = list.ID.ToString("B").ToUpper(); Parameter maximumRowsParam = new Parameter("MaximumRows"); maximumRowsParam.DefaultValue = maximumRow.ToString(); QueryStringParameter rootFolderParam = new QueryStringParameter("RootFolder", "RootFolder"); dataSource.SelectParameters.Add(listIdParam); dataSource.SelectParameters.Add(maximumRowsParam); dataSource.SelectParameters.Add(rootFolderParam); dataSource.UpdateParameters.Add(listIdParam); dataSource.DeleteParameters.Add(listIdParam); dataSource.InsertParameters.Add(listIdParam); return dataSource; } private string BuildDataFormParameters() { StringBuilder parameters = new StringBuilder("<ParameterBindings><ParameterBinding Name=\"dvt_apos\" Location=\"Postback;Connection\"/><ParameterBinding Name=\"UserID\" Location=\"CAMLVariable\" DefaultValue=\"CurrentUserName\"/><ParameterBinding Name=\"Today\" Location=\"CAMLVariable\" DefaultValue=\"CurrentDate\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_firstrow\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_nextpagedata\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_adhocmode\" Location=\"Postback;Connection\"/>"); parameters.Append("<ParameterBinding Name=\"dvt_adhocfiltermode\" Location=\"Postback;Connection\"/>"); parameters.Append("</ParameterBindings>"); return parameters.ToString(); } } } We use the OnInit method to set the list name and the XslLink property of the Data Form Web Part.
We do not have the link to the XSL in our solution, so we will add the XSL now: Add a Module from the Visual Studio add menu: Rename Sample.txt in the module to footer.xsl and then copy in the XSL from SharePoint Designer. Look at elements.xml to see where footer.xsl is being provisioned to, which is Assests/footer.xsl; make sure the web part's XSL link points to this URL: Okay, we are good to go! Let's check our features and package: DataFormWebPart should be scoped to site and have the web part: The FooterList feature should be scoped to web and have the Assests module (Okay, I see, a spelling issue, but it won't affect this demo). If everything is correct we should be able to click a couple of sub site feature activations and have our list and web part in a sub site. (In fact this solution can be activated anywhere.) Here is the list created at SubSite1 with new data in it. Next let's add the web part on a test page and see if it works as expected: It does! So we now have a repeatable way to use a WSP to move a Data Form Web Part around our sites! Here is a link to the code: DataFormWebPart Solution

    Read the article

  • Receiving messages part by part from serial port using c#

    - by karthik
    I am using the code below to receive messages from the serial port using C#:

    void comPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        if (comPort.IsOpen == true)
        {
            string msg = comPort.ReadExisting();
            MessageBox.Show(msg.Trim());
        }
    }

    The problem is that I am getting the messages part by part. For example, if you send "Hello, How are you", I receive it word by word. I want it in a single stretch. How can I do that? Also, is it possible to retrieve the port name from which the application is sending and receiving messages?
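
    This behavior is expected: DataReceived fires whenever some bytes arrive, not once per logical message. A common fix, sketched below under the assumption that each message ends with a newline (the terminator and the ProcessMessage callback are placeholders of mine, not part of the original question), is to accumulate fragments in a buffer and only act on complete messages:

        using System.IO.Ports;
        using System.Text;

        // Class members: accumulate serial fragments, emit one callback per complete line.
        StringBuilder buffer = new StringBuilder();

        void comPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            SerialPort port = (SerialPort)sender;  // the port that raised the event
            buffer.Append(port.ReadExisting());    // append whatever has arrived so far

            string data = buffer.ToString();
            int newline;
            while ((newline = data.IndexOf('\n')) >= 0)
            {
                string message = data.Substring(0, newline).Trim();
                data = data.Substring(newline + 1);
                ProcessMessage(message);           // placeholder: handle one full message
            }
            buffer.Length = 0;                     // keep only the incomplete tail
            buffer.Append(data);
        }

        void ProcessMessage(string message)
        {
            // e.g. marshal to the UI thread before displaying it
            System.Diagnostics.Debug.WriteLine(message);
        }

    As for the second question: inside the handler, ((SerialPort)sender).PortName gives the name of the port that raised the event.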

    Read the article

  • Developing custom MBeans to manage J2EE Applications (Part III)

    - by philippe Le Mouel
    This is the third and final part in a series of blogs, that demonstrate how to add management capability to your own application using JMX MBeans. In Part I we saw: How to implement a custom MBean to manage configuration associated with an application. How to package the resulting code and configuration as part of the application's ear file. How to register MBeans upon application startup, and unregistered them upon application stop (or undeployment). How to use generic JMX clients such as JConsole to browse and edit our application's MBean. In Part II we saw: How to add localized descriptions to our MBean, MBean attributes, MBean operations and MBean operation parameters. How to specify meaningful name to our MBean operation parameters. We also touched on future enhancements that will simplify how we can implement localized MBeans. In this third and last part, we will re-write our MBean to simplify how we added localized descriptions. To do so we will take advantage of the functionality we already described in part II and that is now part of WebLogic 10.3.3.0. We will show how to take advantage of WebLogic's localization support to localize our MBeans based on the client's Locale independently of the server's Locale. Each client will see MBean descriptions localized based on his/her own Locale. We will show how to achieve this using JConsole, and also using a sample programmatic JMX Java client. The complete code sample and associated build files for part III are available as a zip file. The code has been tested against WebLogic Server 10.3.3.0 and JDK6. To build and deploy our sample application, please follow the instruction provided in Part I, as they also apply to part III's code and associated zip file. Providing custom descriptions take II In part II we localized our MBean descriptions by extending the StandardMBean class and overriding its many getDescription methods. WebLogic 10.3.3.0 similarly to JDK 7 can automatically localize MBean descriptions as long as those are specified according to the following conventions: Descriptions resource bundle keys are named according to: MBean description: <MBeanInterfaceClass>.mbean MBean attribute description: <MBeanInterfaceClass>.attribute.<AttributeName> MBean operation description: <MBeanInterfaceClass>.operation.<OperationName> MBean operation parameter description: <MBeanInterfaceClass>.operation.<OperationName>.<ParameterName> MBean constructor description: <MBeanInterfaceClass>.constructor.<ConstructorName> MBean constructor parameter description: <MBeanInterfaceClass>.constructor.<ConstructorName>.<ParameterName> We also purposely named our resource bundle class MBeanDescriptions and included it as part of the same package as our MBean. 
We already followed the above conventions when creating our resource bundle in part II, and our default resource bundle class with English descriptions looks like: package blog.wls.jmx.appmbean; import java.util.ListResourceBundle; public class MBeanDescriptions extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] { {"PropertyConfigMXBean.mbean", "MBean used to manage persistent application properties"}, {"PropertyConfigMXBean.attribute.Properties", "Properties associated with the running application"}, {"PropertyConfigMXBean.operation.setProperty", "Create a new property, or change the value of an existing property"}, {"PropertyConfigMXBean.operation.setProperty.key", "Name that identify the property to set."}, {"PropertyConfigMXBean.operation.setProperty.value", "Value for the property being set"}, {"PropertyConfigMXBean.operation.getProperty", "Get the value for an existing property"}, {"PropertyConfigMXBean.operation.getProperty.key", "Name that identify the property to be retrieved"} }; } } We have now also added a resource bundle with French localized descriptions: package blog.wls.jmx.appmbean; import java.util.ListResourceBundle; public class MBeanDescriptions_fr extends ListResourceBundle { protected Object[][] getContents() { return new Object[][] { {"PropertyConfigMXBean.mbean", "Manage proprietes sauvegarde dans un fichier disque."}, {"PropertyConfigMXBean.attribute.Properties", "Proprietes associee avec l'application en cour d'execution"}, {"PropertyConfigMXBean.operation.setProperty", "Construit une nouvelle proprietee, ou change la valeur d'une proprietee existante."}, {"PropertyConfigMXBean.operation.setProperty.key", "Nom de la propriete dont la valeur est change."}, {"PropertyConfigMXBean.operation.setProperty.value", "Nouvelle valeur"}, {"PropertyConfigMXBean.operation.getProperty", "Retourne la valeur d'une propriete existante."}, {"PropertyConfigMXBean.operation.getProperty.key", "Nom de la propriete a retrouver."} }; } } So now we can just remove the many getDescriptions methods from our MBean code, and have a much cleaner: package blog.wls.jmx.appmbean; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.File; import java.net.URL; import java.util.Map; import java.util.HashMap; import java.util.Properties; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.MBeanRegistration; import javax.management.StandardMBean; import javax.management.MBeanOperationInfo; import javax.management.MBeanParameterInfo; public class PropertyConfig extends StandardMBean implements PropertyConfigMXBean, MBeanRegistration { private String relativePath_ = null; private Properties props_ = null; private File resource_ = null; private static Map operationsParamNames_ = null; static { operationsParamNames_ = new HashMap(); operationsParamNames_.put("setProperty", new String[] {"key", "value"}); operationsParamNames_.put("getProperty", new String[] {"key"}); } public PropertyConfig(String relativePath) throws Exception { super(PropertyConfigMXBean.class , true); props_ = new Properties(); relativePath_ = relativePath; } public String setProperty(String key, String value) throws IOException { String oldValue = null; if (value == null) { oldValue = String.class.cast(props_.remove(key)); } else { oldValue = String.class.cast(props_.setProperty(key, value)); } save(); return oldValue; } public String 
getProperty(String key) { return props_.getProperty(key); } public Map getProperties() { return (Map) props_; } private void load() throws IOException { InputStream is = new FileInputStream(resource_); try { props_.load(is); } finally { is.close(); } } private void save() throws IOException { OutputStream os = new FileOutputStream(resource_); try { props_.store(os, null); } finally { os.close(); } } public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception { // MBean must be registered from an application thread // to have access to the application ClassLoader ClassLoader cl = Thread.currentThread().getContextClassLoader(); URL resourceUrl = cl.getResource(relativePath_); resource_ = new File(resourceUrl.toURI()); load(); return name; } public void postRegister(Boolean registrationDone) { } public void preDeregister() throws Exception {} public void postDeregister() {} protected String getParameterName(MBeanOperationInfo op, MBeanParameterInfo param, int sequence) { return operationsParamNames_.get(op.getName())[sequence]; } } The only reason we are still extending the StandardMBean class is to override the default names for our operation parameters. If this isn't a concern, then one could just write the following code: package blog.wls.jmx.appmbean; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.File; import java.net.URL; import java.util.Map; import java.util.Properties; import javax.management.MBeanServer; import javax.management.ObjectName; import javax.management.MBeanRegistration; import javax.management.StandardMBean; import javax.management.MBeanOperationInfo; import javax.management.MBeanParameterInfo; public class PropertyConfig implements PropertyConfigMXBean, MBeanRegistration { private String relativePath_ = null; private Properties props_ = null; private File resource_ = null; public PropertyConfig(String relativePath) throws Exception { props_ = new Properties(); relativePath_ = relativePath; } public String setProperty(String key, String value) throws IOException { String oldValue = null; if (value == null) { oldValue = String.class.cast(props_.remove(key)); } else { oldValue = String.class.cast(props_.setProperty(key, value)); } save(); return oldValue; } public String getProperty(String key) { return props_.getProperty(key); } public Map getProperties() { return (Map) props_; } private void load() throws IOException { InputStream is = new FileInputStream(resource_); try { props_.load(is); } finally { is.close(); } } private void save() throws IOException { OutputStream os = new FileOutputStream(resource_); try { props_.store(os, null); } finally { os.close(); } } public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception { // MBean must be registered from an application thread // to have access to the application ClassLoader ClassLoader cl = Thread.currentThread().getContextClassLoader(); URL resourceUrl = cl.getResource(relativePath_); resource_ = new File(resourceUrl.toURI()); load(); return name; } public void postRegister(Boolean registrationDone) { } public void preDeregister() throws Exception {} public void postDeregister() {} } Note: The above would also require changing the operation parameter names in the resource bundle classes.
For instance: PropertyConfigMXBean.operation.setProperty.key would become: PropertyConfigMXBean.operation.setProperty.p0 Client based localization When accessing our MBean using JConsole started with the following command line: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar: $WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -debug We see that our MBean descriptions are localized according to the WebLogic server's Locale. English in this case: Note: Consult Part I for information on how to use JConsole to browse/edit our MBean. Now if we specify the client's Locale as part of the JConsole command line as follows: jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar: $WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -J-Dweblogic.management.remote.locale=fr-FR -debug We see that our MBean descriptions are now localized according to the specified client's Locale. French in this case: We use the weblogic.management.remote.locale system property to specify the Locale that should be associated with the client's JMX connections. The value is composed of the client's language code and its country code separated by the - character. The country code is not required, and can be omitted. For instance: -Dweblogic.management.remote.locale=fr We can also specify the client's Locale using a programmatic client as demonstrated below: package blog.wls.jmx.appmbean.client; import javax.management.MBeanServerConnection; import javax.management.ObjectName; import javax.management.MBeanInfo; import javax.management.remote.JMXConnector; import javax.management.remote.JMXServiceURL; import javax.management.remote.JMXConnectorFactory; import java.util.Hashtable; import java.util.Set; import java.util.Locale; public class JMXClient { public static void main(String[] args) throws Exception { JMXConnector jmxCon = null; try { JMXServiceURL serviceUrl = new JMXServiceURL( "service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime"); System.out.println("Connecting to: " + serviceUrl); // properties associated with the connection Hashtable env = new Hashtable(); env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote"); String[] credentials = new String[2]; credentials[0] = "weblogic"; credentials[1] = "weblogic"; env.put(JMXConnector.CREDENTIALS, credentials); // specifies the client's Locale env.put("weblogic.management.remote.locale", Locale.FRENCH); jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env); jmxCon.connect(); MBeanServerConnection con = jmxCon.getMBeanServerConnection(); Set<ObjectName> mbeans = con.queryNames( new ObjectName( "blog.wls.jmx.appmbean:name=myAppProperties,type=PropertyConfig,*"), null); for (ObjectName mbeanName : mbeans) { System.out.println("\n\nMBEAN: " + mbeanName); MBeanInfo minfo = con.getMBeanInfo(mbeanName); System.out.println("MBean Description: "+minfo.getDescription()); System.out.println("\n"); } } finally { // release the connection if (jmxCon != null) jmxCon.close(); } } } The above client code is part of the zip file associated with this blog, and can be run using the provided client.sh script.
The resulting output is shown below: $ ./client.sh Connecting to: service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime MBEAN: blog.wls.jmx.appmbean:type=PropertyConfig,name=myAppProperties MBean Description: Manage proprietes sauvegarde dans un fichier disque. $ Miscellaneous Using Description annotation to specify MBean descriptions Earlier we have seen how to name our MBean descriptions resource keys, so that WebLogic 10.3.3.0 automatically uses them to localize our MBean. In some cases we might want to implicitly specify the resource key, and resource bundle. For instance when operations are overloaded, and the operation name is no longer sufficient to uniquely identify a single operation. In this case we can use the Description annotation provided by WebLogic as follow: import weblogic.management.utils.Description; @Description(resourceKey="myapp.resources.TestMXBean.description", resourceBundleBaseName="myapp.resources.MBeanResources") public interface TestMXBean { @Description(resourceKey="myapp.resources.TestMXBean.threshold.description", resourceBundleBaseName="myapp.resources.MBeanResources" ) public int getthreshold(); @Description(resourceKey="myapp.resources.TestMXBean.reset.description", resourceBundleBaseName="myapp.resources.MBeanResources") public int reset( @Description(resourceKey="myapp.resources.TestMXBean.reset.id.description", resourceBundleBaseName="myapp.resources.MBeanResources", displayNameKey= "myapp.resources.TestMXBean.reset.id.displayName.description") int id); } The Description annotation should be applied to the MBean interface. It can be used to specify MBean, MBean attributes, MBean operations, and MBean operation parameters descriptions as demonstrated above. Retrieving the Locale associated with a JMX operation from the MBean code There are several cases where it is necessary to retrieve the Locale associated with a JMX call from the MBean implementation. For instance this can be useful when localizing exception messages. This can be done as follow: import weblogic.management.mbeanservers.JMXContextUtil; ...... // some MBean method implementation public String setProperty(String key, String value) throws IOException { Locale callersLocale = JMXContextUtil.getLocale(); // use callersLocale to localize Exception messages or // potentially some return values such a Date .... } Conclusion With this last part we conclude our three part series on how to write MBeans to manage J2EE applications. We are far from having exhausted this particular topic, but we have gone a long way and are now capable to take advantage of the latest functionality provided by WebLogic's application server to write user friendly MBeans.

    Read the article

  • Parallel Programming in .NET Framework 4 – Part II

    - by anobre
    Hello everyone, how is it going? This post is a continuation of the series started in this other post, about parallel programming. My goal today is to present PLINQ, something you will be able to use immediately in your projects. Parallel LINQ (PLINQ) PLINQ is nothing more than an implementation of parallel programming on top of our famous LINQ, through extension methods. LINQ was released with version 3.0 of the .NET platform, presenting a much easier and safer way to manipulate IEnumerable or IEnumerable<T> collections. What we will see today is the "variation" of LINQ to Objects, which targets collections of objects in memory. The main difference between "normal" LINQ to Objects and the parallel one is that with the second option the processing is performed trying to use all the available resources, obtaining a significant performance improvement. CAUTION: Not all operations become faster when using parallelism. Be sure to read the "Performance" section below. ParallelEnumerable Everything we need for this post is organized in the ParallelEnumerable class. This class contains the methods we will use in this post, and many more: AsParallel AsSequential AsOrdered AsUnordered WithCancellation WithDegreeOfParallelism WithMergeOptions WithExecutionMode ForAll … The most basic example of executing PLINQ code is using the AsParallel method, as in this example: var source = Enumerable.Range(1, 10000); var evenNums = from num in source.AsParallel() where Compute(num) > 0 select num; Something just as interesting as this convenience is that PLINQ does not always execute in parallel. Depending on the situation, and on the analysis of some aspects of the execution scenario, it may be more appropriate to execute the code sequentially, and PLINQ natively makes this choice by itself. It is possible to force execution to always use parallelism, if necessary; use the WithExecutionMode method in your PLINQ code. A very simple test where we can visualize the difference is demonstrated below: static void Main(string[] args) { IEnumerable<int> numbers = Enumerable.Range(1, 1000); IEnumerable<int> results = from n in numbers.AsParallel() where IsDivisibleByFive(n) select n; Stopwatch sw = Stopwatch.StartNew(); IList<int> resultsList = results.ToList(); Console.WriteLine("{0} itens", resultsList.Count()); sw.Stop(); Console.WriteLine("Tempo de execução: {0} ms", sw.ElapsedMilliseconds); Console.WriteLine("Fim..."); Console.ReadKey(true); } static bool IsDivisibleByFive(int i) { Thread.SpinWait(2000000); return i % 5 == 0; }   Just remove the AsParallel from the LINQ statement and you will get a practical sense of the performance difference. 1. Statement using AsParallel 2. Statement without using parallelism Performance Despite all the benefits, we cannot use PLINQ without knowing all of its details. Remember to ask the basic questions: Do I have enough work to justify using parallelism? Even with PLINQ's overhead, will we see any benefit? For this reason, visit this link and get to know all the aspects before using the available features. Conclusion Using parallelism is great: it improves performance and makes use of the investment made in hardware, all of this with no productivity cost. However, we cannot take advantage of any technology without knowing it in depth first. So, make good use of it, but don't forget to keep your knowledge ahead of your excitement. Cheers.
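
    As a small complement to the text above, which mentions WithExecutionMode without showing it: forcing parallel execution (here reusing the numbers collection and IsDivisibleByFive method from the sample above) would look like this:

        // Force PLINQ to parallelize even when its own analysis would pick sequential execution.
        IEnumerable<int> forced = from n in numbers.AsParallel()
                                                   .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                                  where IsDivisibleByFive(n)
                                  select n;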

    Read the article

  • Sorting a ListView in WPF – Part II

    - by marianor
    Some time ago I wrote a post about how to sort a ListView by clicking on the header of a column. The problem with that solution was that you needed to implement it each time, and you had to define an explicit header for each column. As a more general solution I use attached properties to extend the ListView and GridViewColumn. The first attached property is tied to the ListView itself, and it indicates that the control supports sorting. This property attaches to or detaches from the Click event of the...(read more)
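
    The excerpt is cut off before the code, but as a rough sketch of the approach it describes (the ListViewSorter class name and the sort-by-header-text shortcut below are my own illustrative assumptions, not the author's actual implementation), an attached property that hooks the column-header Click event might look like this:

        using System.ComponentModel;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Data;

        // Illustrative sketch: an attached property that turns on header-click sorting.
        public static class ListViewSorter
        {
            public static readonly DependencyProperty EnableSortingProperty =
                DependencyProperty.RegisterAttached("EnableSorting", typeof(bool),
                    typeof(ListViewSorter), new UIPropertyMetadata(false, OnEnableSortingChanged));

            public static bool GetEnableSorting(DependencyObject obj)
            {
                return (bool)obj.GetValue(EnableSortingProperty);
            }

            public static void SetEnableSorting(DependencyObject obj, bool value)
            {
                obj.SetValue(EnableSortingProperty, value);
            }

            private static void OnEnableSortingChanged(DependencyObject d,
                DependencyPropertyChangedEventArgs e)
            {
                ListView listView = d as ListView;
                if (listView == null) return;

                // Attach or detach the header click handler as the property toggles.
                if ((bool)e.NewValue)
                    listView.AddHandler(GridViewColumnHeader.ClickEvent,
                        new RoutedEventHandler(OnHeaderClick));
                else
                    listView.RemoveHandler(GridViewColumnHeader.ClickEvent,
                        new RoutedEventHandler(OnHeaderClick));
            }

            private static void OnHeaderClick(object sender, RoutedEventArgs e)
            {
                GridViewColumnHeader header = e.OriginalSource as GridViewColumnHeader;
                ListView listView = (ListView)sender;
                if (header == null || header.Column == null) return;

                // In a full implementation the property name would come from a second
                // attached property on GridViewColumn; for brevity we use the header text.
                string property = header.Column.Header as string;
                if (string.IsNullOrEmpty(property)) return;

                ICollectionView view = CollectionViewSource.GetDefaultView(listView.ItemsSource);
                view.SortDescriptions.Clear();
                view.SortDescriptions.Add(new SortDescription(property, ListSortDirection.Ascending));
            }
        }

    In XAML this would then be enabled with something like <ListView local:ListViewSorter.EnableSorting="True" ...>.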

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. Now that behavior means that if you choose a degree of parallelism of 4 to run such a query with, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, this is the real reason you will see 2 times the processes of the DOP; the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (the sort stuff). The important piece here is that the PWJ plan typically will be faster and, in terms of PX process count / resources, typically cheaper. You may want to look out for those plans and try to get those to appear a lot... CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here: ODTUG in Washington DC, June 27 - July 1 On the Optimizer blog At OpenWorld in San Francisco, September 19 - 23 Happy joining and hope to see you all at ODTUG and OOW...

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (II)

    - by ccasares
    Retrieving uploaded attachments -UCM- As stated in my previous blog entry, Oracle BPM 11g 11.1.1.5.1 (aka PS4FP) introduced a new cool feature whereby you can use Oracle WebCenter Content (previously known as Oracle UCM) as the repository for human task attached documents. For more information about how to use or enable this feature, have a look here. The attachment scope (either TASK or PROCESS) also applies to UCM-attachments. But even with this other feature, one question might arise when using UCM attachments: how can I get them from within the process? The first answer would be to use the same getTaskAttachmentContents() XPath function already explained in my previous blog entry. In fact, that's the way it should be. But in Oracle BPM 11g 11.1.1.5.1 (PS4FP) and 11.1.1.6.0 (PS5) there's a bug that prevents you from doing that. If you invoke such a function against a UCM-attachment, you'll get a null content response (bug#13907552), even if the attachment was correctly uploaded. Until this bug gets fixed, I will show next a workaround that lets me retrieve the UCM-attached documents from within a BPM process. Besides, the sample will show how to interact with the WCC API from within a BPM process. Aside note: I suggest you read my previous blog entry about Human Task attachments, where I briefly describe some concepts that are used next, such as the execData/attachment[] structure. Sample Process I will be using the following sample process: A dummy UserTask using "HumanTask2" Human Task, followed by an Embedded Subprocess that will retrieve the attachments' payload. In this case, and here's the key point of the sample, we will retrieve such payload using the WebCenter Content WebService API (IDC): and once retrieved, we will write each of them back to a file on the server using a File Adapter service: In detail: We will use the same attachmentCollection XSD structure and same BusinessObject definition as in the previous blog entry. However we create a separate variable, named attachmentUCM, based on such BusinessObject. We will still need to keep a copy of the HumanTask output's execData structure. Therefore we need to create a new variable of type TaskExecutionData (a different one than the one used for non-UCM attachments): As in the non-UCM attachments flow, in the output tab of the UserTask mapping, we'll keep a copy of the execData structure: Now we get into the embedded subprocess that will retrieve the attachments' payload. First, using an XSLT transformation, we feed the attachmentUCM variable with the following information: The name of each attachment (from the execData/attachment/name element) The WebCenter Content ID of the uploaded attachment. This info is stored in the execData/attachment/URI element with the format ecm://<id>. As we just want the numeric <id>, we need to get rid of the protocol prefix ("ecm://"). We do so with some XPath functions as detailed below: with these two functions being invoked, respectively: We, again, set the target payload element with an empty string, to get the <payload></payload> tag created. The complete XSLT transformation is shown below. Remember that we're using the XSLT for-each node to create as many target structures as necessary. Once we have fed the attachmentsUCM structure, so that it now contains the name of each of the attachments along with each WCC unique id (dID), it is time to iterate through it and get the payload.
Therefore we will use a new embedded subprocess of type MultiInstance, that will iterate over the attachmentsUCM/attachment[] element: In each iteration we will use a Service activity that invokes WCC API through a WebService. Follow these steps to create and configure the Partner Link needed: Login to WCC console with an administrator user (i.e. weblogic). Go to Administration menu and click on "Soap Wsdls" link. We will use the GetFile service to retrieve a file based on its dID. Thus we'll need such service WSDL definition that can be downloaded by clicking the GetFile link. Save the WSDL file in your JDev project folder. In the BPM project's composite view, drag & drop a WebService adapter to create a new External Reference, based on the just added GetFile.wsdl. Name it UCM_GetFile. WCC services are secured through basic HTTP authentication. Therefore we need to enable the just created reference for that: Right-click the reference and click on Configure WS Policies. Under the Security section, click "+" to add the "oracle/wss_username_token_client_policy" policy The last step is to set the credentials for the security policy. For the sample we will use the admin user for WCC (weblogic/welcome1). Open the composite.xml file and select the Source view. Search for the UCM_GetFile entry and add the following highlighted elements into it:   <reference name="UCM_GetFile" ui:wsdlLocation="GetFile.wsdl">     <interface.wsdl interface="http://www.stellent.com/GetFile/#wsdl.interface(GetFileSoap)"/>     <binding.ws port="http://www.stellent.com/GetFile/#wsdl.endpoint(GetFile/GetFileSoap)"                 location="GetFile.wsdl" soapVersion="1.1">       <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"                            orawsp:category="security" orawsp:status="enabled"/>       <property name="weblogic.wsee.wsat.transaction.flowOption"                 type="xs:string" many="false">WSDLDriven</property>       <property name="oracle.webservices.auth.username"                 type="xs:string">weblogic</property>       <property name="oracle.webservices.auth.password"                 type="xs:string">welcome1</property>     </binding.ws>   </reference> Now the new external reference is ready: Once the reference has just been created, we should be able now to use it from our BPM process. However we find here a problem. The WCC GetFile service operation that we will use, GetFileByID, accepts as input a structure similar to this one, where all element tags are optional: <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">    <get:dID>?</get:dID>   <get:rendition>?</get:rendition>   <get:extraProps>      <get:property>         <get:name>?</get:name>         <get:value>?</get:value>      </get:property>   </get:extraProps></get:GetFileByID> and we need to fill up just the <get:dID> tag element. Due to some kind of restriction or bug on WCC, the rest of the tag elements must NOT be sent, not even empty (i.e.: <get:rendition></get:rendition> or <get:rendition/>). 
    A sample request that queries just by dID must therefore have the following format:

      <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
        <get:dID>12345</get:dID>
      </get:GetFileByID>

    The issue is that the simple mapping in BPM does create empty tags, producing a result such as:

      <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
        <get:dID>12345</get:dID>
        <get:rendition/>
        <get:extraProps/>
      </get:GetFileByID>

    Although this structure is perfectly valid, it is not accepted by WCC, so we need to bypass the problem. The workaround we use (many others are available) is to add a Mediator component between the BPM process and the service that simply copies the input structure from BPM while getting rid of the empty tags.

    Follow these steps to configure the Mediator:

    Drag & drop a new Mediator component into the composite. Uncheck the creation of the SOAP bindings, use the Interface Definition from WSDL template and select the existing GetFile.wsdl.

    Double-click the Mediator to edit it. Add a static routing rule to the GetFileByID operation, of type Service, and select the References/UCM_GetFile/GetFileByID target service.

    Create the request and reply XSLT mappers. Make sure you map only the dID element in the request (a minimal sketch of such a mapping is included at the end of this entry), and do an auto-map for the whole response.

    Finally, we can add and configure the Service activity in the BPM process. Drag & drop it into the embedded subprocess, select the NormalizedGetFile service and the getFileByID operation, and map both the input and the output.

    Once this embedded subprocess ends, we have all attachments (name + payload) in the attachmentsUCM variable, which is the main goal of this sample. But to verify that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess that concurrently iterates through each attachmentsUCM/attachment[] element. On each iteration we use a Service activity that invokes a File Adapter write service, where we have two important parameters to set. First, the payload itself: the File Adapter expects binary data in base64 format (string), so we have to map it using XPath (simple mapping does not recognize a String as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box. Again, note how we make use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    The final blog entry about attachments will cover how to inject documents into Human Tasks from the BPM process and how to share attachments between different User Tasks. It will come soon. Once I finish with all posts on this matter, I will upload the whole sample project to java.net.
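    For reference, here is a minimal sketch of the Mediator request transformation described above. It assumes the inbound message is the same GetFileByID structure from the GetFile WSDL; it simply recreates the element with only the dID inside, so the optional tags are never generated:

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:get="http://www.stellent.com/GetFile/">
          <xsl:template match="/">
            <get:GetFileByID>
              <!-- map only the dID; rendition and extraProps are deliberately omitted -->
              <get:dID>
                <xsl:value-of select="/get:GetFileByID/get:dID"/>
              </get:dID>
            </get:GetFileByID>
          </xsl:template>
        </xsl:stylesheet>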

    Read the article

  • Management Reporter Installation – Lessons Learned Part II - Dynamics GP

    - by Ryan McBee
    After feeling pretty good about my deployment skills with Management Reporter for Dynamics GP a few weeks ago, I ran into two additional lessons learned that I wanted to share.

    First, on another new deployment, I got the error shown below, which says “An error occurred while creating the database. View the installation log for additional information.” This problem initially pointed me to KB 2406948, which did not provide a resolution. After several hours of troubleshooting, I found there is an issue if the default database locations in SQL Server are set to the root of a drive. You will want to set the default to something like the following to get it installed: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA. My default database locations for the data and log files were indeed sitting on the H:\ and I:\ drives. To change this property in your SQL Server instance, open SQL Server Management Studio, right-click on the server, choose Properties and then Database Settings. When I initially got the error, I briefly considered creating the ManagementReporter database by hand, but experience tells me that would have created more headaches down the road.

    The second problem I ran into with this particular deployment of Management Reporter happened when I started the FRx conversion utility. The error reads “The ‘Microsoft.ACE.OLEDB.12.0’ provider is not registered on the local machine.” I had a suspicion that this error was related to the fact that FRx uses outdated technology, and I happened to be on a new install of Server 2008 R2. A knowledge base search quickly pointed me to KB 2102486. The resolution for this Management Reporter issue was to install the Microsoft Access Database Engine Redistributable from the link below. http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en

    Read the article

  • Orchestrating the Virtual Enterprise, Part II

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's CSO & Vice President, SCM Product Strategy

    Almost everyone has ordered from Amazon.com at one time or another. Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver an order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise operating through thousands of trading partners, Amazon requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive, custom solution to tackle this problem, because at the time no product solution was available. Now, other companies that want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).

    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it can improve quality, speed, flexibility, and consistency without requiring an organ transplant of its highly complex legacy systems.

    Many retailers face the challenge of dealing with brick-and-mortar, Web, and reseller channels. These all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick-and-mortar retail operation added an online business, it turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions

    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business, as Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO for the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment

    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how do you manage complex supply in a flexible way when multiple trading parties are involved? How do you manage the supply to suppliers? How do you manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner

    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application that embraces the best of what's required in this area. And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Jon Chorley
    Chief Sustainability Officer & Vice President, SCM Product Strategy
    Oracle Corporation

    Read the article

  • Server Migration Checklist II

    - by merrillaldrich
    Easy Breezy Login Audit for your Ol’ 2000 Server

    In the last post on this topic I put up the preparatory steps I’ve been using for server migrations. Here I am posting some code that has worked well for us to trace who/what is connecting to our older SQL Server 2000 machines. It’s a simple audit of login events, tracing the login name, host name, database, and last login time for connections to the server, which gave us valuable insight into who was really using the machines and which databases might...(read more)

    Read the article
