Search Results

Search found 4833 results on 194 pages for 'component'.


  • Windsor + NHibernate + ISession + MVC

    - by dbones
    Hi, I am trying to get Windsor to give me an ISession instance for each request, which should be injected into all the repositories. Here is my container setup:

    container.AddFacility<FactorySupportFacility>().Register(
        Component.For<ISessionFactory>().Instance(NHibernateHelper.GetSessionFactory()).LifeStyle.Singleton,
        Component.For<ISession>().LifeStyle.Transient
            .UsingFactoryMethod(kernel => kernel.Resolve<ISessionFactory>().OpenSession())
    );
    // add to the container
    container.Register(
        Component.For<IActionInvoker>().ImplementedBy<WindsorActionInvoker>(),
        Component.For(typeof(IRepository<>)).ImplementedBy(typeof(NHibernateRepository<>))
    );

    It is based on a StructureMap post, http://www.kevinwilliampang.com/2010/04/06/setting-up-asp-net-mvc-with-fluent-nhibernate-and-structuremap/; however, when this is run, a new session is created for every object it is injected into. What am I missing? Thanks in advance. (FYI, the NHibernateHelper sets up the configuration for NHibernate.)
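    One possible direction, sketched here only as a suggestion: it assumes the Castle Windsor version in use supports the PerWebRequest lifestyle and that the PerWebRequestLifestyleModule HTTP module is registered in web.config; everything else reuses the names from the question. The idea is to register ISession per web request instead of Transient, so every repository resolved during one request shares the same session.

    // Sketch only -- continues the container setup from the question and assumes
    // the FactorySupportFacility has already been added as shown above.
    container.Register(
        // One session factory for the lifetime of the application.
        Component.For<ISessionFactory>()
            .Instance(NHibernateHelper.GetSessionFactory())
            .LifeStyle.Singleton,

        // One ISession per HTTP request; every repository built during that
        // request receives the same instance instead of a fresh transient one.
        Component.For<ISession>()
            .UsingFactoryMethod(kernel => kernel.Resolve<ISessionFactory>().OpenSession())
            .LifeStyle.PerWebRequest
    );

    The repository and action-invoker registrations would stay as they are; only the ISession lifestyle changes.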


  • How should nested components interact with model in a GUI application?

    - by fig-gnuton
    Broad design/architecture question. If you have nested components in a GUI, what's the most common way for those components to interact with data? For example, let's say a component receives a click on one of its buttons to save data. Should the save request be delegated up that component's ancestors, with the uppermost ancestor ultimately passing the request to a controller? Or are models/datastores in a GUI application typically singletons, so that a component at any level of a hierarchy can directly get/set data? Or is a controller injected as a dependency down the hierarchy of components, so that any given component is only one intermediary away from the datastore/model?
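    As a minimal sketch of the last option described above (a controller/datastore injected down the component hierarchy), and assuming C# purely for illustration, the wiring might look like the following; all type names here are hypothetical.

    // Hypothetical types -- the point is the shape of the dependencies, not the UI toolkit.
    public class OrderData { public string Notes; }

    public interface IOrderController
    {
        void Save(OrderData data);   // forwards to the model/datastore
    }

    // Top-level component: receives the controller and hands it one level down.
    public class OrderForm
    {
        private readonly OrderDetailsPanel details;

        public OrderForm(IOrderController controller)
        {
            details = new OrderDetailsPanel(controller);
        }
    }

    // Nested component: talks to the controller directly, so a button click
    // does not have to bubble up through every ancestor.
    public class OrderDetailsPanel
    {
        private readonly IOrderController controller;

        public OrderDetailsPanel(IOrderController controller)
        {
            this.controller = controller;
        }

        public void OnSaveClicked(OrderData data)
        {
            controller.Save(data);
        }
    }

    Whether this beats event bubbling or a singleton datastore depends on how deep the hierarchy is and how testable each component needs to be.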


  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework. The scenario is this: I will pass in the userid and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test and illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

    First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I can also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process the name simpleBPEL (you can use a different name, but I wanted to keep the project and process names the same for this example). I will also build a Synchronous BPEL Process as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now for the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name. I have used RetrieveUser as the name. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL. Once you specify the URL you can press the Find existing WSDL's button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you intend to work on the BPEL process offline. If you do not check this, your target application must be accessible when you work on the BPEL process (which is not always convenient). Note: the perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that for these demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.
To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that. Now to use the Web Service. To call the Web Service (as it is just imported not connected to the BPEL process yet), you must add an Invoke action to your BPEL Process. To do this, select Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service an Edit Invoke edit dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway and name them appropriately for you to trace the logic. I used InvokeUser as the name in this example. To complete the node configuration you must create Variables to hold the input and output for the call. To do this clock on Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name. It uses the node name (that is why it is important to name the node before hitting this button) as a prefix. You can name the variable anything but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service. You have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet though. You need to tell the BPEL process how to pass data from the input to the invoke step and how to take the output from the service call and pass it back to the service. You need to now add an Assign node to assign the input to the Web Service. To do this select Assign activity from BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process Double clicking the node allows you to specify the name of the node. I use AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable InputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service as the user is the primary key for the object. The fields become linked (which means data from source will be copied to target). Almost there. You now need to process the output from the Web Service call to the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform as it is not just a matter of an Assign action. It is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components you drag and drop the Transform component to the appropriate place in the BPEL process. 
In this case we want to transform the output from the Web Service call so we want it after the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle Extensions to the BPEL specification. Double clicking the Transform node will allow you to name the node.  In this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example this is the InvokeUser output variable. I also named the mapper file to TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen is shows the source and target schemas for me to map across. As with the assign I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I now attach the last name to the function to indicate the concatenation of the field. By default the names will be concatenated with no space. To make the name legible I add a space between the field by clicking the function and adding a space in the call. I now have a completed mapping. I can now save the whole project as my BPEL process is now complete. As you can see the following happens: We accept input from the client (the userid for the call) in the receiveInput step. We assign that value to the input parameters for the Web Service call in the AssignUser step. We invoke the Web Service call to retrieve the data from the product in the InvokeUser step. We take the output from the InvokeUser step and concatenate the names in the TransformName step. We pass back the data in the replyOutput step. At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been setup as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data on the payload at the bottom of the Test Service screen. In my case I am returning my own userid information. On the response tab you will see the result. It works. You can verify the steps using the Audit trace facility on individual calls. As you can see this is a basic BPEL but you get the idea of importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I just showed you the basic principals.


  • Websphere federated repository for Active Directory

    - by Drakiula
    Hi, What I am trying to achieve is to have Websphere 6.1 use Active Directory users authentication. Websphere is running on Windows 2008 R2. What I've done already: Succesfully setup a federated repository for Windows Active Directory (LDAP); Create a realm definition for the federated repository previously defined; Set the realm definition as the current real definition. Stop the Websphere service. When I attempt to start the Websphere service again, it crashes with the following stacktrace: ------Start of DE processing------ = [9/3/10 2:36:14:133 PDT] , key = com.ibm.websphere.security.EntryNotFoundException com.ibm.ws.security.registry.UserRegistryImpl.createCredential 824 Exception = com.ibm.websphere.security.EntryNotFoundException Source = com.ibm.ws.security.registry.UserRegistryImpl.createCredential probeid = 824 Stack Dump = com.ibm.websphere.wim.exception.EntityNotFoundException: CWWIM4001E The 'null' entity was not found. at com.ibm.ws.wim.registry.util.UniqueIdBridge.getUniqueUserId(UniqueIdBridge.java:233) at com.ibm.ws.wim.registry.WIMUserRegistry$6.run(WIMUserRegistry.java:351) at com.ibm.ws.wim.security.authz.jacc.JACCSecurityManager.runAsSuperUser(JACCSecurityManager.java:500) at com.ibm.ws.wim.security.authz.ProfileSecurityManager.runAsSuperUser(ProfileSecurityManager.java:964) at com.ibm.ws.wim.registry.WIMUserRegistry.getUniqueUserId(WIMUserRegistry.java:340) at com.ibm.ws.security.registry.UserRegistryImpl.createCredential(UserRegistryImpl.java:750) at com.ibm.ws.security.ltpa.LTPAServerObject.authenticate(LTPAServerObject.java:776) at com.ibm.ws.security.server.lm.ltpaLoginModule.login(ltpaLoginModule.java:453) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:795) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:209) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:709) at java.security.AccessController.doPrivileged(AccessController.java:246) at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:706) at javax.security.auth.login.LoginContext.login(LoginContext.java:603) at com.ibm.ws.security.auth.JaasLoginHelper.jaas_login(JaasLoginHelper.java:376) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3513) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3306) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3086) at com.ibm.ws.security.auth.ContextManagerImpl.getServerSubjectInternal(ContextManagerImpl.java:2180) at com.ibm.ws.security.auth.ContextManagerImpl.getServerSubjectInternal(ContextManagerImpl.java:1972) at com.ibm.ws.security.auth.ContextManagerImpl.initialize(ContextManagerImpl.java:2530) at com.ibm.ws.security.auth.ContextManagerImpl.initialize(ContextManagerImpl.java:2560) at com.ibm.ws.security.core.SecurityContext.enable(SecurityContext.java:83) at com.ibm.ws.security.core.distSecurityComponentImpl.initialize(distSecurityComponentImpl.java:379) at com.ibm.ws.security.core.distSecurityComponentImpl.startSecurity(distSecurityComponentImpl.java:336) at com.ibm.ws.security.core.SecurityComponentImpl.startSecurity(SecurityComponentImpl.java:105) at 
com.ibm.ws.security.core.ServerSecurityComponentImpl.start(ServerSecurityComponentImpl.java:283) at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977) at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673) at com.ibm.ws.runtime.component.ApplicationServerImpl.start(ApplicationServerImpl.java:197) at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977) at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673) at com.ibm.ws.runtime.component.ServerImpl.start(ServerImpl.java:526) at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:192) at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:140) at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:461) at com.ibm.ws.runtime.WsServer.main(WsServer.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:183) at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:90) at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:72) at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:78) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:92) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:68) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at org.eclipse.core.launcher.Main.invokeFramework(Main.java:336) at org.eclipse.core.launcher.Main.basicRun(Main.java:280) at org.eclipse.core.launcher.Main.run(Main.java:977) at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:329) at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:92) Dump of callerThis = Object type = com.ibm.ws.security.registry.UserRegistryImpl com.ibm.ws.security.registry.UserRegistryImpl@68a068a0 Anybody maybe has a hint on this? I followed the exact steps described in the IBM Infocenter for setting this up. Thanks in advance for the help.


  • fatal error occurred while trying to sysprep the machine windows 8.1

    - by Mick
    I am trying to run sysprep on Windows 8.1. I have created an unattend.xml:

    <settings pass="oobeSystem">
      <component name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <InputLocale>en-US</InputLocale>
        <SystemLocale>en-US</SystemLocale>
        <UILanguage>en-US</UILanguage>
        <UILanguageFallback>en-US</UILanguageFallback>
        <UserLocale>en-US</UserLocale>
      </component>
      <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <OEMInformation>
          <Manufacturer>XYZ</Manufacturer>
          <SupportURL>http://www.XYZ.com</SupportURL>
        </OEMInformation>
        <OOBE>
          <HideEULAPage>true</HideEULAPage>
          <NetworkLocation>Work</NetworkLocation>
          <ProtectYourPC>1</ProtectYourPC>
        </OOBE>
        <UserAccounts>
          <AdministratorPassword>
            <Value>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</Value>
            <PlainText>false</PlainText>
          </AdministratorPassword>
          <LocalAccounts>
            <LocalAccount wcm:action="add">
              <Password>
                <Value>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</Value>
                <PlainText>false</PlainText>
              </Password>
              <Description>Admin</Description>
              <DisplayName>Admin</DisplayName>
              <Group>Administrators</Group>
              <Name>Admin</Name>
            </LocalAccount>
          </LocalAccounts>
        </UserAccounts>
        <WindowsFeatures>
          <ShowWindowsMediaPlayer>false</ShowWindowsMediaPlayer>
          <ShowMediaCenter>false</ShowMediaCenter>
        </WindowsFeatures>
        <RegisteredOrganization>XXXXXXXXXXXXXXXXXXXXX</RegisteredOrganization>
        <RegisteredOwner>XXXXXXXXXXXXXXXXXXXXXXX</RegisteredOwner>
        <TimeZone>Central European Standard Time</TimeZone>
        <ShowWindowsLive>false</ShowWindowsLive>
      </component>
    </settings>
    <settings pass="specialize">
      <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <RegisteredOrganization>XXXXXXXXXXXXXXXXXXXXXXXXX</RegisteredOrganization>
        <RegisteredOwner>XXXXXXXXXXXXXXXXXXXXXXXXXX</RegisteredOwner>
        <ProductKey>XXXXXXXXXXXXXXXXXXXXXXXXXXXXX</ProductKey>
      </component>
    </settings>

    Then I run sysprep.exe /oobe /generalize /shutdown and I see this error: fatal error occurred while trying to sysprep the machine.


  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform including now directly from Microsoft. jQuery is a light weight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading to do JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level for me by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS) as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation and we can look forward toward more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET which provides some piece of mind to some corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many projects types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery Microsoft is now following along what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers they lacked in useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos for Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side so that you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft’s embracing of jQuery is that Microsoft has started contributing to jQuery via standard mechanism set for jQuery developers: By submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library and re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team and that’s exactly what happened with the three plug-ins submitted by Microsoft with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library while functionality for the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid from manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML which you can then inject into the document or replace existing content with. Output from templates are rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimize client side code and reduce repeated code for rendering logic. Instead a single template can be used in many places for updating and adding content to existing pages. Further if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to a use a single markup template to handle all rendering of each specific HTML section/element. I’ve used a number of different client rendering template engines with jQuery in the past including jTemplates (a PHP style templating engine) and a modified version of John Resig’s MicroTemplating engine which I built into my own set of libraries because it’s such a commonly used feature in my client side applications. jQuery templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original Micro Template engine, the core basics of the templating engine create JavaScript code which means that templates can include JavaScript code. To give you a basic idea of how templates work imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document. 
To do this you can create an ‘item’ template that describes how each of the quotes is renderd as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array the template is automatically applied to each each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the tempalte that is currently executing. This makes it possible to keep context within the context of the template itself and also to pass context from a parent template to a child template which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. 
In short there are options The above shows off some of the basics, but there’s much for functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery templates will become a native component in jQuery Core 1.5, so it’s definitely worthwhile checking out the engine today and get familiar with this interface. As much as I’m stoked about templating becoming part of the jQuery core because it’s such an integral part of many applications, there are also a couple shortcomings in the current incarnation: Lack of Error Handling Currently if you embed an expression that is invalid it’s simply not rendered. There’s no error rendered into the template nor do the various  template functions throw errors which leaves finding of bugs as a runtime exercise. I would like some mechanism – optional if possible – to be able to get error info of what is failing in a template when it’s rendered. No String Output Templates are always rendered into a jQuery matched set and there’s no way that I can see to directly render to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output. Limited JavaScript Access Unlike John Resig’s original MicroTemplating Engine which was entirely based on JavaScript code generation these templates are limited to a few structured commands that can ‘execute’. There’s no code execution inside of script code which means you’re limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic. Error handling has been discussed quite a bit and it’s likely there will be some solution to that particualar issue by the time jQuery templates ship. The others are relatively minor issues but something to think about anyway. jQuery Data Link http://api.jquery.com/category/plugins/data-link/ jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object and have the object updated when the text in the textbox is changed and have the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from string to some other value typically necessary for mapping things like textbox string input to say a number property and potentially applying additional formatting and calculations. In theory this sounds great, however in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data: person = { firstName: "rick", lastName: "strahl"}; $(document).ready( function() { // provide for two-way linking of inputs $("form").link(person); // bind to non-input elements explicitly $("#objFirst").link(person, { firstName: { name: "objFirst", convertBack: function (value, source, target) { $(target).text(value); } } }); $("#objLast").link(person, { lastName: { name: "objLast", convertBack: function (value, source, target) { $(target).text(value); } } }); }); This code hooks up two-way linking between a couple of textboxes on the page and the person object. 
The first line in the .ready() handler provides mapping of object to form field with the same field names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements which is effectively a one-way bind. You specify the object and a then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the input elements bound. Unfortunately the syntax to do this is not very natural as you have to rely on the jQuery data object. To update an object’s properties and get change notification looks like this: function updateFirstName() { $(person).data("firstName", person.firstName + " (code updated)"); } This works fine in causing any linked fields to be updated. In the bindings above both the firstName input field and objFirst DOM element gets updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure you’re binding through multiple layers of abstraction now but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings you’re going to end up duplicating work loading the data using some other mechanism. There’s no easy way to re-bind data with a different object altogether since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two way binding can be very useful but it has to be easy and most importantly natural. If it’s more work to hook up a binding than writing a couple of lines to do binding/unbinding this sort of thing helps very little in most scenarios. In talking to some of the developers the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point. 
jQuery Globalization http://github.com/jquery/jquery-global Globalization in JavaScript applications often gets short shrift and part of the reason for this is that natively in JavaScript there’s little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we’re fairly spoiled by the richness of APIs provided in the framework and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application the globalization plug-in also helps with some basic tasks for non-localized applications: Dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following for example to format numbers and dates: var date = new Date(); var output = $.format(date, "MMM. dd, yy") + "\r\n" + $.format(date, "d") + "\r\n" + // 10/25/2010 $.format(1222.32213, "N2") + "\r\n" + $.format(1222.33, "c") + "\r\n"; alert(output); This becomes even more useful if you combine it with templates which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote again with a couple of formats applied: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div> <div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div> <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script> There are also parsing methods that can parse dates and numbers from strings into numbers easily: alert($.parseDate("25.10.2010")); alert($.parseInt("12.222")); // de-DE uses . for thousands separators As you can see culture specific options are taken into account when parsing. The globalization plugin provides rich support for a variety of locales: Get a list of all available cultures Query cultures for culture items (like currency symbol, separators etc.) Localized string names for all calendar related items (days of week, months) Generated off of .NET’s supported locales In short you get much of the same functionality that you already might be using in .NET on the server side. The plugin includes a huge number of locales and an Globalization.all.min.js file that contains the text defaults for each of these locales as well as small locale specific script files that define each of the locale specific settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper. 
Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting which is a vital feature of many applications. Changes for Microsoft It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery process of plug-ins and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  


  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
    The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. This folder also contains files for uninstalled, disabled Windows components. Even if you don’t have a Windows component installed, it will be present in your WinSXS folder, taking up space. Why the WinSXS Folder Gets to Big The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder. The WinSXS folder contains every operating system file. When Windows installs updates, it drops the new Windows component in the WinSXS folder and keeps the old component in the WinSXS folder. This means that every Windows Update you install increases the size of your WinSXS folder. This allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update — but it’s a feature that’s rarely used. Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack — Service Pack 1 — released in 2010. Microsoft has no intention of launching another. This means that, for more than three years, Windows update uninstallation files have been building up on Windows 7 systems and couldn’t be easily removed. Clean Up Update Files To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare — it was rolled out in a typical minor operating system update, the kind that don’t generally add new features. To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type “disk cleanup” into the Start menu, and press Enter). Click the Clean up System Files button, enable the Windows Update Cleanup option and click OK. If you’ve been using your Windows 7 system for a few years, you’ll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don’t see this feature in the Disk Cleanup window, you’re likely behind on your updates — install the latest updates from Windows Update. Windows 8 and 8.1 include built-in features that do this automatically. In fact, there’s a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you’ve installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you’d like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Usage window, just as you can on Windows 7. (To open it, tap the Windows key, type “disk cleanup” to perform a search, and click the “Free up disk space by removing unnecessary files” shortcut that appears.) Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven’t been around for more than 30 days. These commands must be run in an elevated Command Prompt — in other words, start the Command Prompt window as Administrator. 
For example, the following command will uninstall all previous versions of components without the scheduled task’s 30-day grace period: DISM.exe /online /Cleanup-Image /StartComponentCleanup The following command will remove files needed for uninstallation of service packs. You won’t be able to uninstall any currently installed service packs after running this command: DISM.exe /online /Cleanup-Image /SPSuperseded The following command will remove all old versions of every component. You won’t be able to uninstall any currently installed service packs or updates after this completes: DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase Remove Features on Demand Modern versions of Windows allow you to enable or disable Windows features on demand. You’ll find a list of these features in the Windows Features window you can access from the Control Panel. Even features you don’t have installed — that is, the features you see unchecked in this window — are stored on your hard drive in your WinSXS folder. If you choose to install them, they’ll be made available from your WinSXS folder. This means you won’t have to download anything or provide Windows installation media to install these features. However, these features take up space. While this shouldn’t matter on typical computers, users with extremely low amounts of storage or Windows server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives. For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft. To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you: DISM.exe /Online /English /Get-Features /Format:Table You’ll see a table of feature names and their states. To remove a feature from your system, you’d use the following command, replacing NAME with the name of the feature you want to remove. You can get the feature name you need from the table above. DISM.exe /Online /Disable-Feature /featurename:NAME /Remove If you run the /GetFeatures command again, you’ll now see that the feature has a status of “Disabled with Payload Removed” instead of just “Disabled.” That’s how you know it’s not taking up space on your computer’s hard drive. If you’re trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.     


  • Software Architecture: Quality Attributes

    Quality is what all software engineers should strive for when building a new system or adding new functionality. Dictionary.com ambiguously defines quality as a grade of excellence. Unfortunately, quality must be defined within the context of a situation, in that each engineer must extract quality attributes from a project's requirements. Because quality is defined by project requirements, the meaning of quality is constantly changing based on the project.

    Software architecture factors that indicate relevance and effectiveness

    The relevance and effectiveness of an architecture can vary based on the context in which it was conceived and the quality attributes it is required to meet. Typically, when evaluating an architecture for a specific system regarding relevance and effectiveness, the following questions should be asked:

    • Does the architectural concept meet the needs of the system for which it was designed?
    • Out of the competing architectures for a system, which one is the most suitable?

    Looking at the first question, a system that answers yes must meet all of its quality goals. This means that it consistently meets or exceeds performance goals for the system. In addition, the system meets all the other required system attributes based on the system's requirements. The suitability of a system is based on several factors. In order for a project to be suitable, the necessary resources must be available to complete the task. Standard project resources include money, trained staff, and time.

    Life cycle factors that affect the system and design

    The development life cycle used on a project can drastically affect how a system's architecture is created, as well as influence its design. In the waterfall software development life cycle (SDLC), each phase must be completed before the next can begin. This waterfall approach does not allow for changes in a system's architecture after that phase is completed, which can lead to major system issues when the architecture is not optimal because of missed quality attributes; this can occur, for example, when a project has poor requirements and makes misguided architectural decisions. Once the architectural phase is complete, the concepts established in that phase move on to the design phase, which is bound to use the concepts and guidelines defined in the previous phase regardless of any quality attributes missing from the project. If any issues arise during this phase regarding the selected architectural concepts, they cannot be corrected during the current project. This directly affects the design of the system, because the proper qualities required for the project were not applied when the architectural concepts were approved. When this is identified, nothing can be done to fix the architectural issues, and the system design must use the existing architectural concepts regardless of their missing quality properties, because the architectural concepts for the project cannot be altered. The decisions made in the design phase then proceed down to the implementation phase, where the actual system is coded based on the approved architectural concepts established in the architecture phase, regardless of their architectural quality.
Conversely projects using more of an iterative or agile methodology to implement a system has more flexibility to correct architectural decisions based on missing quality attributes. This is due to each phase of the SDLC is executed more than once so any issues identified in architecture of a system can be corrected in the next architectural phase. Subsequently the corresponding changes will then be adjusted in the following design phase so that when the project is completed the optimal architectural and design decision are applied to the solution. Architecture factors that indicate functional suitability Systems that have function shortcomings do not have the proper functionality based on the project’s driving quality attributes. What this means in English is that the system does not live up to what is required of it by the stakeholders as identified by the missing quality attributes and requirements. One way to prevent functional shortcomings is to test the project’s architecture, design, and implementation against the project’s driving quality attributes to ensure that none of the attributes were missed in any of the phases. Another way to ensure a system has functional suitability is to certify that all its requirements are fully articulated so that there is no chance for misconceptions or misinterpretations by all stakeholders. This will help prevent any issues regarding interpreting the system requirements during the initial architectural concept phase, design phase and implementation phase. Consider the applicability of other architectural models When considering an architectural model for a project is also important to consider other alternative architectural models to ensure that the model that is selected will meet the systems required functionality and high quality attributes. Recently I can remember talking about a project that I was working on and a coworker suggested a different architectural approach that I had never considered. This new model will allow for the same functionally that is offered by the existing model but will allow for a higher quality project because it fulfills more quality attributes. It is always important to seek alternatives prior to committing to an architectural model. Factors used to identify high-risk components A high risk component can be defined as a component that fulfills 2 or more quality attributes for a system. An example of this can be seen in a web application that utilizes a remote database. One high-risk component in this system is the TCIP component because it allows for HTTP connections to handle by a web server and as well as allows for the server to also connect to a remote database server so that it can import data into the system. This component allows for the assurance of data quality attribute and the accessibility quality attribute because the system is available on the network. If for some reason the TCIP component was to fail the web application would fail on two quality attributes accessibility and data assurance in that the web site is not accessible and data cannot be update as needed. Summary As stated previously, quality is what all software engineers should strive for when building a new system or adding new functionality. The quality of a system can be directly determined by how closely it is implemented when compared to its desired quality attributes. 
    One way to ensure a higher quality system is to enforce that all project requirements are fully articulated so that no assumptions or misunderstandings can be made by any of the stakeholders. By doing this, a system has a better chance of becoming a high quality system as measured against its quality attributes.

    Read the article

  • Oracle Utilities Application Framework future feature deprecation

    - by Paula Speranza-Hadley
    From time to time, existing functionality is replaced with alternative features to offer greater flexibility and standardization. In Oracle Utilities Application Framework V4.2.0.0.0 the following features are being announced for deprecation in the next release, or have been previously announced and are not being delivered with this version of the Oracle Utilities Application Framework:
    · No SQL Server Support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with any support for SQL Server.
    · No MPL Support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with the Multi-Purpose Listener (MPL) component of the XML Application Integration (XAI) component. Customers using the MPL should migrate to Oracle Service Bus.
    · No provided Crystal Reports/Business Objects Interface – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with a supported Crystal Reports/Business Objects Interface. This facility is now available as a downloadable customization for existing or new customers. Responsibility for maintenance and new features is now the individual customer's responsibility.
    · XAI Servlet deprecation – The XAI Servlet (xaiserver and classicxai) will be removed in the next release of the Oracle Utilities Application Framework. Customers are encouraged to migrate to the native Web Services Support as outlined in the XAI Best Practices whitepaper available from My Oracle Support (Doc Id: 942074.1).
    · ConfigLab deprecation – The ConfigLab facility will be removed in the next release of Oracle Utilities Application Framework for products it is shipped with. Customers are recommended to migrate to the Configuration Migration Assistant, which provides the same functionality and more.
    · Archiving deprecation – The inbuilt Archiving has been removed from Oracle Utilities Application Framework V4.2.0.0.0 or above, for products it is shipped with. Customers considering an Archiving solution should migrate to the Information Lifecycle Management based solution provided for your product.
    · DISTRIBUTED batch execution mode deprecation – The DISTRIBUTED execution mode used by the batch component of the Oracle Utilities Application Framework will be deprecated in the next release of the Oracle Utilities Application Framework. Customers using DISTRIBUTED mode should migrate to CLUSTERED mode as outlined in the Batch Best Practices For Oracle Utilities Application Framework Based Products whitepaper available from My Oracle Support (Doc Id: 836362.1).
    · XAI Schema Editor deprecation – The XAI Schema Editor, which is a component of the Oracle Utilities Software Development Kit, will be removed in the next release of the Oracle Utilities Application Framework. Customers should migrate their existing schemas to Business Object based schemas and use the browser based Schema Editor instead.

    Read the article

  • Client side code snippets

    - by raghu.yadav
    Client JavaScript that queues a custom server event from a key-up listener:

    function clientMethodCall(event) {
        var component = event.getSource();
        AdfCustomEvent.queue(component, "customEvent",
                             {payload: component.getSubmittedValue()}, true);
        event.cancel();
    }

    Page markup – the script lives in the metaContainer facet and the input text wires a client listener to a server listener:

    <af:document>
      <f:facet name="metaContainer">
        <af:group>
          <![CDATA[
            <script>
              function clientMethodCall(event) {
                var component = event.getSource();
                AdfCustomEvent.queue(component, "customEvent",
                                     {payload: component.getSubmittedValue()}, true);
                event.cancel();
              }
            </script>
          ]]>
        </af:group>
      </f:facet>
      <af:form>
        <af:panelFormLayout>
          <f:facet name="footer">
            <af:inputText label="Let me spy on you: Please enter your mail password">
              <af:clientListener method="clientMethodCall" type="keyUp"/>
              <af:serverListener type="customEvent" method="#{customBean.handleRequest}"/>
            </af:inputText>
          </f:facet>
        </af:panelFormLayout>
      </af:form>
    </af:document>

    Bean code:

    public void handleRequest(ClientEvent event) {
        System.out.println("---" + event.getParameters().get("payload"));
    }

    Tree – a client listener on the selection toggles node disclosure:

    <af:tree id="tree1" value="#{bindings.DepartmentsView11.treeModel}" var="node"
             selectionListener="#{bindings.DepartmentsView11.treeModel.makeCurrent}"
             rowSelection="single">
      <f:facet name="nodeStamp">
        <af:outputText value="#{node}"/>
        <af:clientListener method="expandNode" type="selection"/>
      </f:facet>
      <f:facet name="metaContainer">
        <af:group>
          <![CDATA[
            <script>
              function expandNode(event) {
                var _tree = event.getSource();
                rwKeySet = event.getAddedSet();
                var firstRowKey;
                for (rowKey in rwKeySet) {
                  firstRowKey = rowKey;
                  // we are interested in the first hit, so break out here
                  break;
                }
                if (_tree.isPathExpanded(firstRowKey)) {
                  _tree.setDisclosedRowKey(firstRowKey, false);
                }
                else {
                  _tree.setDisclosedRowKey(firstRowKey, true);
                }
              }
            </script>
          ]]>
        </af:group>
      </f:facet>
    </af:tree>

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn't just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it's created a Hash it's seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it's gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one's a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month's T-SQL Tuesday. I've frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can't apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I'm handed a pile of cards and asked to count how many cards there are in each suit, it's going to help if the cards are already ordered. Suppose I'm playing a game of Bridge; I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I've got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don't have any more Hearts in my hand. Alternatively, if the pile of cards is not sorted, I won't know how many Hearts I have until I've looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I've finished making all those piles, I can count how many there are in each. Because I don't know any of the final numbers until I've seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Flow Distinct – wasn't blocking. But this one is. They're essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values has been seen before, but this time it needs more information than the mere existence of a new set of values – it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you'll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.
    In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I'm doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I'm sure you've all read my good friend Paul White (@sql_kiwi)'s post on how the Query Optimizer chooses which type of aggregate function to apply. But let's take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it's described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I've just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that explain the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn't be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it's not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It's not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It's a physical operation. When you write T-SQL and ask for an aggregation to be done, it's a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you're telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I'm not saying that it wouldn't be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I'll make one of those, and publish it on my blog. I've done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already.
    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what's going on behind the scenes. Whether an operation is blocking or not is extremely relevant to performance, and it's not always obvious from the surface. In a future post, I'll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot's Script Components and Custom Components as examples. When I get that sorted, I'll make a Stream Aggregate component available for download.
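    To make the blocking/non-blocking distinction concrete outside of SQL Server, here is a small Java sketch (my own illustration, not code from the post; the class and method names are made up) that counts cards per suit both ways: a streaming count that emits each group as soon as the key changes, which only works on sorted input, and a hash-based count that cannot emit anything until every row has been seen.

    import java.util.*;
    import java.util.function.BiConsumer;

    public class AggregateDemo {

        // Non-blocking style: input must be sorted by the grouping key; a group
        // is released as soon as the key changes, like a Stream Aggregate.
        static void streamAggregate(List<String> sortedKeys, BiConsumer<String, Integer> emit) {
            String current = null;
            int count = 0;
            for (String key : sortedKeys) {
                if (!key.equals(current)) {
                    if (current != null) {
                        emit.accept(current, count); // release the finished group immediately
                    }
                    current = key;
                    count = 0;
                }
                count++;
            }
            if (current != null) {
                emit.accept(current, count); // release the last group
            }
        }

        // Blocking style: works on unsorted input, but nothing can be emitted
        // until every row has been seen, like a Hash Match Aggregate.
        static void hashAggregate(List<String> keys, BiConsumer<String, Integer> emit) {
            Map<String, Integer> piles = new HashMap<>();
            for (String key : keys) {
                piles.merge(key, 1, Integer::sum); // build the little piles
            }
            piles.forEach(emit); // only now can the results flow out
        }

        public static void main(String[] args) {
            List<String> sortedHand = Arrays.asList("Clubs", "Hearts", "Hearts", "Hearts", "Hearts", "Spades");
            streamAggregate(sortedHand, (suit, n) -> System.out.println("stream: " + suit + " = " + n));
            hashAggregate(sortedHand, (suit, n) -> System.out.println("hash:   " + suit + " = " + n));
        }
    }

    The trade-off is the same as in the card analogy: the streaming version starts producing output long before the input is exhausted, but only because the input is already ordered; the hash version accepts any order at the cost of being blocking.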

    Read the article

  • How-to delete a tree node using the context menu

    - by frank.nimphius
    Hierarchical trees in Oracle ADF make use of View Accessors, which means that only the top level node needs to be exposed as a View Object instance on the ADF Business Components Data Model. This also means that only the top level node has a representation in the PageDef file as a tree binding and iterator binding reference. Detail nodes are accessed through tree rule definitions that use the accessor mentioned above (or nested collections in the case of POJO or EJB business services). The tree component is configured for single node selection, which however can be declaratively changed for users to press the ctrl key and select multiple nodes. In the following, I explain how to create a context menu on the tree for users to delete the selected tree nodes. For this, the context menu item will access a managed bean, which then determines the selected node(s), the internal ADF node bindings and the rows they represent. As mentioned, the ADF Business Components Data Model only needs to expose the top level node data sources, which in this example is an instance of the Locations View Object. For the tree to work, you need to have associations defined between entities, which usually is done for you by Oracle JDeveloper if the database tables have foreign keys defined. Note: As a general hint of best practice, and to simplify your life: make sure your database schema is well defined and designed before starting your development project. Don't treat the database as something organic that grows and changes with the requirements as you proceed in your project. Business service refactoring in response to database changes is possible, but should be treated as an exception, not the rule. Good database design is a necessity – even for application developers – and nothing evil. To create the tree component, expand the Data Controls panel and drag the View Object collection to the view. From the context menu, select the tree component entry and continue with defining the tree rules that make up the hierarchical structure. As you see, when pressing the green plus icon in the Edit Tree Binding dialog, the data structure, Locations – Departments – Employees in my sample, shows without you having created a View Object instance for each of the nodes in the ADF Business Components Data Model. After you configured the tree structure in the Edit Tree Binding dialog, you press OK and the tree is created. Select the tree in the page editor and open the Structure Window (ctrl+shift+S). In the Structure window, expand the tree node to access the contextMenu facet. Use the right mouse button to insert a Popup into the facet. Repeat the same steps to insert a Menu and a Menu Item into the Popup you created. The Menu Item text should be changed to something meaningful like "Delete". Note that the custom menu item later is added to the context menu together with the default context menu options like expand and expand all. To define the action that is executed when the menu item is clicked on, you select the Action Listener property in the Property Inspector and click the arrow icon followed by the Edit menu option. Create or select a managed bean and define a method name for the action handler. Next, select the tree component and browse to its binding property in the Property Inspector. Again, use the arrow icon | Edit option to create a component binding in the same managed bean that has the action listener defined.
    The tree handle is used in the action listener code, which is shown below:

    public void onTreeNodeDelete(ActionEvent actionEvent) {
      //access the tree from the JSF component reference created
      //using the af:tree "binding" property. The "binding" property
      //creates a pair of set/get methods to access the RichTree instance
      RichTree tree = this.getTreeHandler();
      //get the list of selected row keys
      RowKeySet rks = tree.getSelectedRowKeys();
      //access the iterator to loop over selected nodes
      Iterator rksIterator = rks.iterator();
      //The CollectionModel represents the tree model and is
      //accessed from the tree "value" property
      CollectionModel model = (CollectionModel) tree.getValue();
      //The CollectionModel is a wrapper for the ADF tree binding
      //class, which is JUCtrlHierBinding
      JUCtrlHierBinding treeBinding =
                     (JUCtrlHierBinding) model.getWrappedData();
      //loop over the selected nodes and delete the rows they
      //represent
      while (rksIterator.hasNext()) {
        List nodeKey = (List) rksIterator.next();
        //find the ADF node binding using the node key
        JUCtrlHierNodeBinding node =
                          treeBinding.findNodeByKeyPath(nodeKey);
        //delete the row.
        Row rw = node.getRow();
        rw.remove();
      }
      //only refresh the tree if tree nodes have been selected
      if (rks.size() > 0) {
        AdfFacesContext adfFacesContext =
                             AdfFacesContext.getCurrentInstance();
        adfFacesContext.addPartialTarget(tree);
      }
    }

    Note: To enable multi node selection for a tree, select the tree and change the row selection setting from "single" to "multiple". Note: a fully pictured version of this post will become available at the end of the month in a PDF summary on ADF Code Corner: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • How to ensure custom serverListener events fire before action events

    - by frank.nimphius
    Using JavaScript in ADF Faces you can queue custom events defined by an af:serverListener tag. If the custom event however is queued from an af:clientListener on a command component, then the command component's action and action listener methods fire before the queued custom event. If you have a use case, for example in combination with client side integration of 3rd party technologies like HTML, Applets or similar, then you want to change the order of execution. The way to change the execution order is to invoke the command item action from the client event method that handles the custom event propagated by the af:serverListener tag. The following four steps ensure you do this successfully:
    1. Call cancel() on the event object passed to the client JavaScript function invoked by the af:clientListener tag.
    2. Call the custom event as an immediate action by setting the last argument in the custom event call to true:

    function invokeCustomEvent(evt){
      evt.cancel();
      var custEvent = new AdfCustomEvent(
                            evt.getSource(),
                            "mycustomevent",
                            {message:"Hello World"},
                            true);
      custEvent.queue();
    }

    3. When handling the custom event on the server, look up the command item, for example a button, to queue its action event. This way you simulate a user clicking the button. Use the following code:

    ActionEvent event = new ActionEvent(component);
    event.setPhaseId(PhaseId.INVOKE_APPLICATION);
    event.queue();

    The component reference needs to be replaced with the handle to the command item whose action method you want to execute.
    4. If the command component has behavior tags, like af:fileDownloadActionListener or af:setPropertyListener, defined, then these are also executed when the action event is queued. However, behavior tags, like the file download action listener, may require a full page refresh to be issued to work, in which case the custom event cannot be issued as a partial refresh. File download action tag: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_fileDownloadActionListener.html "Since file downloads must be processed with an ordinary request - not XMLHttp AJAX requests - this tag forces partialSubmit to be false on the parent component, if it supports that attribute."
    To issue a custom event as a non-partial submit, the previously shown sample code would need to be changed as shown below:

    function invokeCustomEvent(evt){
      evt.cancel();
      var custEvent = new AdfCustomEvent(
                            evt.getSource(),
                            "mycustomevent",
                            {message:"Hello World"},
                            true);
      custEvent.queue(false);
    }

    To learn more about custom events and the af:serverListener, please refer to the tag documentation: http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e12419/tagdoc/af_serverListener.html
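    Putting the server side of this together, a managed bean along the following lines could handle the custom event and then queue the command component's action. This is only a sketch to illustrate step 3: the bean and property names, the RichCommandButton binding and the "message" payload key are my own assumptions, not code from the original post.

    import javax.faces.event.ActionEvent;
    import javax.faces.event.PhaseId;
    import oracle.adf.view.rich.component.rich.nav.RichCommandButton;
    import oracle.adf.view.rich.render.ClientEvent;

    public class CustomEventBean {

        // Component binding for the command button whose action should run after
        // the custom event (wired through the button's "binding" property).
        private RichCommandButton commandButton;

        // Referenced from the af:serverListener "method" attribute.
        public void handleCustomEvent(ClientEvent clientEvent) {
            // Work with the payload sent from JavaScript first.
            Object message = clientEvent.getParameters().get("message");
            System.out.println("Custom event payload: " + message);

            // Then queue the button's action so it fires after this handler,
            // as described in step 3 above.
            ActionEvent actionEvent = new ActionEvent(commandButton);
            actionEvent.setPhaseId(PhaseId.INVOKE_APPLICATION);
            actionEvent.queue();
        }

        public void setCommandButton(RichCommandButton commandButton) {
            this.commandButton = commandButton;
        }

        public RichCommandButton getCommandButton() {
            return commandButton;
        }
    }

    On the page, the af:serverListener would then point at the handler, for example method="#{customEventBean.handleCustomEvent}", and the command button would carry binding="#{customEventBean.commandButton}".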

    Read the article

  • New TPerlRegEx Compatible with Delphi XE

    - by Jan Goyvaerts
    The new RegularExpressionsCore unit in Delphi XE is based on the PerlRegEx unit that I wrote many years ago. Since I donated full rights to a copy rather than full rights to the original, I can continue to make my version of TPerlRegEx available to people using older versions of Delphi. I did make a few changes to the code to modernize it a bit prior to donating a copy to Embarcadero. The latest TPerlRegEx includes those changes. This allows you to use the same regex-based code using the RegularExpressionsCore unit in Delphi XE, and the PerlRegEx unit in Delphi 2010 and earlier. If you're writing new code using regular expressions in Delphi 2010 or earlier, I strongly recommend you use the new version of my PerlRegEx unit. If you later migrate your code to Delphi XE, all you have to do is replace PerlRegEx with RegularExpressionsCore in the uses clause of your units. If you have code written using an older version of TPerlRegEx that you want to migrate to the latest TPerlRegEx, you'll need to take a few changes into account. The original TPerlRegEx was developed when Borland's goal was to have a component for everything on the component palette. So the old TPerlRegEx derives from TComponent, allowing you to put it on the component palette and drop it on a form. The new TPerlRegEx derives from TObject. It can only be instantiated at runtime. If you want to migrate from an older version of TPerlRegEx to the latest TPerlRegEx, start with removing any TPerlRegEx components you may have placed on forms or data modules and instantiate the objects at runtime instead. When instantiating at runtime, you no longer need to pass an owner component to the Create() constructor. Simply remove the parameter. Some of the property and method names in the original TPerlRegEx were a bit unwieldy. These have been renamed in the latest TPerlRegEx. Essentially, in all identifiers SubExpression was replaced with Group and MatchedExpression was replaced with Matched. Here is a complete list of the changed identifiers:

    Old Identifier               New Identifier
    StoreSubExpression           StoreGroups
    NamedSubExpression           NamedGroup
    MatchedExpression            MatchedText
    MatchedExpressionLength      MatchedLength
    MatchedExpressionOffset      MatchedOffset
    SubExpressionCount           GroupCount
    SubExpressions               Groups
    SubExpressionLengths         GroupLengths
    SubExpressionOffsets         GroupOffsets

    Download TPerlRegEx. Source is included under the MPL 1.1 license.

    Read the article

  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret but also flow through a username token so that in your listening WCF service you will be able to identify who sent the message. This could either be in the form of an application or a user depending on how you want to use your token. Prerequisites Before going into the walk through I want to explain a few assumptions about the scenario we are implementing but to keep the article shorter I am not going to walk through all of the steps in how to setup some of this. In the solution we have a simple console application which will represent the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF Service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article however if you want to understand more about how to configure WCF and IIS for such a scenario please refer to the following paper which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz   The Service Component To begin with let's look at the service component and how it can be configured to listen to the service bus using a shared secret but to also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service which is the Visual Studio template for a WCF service when you add a new WCF Service Application so we have a service called Service1 with its Echo method. Nothing special so far!.... The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration you can see I have created my service and I have created a local endpoint which I simply used to do a little bit of diagnostics and to check it was working, but more importantly there is the Windows Azure endpoint which is using the ws2007HttpRelayBinding (note that this should also work just the same if your using netTcpRelayBinding). The key points to note on the above picture are the service behavior called MyServiceBehaviour and the service bus endpoints behavior called MyEndpointBehaviour. We will go into these in more detail later.   The Relay Binding The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit where the transport security really relates to the interaction between the service and listening to the Azure Service Bus and the message credential is where we will use our username token like we have specified in the message/clientCrentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus and messages will not be accepted from any sender who has not been authenticated by ACS.   
    The Endpoint Behaviour In the below picture you can see the endpoint behavior, which is configured to use the shared secret client credential for accessing the service bus; for diagnostic purposes I have also included the service registry element. Hopefully, if you are familiar with using the Windows Azure Service Bus relay feature, the above is very familiar to you and this is a very common setup for this section. There is nothing specific to the username token implementation here. The Service Behaviour Now we come to the bit with most of the username token bits in it. When configuring the service behavior I have included the serviceCredentials element, set it up to use userNameAuthentication, and you can see that I have created my own custom username token validator. This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me simple auditing of access. My UsernamePassword Validator The below picture shows you the details of the username password validator class I have implemented. WCF will hand off to this class when validating the token and give me a nice way to check the token credentials against an on-premise store. You have all of the validation features available that you would have with a non-Service Bus WCF implementation, such as validating the username and password against Active Directory or ASP.NET membership features, or, as in my case above, something much simpler. The Client Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. You can see in my WCF client configuration below that I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding. You can see below I have configured security to use TransportWithMessageCredential, so we will flow the username token with the message, and also the RelayAccessToken relayClientAuthenticationType, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we need to configure the endpoint behavior like in the below picture. This is the normal configuration to use a shared secret for accessing a Service Bus endpoint. Finally below we have the code of the client in the console application which will call the service bus. You can see that we have created our proxy and then made a normal call to a WCF service, but this time we have also set the ClientCredentials to use the appropriate username and password, which will be flowed through the service bus to our service, which will validate them. Conclusion As you can see from the above walkthrough it is not too difficult to configure a service to use both a shared secret and username token at the same time. This gives you the power and protection offered by the access control service in the cloud but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented. Sample The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip

    Read the article

  • Ensuring that saved data has not been edited in a game with both offline and online components

    - by Omar Kooheji
    I'm in the pre-planning phase of coming up with a game design and I was wondering if there was a sensible way to stop people from editing saves in a game with offline and online components. The offline component would allow the player to play through the game and the online component would allow them to play against other players, so I would need to make sure that people hadn't edited the source code/save files while offline to gain an advantage while online. Game likely to be developed in either .Net or Java, both of which are unfortunately easy to decompile.

    Read the article

  • get stick analogue XY position using Jinput in lwjgl

    - by oIrC
    I want to capture the movement of the analogue stick of the gamepad. Is there an equivalent in lwjgl.input.Controllers to this function?

    public void mouseMoved(MouseEvent mouseEvent) {
        mouseEvent.getX(); // returns the X coordinate of the cursor inside a component
        mouseEvent.getY(); // returns the Y coordinate of the cursor inside a component
    }

    I found controller.getAxisValue(), but it doesn't work like the function above. Any help? Thanks.
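    For what it's worth, LWJGL 2's Controllers API is polled rather than event driven, so there is no direct analogue of mouseMoved; the usual pattern is to call Controllers.poll() once per frame and read the axis values yourself. A rough sketch (the controller index and dead-zone value are assumptions):

    import org.lwjgl.input.Controller;
    import org.lwjgl.input.Controllers;

    public class GamepadPoller {

        public static void main(String[] args) throws Exception {
            Controllers.create(); // initialise the controller subsystem

            if (Controllers.getControllerCount() == 0) {
                System.out.println("No gamepad found");
                return;
            }
            Controller pad = Controllers.getController(0); // assume the first device is the gamepad

            final float deadZone = 0.1f; // ignore small drift around the centre
            while (true) {
                Controllers.poll(); // refresh the state of all controllers
                float x = pad.getXAxisValue(); // analogue stick X, roughly -1.0 .. 1.0
                float y = pad.getYAxisValue(); // analogue stick Y, roughly -1.0 .. 1.0
                if (Math.abs(x) > deadZone || Math.abs(y) > deadZone) {
                    System.out.printf("stick at %.2f, %.2f%n", x, y);
                }
                Thread.sleep(16); // roughly once per frame
            }
        }
    }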

    Read the article

  • WMI permissions: Select CommandLine, ProcessId FROM Win32_Process returns no data for CommandLine

    - by user57935
    Hi all, I am gathering performance data via WMI and would like to avoid having to use an account in the Administrators group for this purpose. The target machine is running Windows Server 2003 with the latest SP/updates. I've done what I believe to be the appropriate configuration to allow our user access to WMI (similar to what is described here: http://msdn.microsoft.com/en-us/library/aa393266.aspx). Here are the specific steps that were followed:
    1. Open Administrative Tools - Computer Management: Under Computer Management (Local) expand Services and Applications, right click WMI Control and select Properties. In the Security tab, expand Root, highlight CIMV2, click Security (near bottom of window); add Performance Monitor Users and enable the options Enable Account and Remote Enable.
    2. Open Administrative Tools - Component Services: Under Console Root go to Component Services - Computers - right click My Computer and select Properties, select the COM Security tab. In "Access Permissions" click "Edit Default", select (or add then select) the "Performance Monitor Users" group, allow local access and remote access, and click OK. In "Launch and Activation Permissions" click "Edit Default", select (or add then select) the "Performance Monitor Users" group and allow Local and Remote Launch and Activation Permissions.
    3. Open Administrative Tools - Component Services: Under Console Root go to Component Services - Computers - My Computer - DCOM Config - highlight "Windows Management and Instrumentation", right click and select Properties, select the Security tab. Under "Launch and Activation Permissions" select Customize, then click Edit, add the "Performance Monitor Users" group and allow local and remote Remote Launch and Remote Activation privileges.
    I am able to connect remotely via WMI Explorer but when I perform this query: Select CommandLine, ProcessId FROM Win32_Process I get a valid result but every row has an empty CommandLine. If I add the user to the Administrators group and re-run the query, the CommandLine column contains the expected data. It seems there is a permission I am missing somewhere but I am not having much luck tracking it down. Many thanks in advance.

    Read the article

  • How to set TV-out options under Linux of an Geforce 9600 GT video card

    - by polemon
    I'm using the TV-out connector of my Geforce 9600 GT to connect it to an old TV set. It's obviously in Composite mode, the other two cables of Component video are dead, only Pb/VIDEO labeled one gives me a signal. The picture appears black/white on the TV, I presume it's because the video card gives me an NTSC signal, but it's a PAL tv set. How do I change the TV-out from NTSC to PAL? My Component to SCART adapter hasn't arrived yet, but I think I should be able to set manually, whether the signal should be Composite or Component. How do I switch modes of the TV-out, between Component and Composite? I'm running Linux, so it's probably some settings I need to make in xorg.conf. Edit: I got this far: I need to set in the "Device" section of my xorg.conf: Option "TVStandard" "PAL-B" Option "TVOutFormat" "COMPOSITE" The whole section looks like this now: Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce 9600 GT" Option "AddARGBGLXVisuals" "True" Option "TVStandard" "PAL-B" Option "TVOutFormat" "COMPOSITE" EndSection How can I list all available settings for "TVStandard" and "TVOutFormat"?

    Read the article

  • Server 2008R2 Server Manager Roles and Features won't refresh or allow addition of new roles or features

    - by MattChorba
    I have a standalone DC in an isolated lab. I have installed the SUR tool and found no errors. I ran SFC and found no errors. I have attempted to install the Windows Backup feature using PowerShell, but received the same error about the computer needing to be restarted. PowerShell cmdlets will list all of the installed roles and features. The rest of Server Manager works without problems. What can I do to get Server Manager Roles and Features working properly again? Picture of Error: CheckSUR.log:

    =================================
    Checking System Update Readiness.
    Binary Version 6.1.7601.21645
    Package Version 13.0
    2011-11-28 13:20
    Checking Windows Servicing Packages
    Checking Package Manifests and Catalogs
    Checking Package Watchlist
    Checking Component Watchlist
    Checking Packages
    Checking Component Store
    Summary:
    Seconds executed: 413
    No errors detected
    (w) Unable to get system disk properties 0x0000045D IOCTL_STORAGE_QUERY_PROPERTY Disk Cache

    CheckSUR.persist.log:

    =================================
    Checking System Update Readiness.
    Binary Version 6.1.7601.21645
    Package Version 13.0
    2011-11-28 13:20
    Checking Windows Servicing Packages
    Checking Package Manifests and Catalogs
    Checking Package Watchlist
    Checking Component Watchlist
    Checking Packages
    Checking Component Store
    Summary:
    Seconds executed: 413
    No errors detected
    (w) Unable to get system disk properties 0x0000045D IOCTL_STORAGE_QUERY_PROPERTY Disk Cache

    Read the article

  • Troubleshooting DTCPing Errors

    - by JimmyP
    So I am running DTCPing between 2 machines on our network and am getting the following error:

    ++++++++++++++++++++++++++++++++++++++++++++++
    DTCping 1.9 Report for WEB2
    ++++++++++++++++++++++++++++++++++++++++++++++
    RPC server is ready
    ++++++++++++Validating Remote Computer Name++++++++++++
    03-03, 13:39:45.099-->Start DTC connection test
    Name Resolution:
    internal-->10.20.3.236-->internal.something
    03-03, 13:39:45.114-->Start RPC test (WEB2-->internal)
    Problem:fail to invoke remote RPC method
    Error(0x6BA) at dtcping.cpp @303
    -->RPC pinging exception
    -->1722(The RPC server is unavailable.)
    RPC test failed

    I have also run RPC ping, where I get what I believe is the same error:

    C:\Program Files\Windows Resource Kits\Tools>rpcping -s internal
    Exception 1722 (0x000006BA)
    Number of records is: 4
    ProcessID is 5876
    System Time is: 3/3/2011 2:44:12:822
    Generating component is 8
    Status is 1722
    Detection location is 323
    Flags is 0
    NumberOfParameters is 0
    ProcessID is 5876
    System Time is: 3/3/2011 2:44:12:822
    Generating component is 8
    Status is 1237
    Detection location is 313
    Flags is 0
    NumberOfParameters is 0
    ProcessID is 5876
    System Time is: 3/3/2011 2:44:12:822
    Generating component is 8
    Status is 10060
    Detection location is 311
    Flags is 0
    NumberOfParameters is 3
    Long val: 135
    Pointer val: 0
    Pointer val: 0
    ProcessID is 5876
    System Time is: 3/3/2011 2:44:12:822
    Generating component is 8
    Status is 10060
    Detection location is 318
    Flags is 0
    NumberOfParameters is 0

    I'm pretty sure that exception number 1722 is the key, but I can't find any info about it. There may be a firewall with ports that need opening between the machines, which I am checking with our sys admins now. But I can do a regular ping between the machines. Other than that I am reading a lot of articles talking about OS services and components I know nothing about and am having trouble finding any info on. Can anyone shed any light on this? FYI the machine is running Windows Server 2003 R2 SP2.

    Read the article

  • Spring, JPA, Hibernate, Jetty 7 Integration

    - by Jewel
    Have anyone successfully run any spring and JPA application in jetty 7? I am getting following exception. This application throws no error in jetty 6. INFO [main] org.eclipse.jetty.util.log - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.eclipse.jetty.util.log) via org.eclipse.jetty.util.log.Slf4jLog INFO [main] org.eclipse.jetty.util.log - jetty-7.1.2.v20100523 INFO [main] org.eclipse.jetty.util.log - Deployment monitor G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\contexts at interval 5 INFO [main] org.eclipse.jetty.util.log - Deployment monitor G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps at interval 5 INFO [main] org.eclipse.jetty.util.log - Deployable added: G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps\gwtrpc-spring.war INFO [main] org.eclipse.jetty.util.log - Copying WEB-INF/lib jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/WEB-INF/lib/ to C:\Documents and Settings\Jewel2\Local Settings\Temp\Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj\webinf\WEB-INF\lib INFO [main] org.eclipse.jetty.util.log - Copying WEB-INF/classes from jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/WEB-INF/classes/ to C:\Documents and Settings\Jewel2\Local Settings\Temp\Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj\webinf\WEB-INF\classes INFO [main] /gwtrpc-spring - Initializing Spring root WebApplicationContext INFO [main] org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization started INFO [main] org.springframework.web.context.support.XmlWebApplicationContext - Refreshing Root WebApplicationContext: startup date [Thu Jun 10 00:13:32 GMT+06:00 2010]; root of context hierarchy INFO [main] org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from ServletContext resource [/WEB-INF/applicationContext.xml] INFO [main] org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@467991: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,greetingServiceImpl,testService,testService2,taskExecutor,dataSource,entityManagerFactory,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager,org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor#0]; root of factory hierarchy INFO [main] org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'taskExecutor' INFO [main] org.springframework.jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver INFO [main] org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Building JPA container EntityManagerFactory for persistence unit 'gwtrpc-spring-data-source' INFO [main] org.hibernate.cfg.annotations.Version - Hibernate Annotations 3.4.0.GA INFO [main] org.hibernate.cfg.Environment - 
Hibernate 3.3.1.GA INFO [main] org.hibernate.cfg.Environment - hibernate.properties not found INFO [main] org.hibernate.cfg.Environment - Bytecode provider name : javassist INFO [main] org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling INFO [main] org.hibernate.annotations.common.Version - Hibernate Commons Annotations 3.1.0.GA INFO [main] org.hibernate.ejb.Version - Hibernate EntityManager 3.4.0.GA INFO [main] org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: gwtrpc-spring-data-source ...] INFO [main] org.springframework.beans.factory.support.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@467991: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,greetingServiceImpl,testService,testService2,taskExecutor,dataSource,entityManagerFactory,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager,org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor#0]; root of factory hierarchy INFO [main] org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Shutting down ExecutorService 'taskExecutor' ERROR [main] org.springframework.web.context.ContextLoader - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1403) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:545) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:871) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at 
org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:636) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:188) at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:995) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36) at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182) at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497) at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135) at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:61) at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:436) at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:349) at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306) at org.eclipse.jetty.util.Scanner.start(Scanner.java:242) at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:136) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:562) at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:212) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.Server.doStart(Server.java:209) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1018) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:983) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.jetty.start.Main.invokeMain(Main.java:447) at org.eclipse.jetty.start.Main.start(Main.java:605) at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:238) at org.eclipse.jetty.start.Main.main(Main.java:77) Caused by: javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1093) at org.hibernate.ejb.Ejb3Configuration.addClassesToSessionFactory(Ejb3Configuration.java:871) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:758) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:425) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:131) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:225) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:308) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1460) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1400) ... 45 more Caused by: java.lang.ClassNotFoundException: WEB-INF.classes.org.gwtrpcspring.example.server.Person at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) at org.hibernate.util.ReflectHelper.classForName(ReflectHelper.java:135) at org.hibernate.ejb.Ejb3Configuration.classForName(Ejb3Configuration.java:1009) at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1081) ... 53 more WARN [main] org.eclipse.jetty.util.log - Failed startup of context WebAppContext@149f041@149f041/gwtrpc-spring,file:/C:/Documents and Settings/Jewel2/Local Settings/Temp/Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj/webinf/;jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/;,G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps\gwtrpc-spring.war org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1403) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:545) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:871) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:636) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:188) at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:995) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36) at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182) at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497) at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135) at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:61) at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:436) at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:349) at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306) at org.eclipse.jetty.util.Scanner.start(Scanner.java:242) at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:136) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:562) at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:212) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.Server.doStart(Server.java:209) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1018) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:983) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.jetty.start.Main.invokeMain(Main.java:447) at org.eclipse.jetty.start.Main.start(Main.java:605) at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:238) at org.eclipse.jetty.start.Main.main(Main.java:77) Caused by: javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1093) at org.hibernate.ejb.Ejb3Configuration.addClassesToSessionFactory(Ejb3Configuration.java:871) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:758) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:425) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:131) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:225) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:308) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1460) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1400) ... 
45 more Caused by: java.lang.ClassNotFoundException: WEB-INF.classes.org.gwtrpcspring.example.server.Person at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) at org.hibernate.util.ReflectHelper.classForName(ReflectHelper.java:135) at org.hibernate.ejb.Ejb3Configuration.classForName(Ejb3Configuration.java:1009) at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1081) ... 53 more INFO [main] org.eclipse.jetty.util.log - Started SelectChannelConnector@0.0.0.0:8080

And this is my applicationContext.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<beans>
  <context:annotation-config />
  <context:component-scan base-package="org.gwtrpcspring.example.server" />

  <bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="5" />
    <property name="maxPoolSize" value="10" />
    <property name="queueCapacity" value="25" />
  </bean>

  <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver" />
    <property name="url" value="jdbc:mysql://localhost:3306/test" />
    <property name="username" value="jewel" />
    <property name="password" value="jewel" />
  </bean>

  <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="jpaVendorAdapter">
      <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
        <property name="databasePlatform" value="org.hibernate.dialect.MySQLDialect" />
        <!-- <property name="databasePlatform" value="org.hibernate.dialect.HSQLDialect" /> -->
        <property name="showSql" value="false" />
        <property name="generateDdl" value="true" />
      </bean>
    </property>
  </bean>

  <tx:annotation-driven />

  <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
  </bean>

  <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />
</beans>
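For reference (this is an editorial aside, not part of the original question): the root failure in the trace is ClassNotFoundException: WEB-INF.classes.org.gwtrpcspring.example.server.Person, which suggests the persistence unit is resolving its entity classes with a WEB-INF.classes prefix. A minimal persistence.xml for a setup like this would normally list the class by its plain package name; the sketch below is an assumption built only from the trace (the unit name comes from the error, and Person is the hypothetical entity named there), not the asker's actual file:

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="gwtrpc-spring-data-source" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <!-- plain class name, with no WEB-INF.classes prefix -->
    <class>org.gwtrpcspring.example.server.Person</class>
    <exclude-unlisted-classes>true</exclude-unlisted-classes>
  </persistence-unit>
</persistence>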

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.

Step 1: Open Visual Studio and create a new Integration Services Project.

Step 2: Add a new Data Flow Task to the Control Flow window.

Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component Type: pick the Source type, since we will be outputting columns from our stored procedure.

Step 4: Double-click the Script Component to open the editor.

Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property in the property editor. In our example, we'll add the column JobID of type String.

Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.

Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce Component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures, along with their inputs and outputs, in the product help.

//Configure the connection string with your credentials
String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";
using (SalesforceConnection conn = new SalesforceConnection(connectionString))
{
  //Create the command to call the stored procedure CreateJob
  SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
  cmd.CommandType = CommandType.StoredProcedure;
  cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
  cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

  //Execute CreateJob
  //CreateBatch requires JobID as input, so we store this value for later
  SalesforceDataReader rdr = cmd.ExecuteReader();
  String JobID = "";
  while (rdr.Read())
  {
    JobID = (String)rdr["JobID"];
  }

  //Create the command for CreateBatch; for this example we are adding two new rows
  SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
  batCmd.CommandType = CommandType.StoredProcedure;
  batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
  batCmd.Parameters.Add(new SalesforceParameter("Aggregate", "<Contact><Row><FirstName>Bill</FirstName>" +
    "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));

  //Execute CreateBatch
  SalesforceDataReader batRdr = batCmd.ExecuteReader();
}

Step 7b: If you had specified output columns earlier, you can now add data to them using the UserComponent Output0Buffer. For example, we had set an output column called JobID of type String, so now we can set a value for it. We will modify the DataReader loop that contains the output of CreateJob like so:
while (rdr.Read())
{
  Output0Buffer.AddRow();
  JobID = (String)rdr["JobID"];
  Output0Buffer.JobID = JobID;
}

Step 8: Note that you will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and add the following using statements at the top of the class:

using System.Data;
using System.Data.RSSBus.Salesforce;

Step 9: Once you are done editing your script, save it and close the window. Click OK in the Script Transformation window to go back to the main pane.

Step 10: If you defined any outputs on the Script Component, you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file.

Step 11: Your project should be ready to run.
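As a consolidated sketch (not taken verbatim from the article), Steps 7 through 8 can be combined into the body of the generated script like the one below. The ScriptMain class and UserComponent base are the names the SSIS script editor generates as boilerplate, the credentials are placeholders, and the SalesforceConnection/SalesforceCommand API is the one shown above; treat it as an illustration rather than the article's exact code.

using System;
using System.Data;
using System.Data.RSSBus.Salesforce;

public class ScriptMain : UserComponent
{
    public override void CreateOutputRows()
    {
        // Placeholder credentials: replace before running.
        String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

        using (SalesforceConnection conn = new SalesforceConnection(connectionString))
        {
            // Call CreateJob and capture the JobID it returns.
            SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
            cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

            String jobId = "";
            using (SalesforceDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    jobId = (String)rdr["JobID"];
                    // Push the JobID into the output column defined in Step 5.
                    Output0Buffer.AddRow();
                    Output0Buffer.JobID = jobId;
                }
            }

            // Feed the JobID into CreateBatch to insert two Contact rows.
            SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
            batCmd.CommandType = CommandType.StoredProcedure;
            batCmd.Parameters.Add(new SalesforceParameter("JobID", jobId));
            batCmd.Parameters.Add(new SalesforceParameter("Aggregate",
                "<Contact><Row><FirstName>Bill</FirstName><LastName>White</LastName></Row>" +
                "<Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));
            batCmd.ExecuteReader();
        }
    }
}

Following the article's own pattern, the connection is not opened explicitly before ExecuteReader; the only additions here are the using blocks around the readers and the class skeleton that the script editor would normally generate for you.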

    Read the article

< Previous Page | 41 42 43 44 45 46 47 48 49 50 51 52  | Next Page >