Search Results

Search found 22891 results on 916 pages for 'service layer'.

Page 192/916

  • How to access the Principal from a Java service object without using FlexContext?

    - by Marplesoft
    We're building some Java objects that are exposed via BlazeDS to our Flex client application. So basically the BlazeDS MessageBroker servlet instantiates and invokes methods on these objects in response to client requests. Works great. We're using app server-based authentication and have set up a security constraint on the <destination> elements in the remoting-config.xml file to prevent unauthenticated clients from accessing these remote Java objects. Again, works fine. However, there are several places within the implementation of these Java objects where we want to get the currently logged-on user's username. Right now we are doing this via FlexContext.getUserPrincipal(), which gives access to it, but we have a nagging concern: we don't like the idea that the implementation of these objects (the service layer) has a hard dependency on a BlazeDS class. We're not sure how else to get access to this, though. The same applies to accessing the ServletContext and such. Any ideas?
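
    One way to keep that dependency out of the service layer - a minimal sketch, with CurrentUserSource being a hypothetical name rather than a BlazeDS facility - is to hide the lookup behind your own interface and confine FlexContext to an adapter at the remoting boundary:

        // CurrentUserSource.java -- the only abstraction the service layer sees (hypothetical name)
        public interface CurrentUserSource {
            String currentUsername();
        }

        // FlexCurrentUserSource.java -- BlazeDS-specific adapter, wired in at the remoting boundary
        import java.security.Principal;
        import flex.messaging.FlexContext;

        public class FlexCurrentUserSource implements CurrentUserSource {
            @Override
            public String currentUsername() {
                Principal p = FlexContext.getUserPrincipal(); // the BlazeDS dependency lives only here
                return p != null ? p.getName() : null;
            }
        }

    The service objects then depend only on CurrentUserSource (injected or passed in), so FlexContext is referenced in exactly one place; the same trick covers the ServletContext.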

    Read the article

  • How can I easily print multiple layers on multiple pages in Visio

    - by Mark Robinson
    We've created a flow chart using Visio that has multiple layers. (The background is that each layer represents a variation on a basic process.) Now we want to be able to print each layer individually. Currently this involves lots of clicking to select the correct layer and then pressing print, then repeating this for each of the 10 layers. Is there a simpler way? E.g. define each layer once and use a "print each layer" tool / macro?

    Read the article

  • Where do DQL statements live in an application that is using Zend Framework and Doctrine

    - by Dewayne
    In an application that is using Zend Framework 1.10 and Doctrine 1.2, where should the DQL statements live if our application is built such that it has a Service Layer and a Gateway (aka Doctrine_Table) layer? It seems that our possibilities include: 1) Placing the DQL statements in the Service layer, which seems to be a bit too high in our application hierarchy to store DQL. 2) Placing the DQL statements within each model's Table/Gateway, which seems a bit redundant because we also need to expose the DQL statements that do things such as getAllUsers() through the Service layer. Which of these is a preferable design? We intend to make use of the Service layer as much as possible so that other projects might consume various parts of our application.

    Read the article

  • Pros and Cons on where to place business logic: app level or DB

    - by Juri
    Hi, I keep encountering discussions about where to place the business logic: inside a business layer in the application code, or down in the DB as stored procedures. Personally I tend toward the first approach, but I'd like to hear some opinions from you first, without influencing you with my personal views. I know there is no one-size-fits-all solution and it often depends on many factors, but we can discuss that. By the way, we are in the context of web applications, and our current approach is to have: a UI layer which accepts UI input and does a first, client-side validation; a business layer with a number of service classes which contain the business logic, including server-side validation of user input; and a Data Access Layer which calls stored procedures in the DB for persistence/read operations. Many people, however, tend to move the business-layer responsibilities (especially the validation) down to the DB as stored procedures. What do you think about it? I'd like to discuss.

    Read the article

  • How to transform (rotate) an already existing CALayer/animation?

    - by Flocked
    Hello, I have added a CALayer to the UIView of my app:

        CATransition *animation = [CATransition animation];
        [animation setDelegate:self];
        [animation setDuration:0.35];
        [animation setTimingFunction:UIViewAnimationCurveEaseInOut];
        animation.type = @"pageCurl";
        animation.fillMode = kCAFillModeForwards;
        animation.endProgress = 0.58;
        [animation setRemovedOnCompletion:NO];
        [[self.view layer] addAnimation:animation forKey:@"pageCurlAnimation"];

    Now when the user rotates the device, the layer always stays in the same position on the screen (in line with the home-screen button). Is it possible to rotate the animation/layer? I tried

        self.view.layer.transform = CGAffineTransformMakeRotation(angle);

    but this code just rotates the UIView and not the animation/layer I set.

    Read the article

  • I have a question about URI templates in WCF Services

    - by Debby
    I have a web service with the following operation contract, and my service is hosted at http://localhost:9002/Service.svc/:

        [OperationContract]
        [WebGet(UriTemplate = "/Files/{Filepath}")]
        Stream DownloadFile(string Filepath);

    This web service would let users download a file, if the proper filepath is provided (assuming I somehow find out that proper filepath). Now, I can access this service from a browser by typing http://localhost:9002/Service.svc/Files/{Filepath}. If {Filepath} is some simple string, it's not a problem, but I want to send the location of the file. Let us say users want to download the file C:\Test.mp3 on the server. How can I pass C:\Test.mp3 as {Filepath}? I get an error when I type http://localhost:9002/Service.svc/Files/C:\Test.mp3 in the browser. I am new to web services and find that this community is the quickest way to get answers to my questions.

    Read the article

  • Loose Coupling of Components

    - by David
    I have created a class library (assembly) that provides messaging, email and SMS. This class library defines an interface IMessenger which the classes EmailMessage and SmsMessage both implement. I see this as a general library that would be part of my infrastructure layer and could be used across any development. Now, in my application layer I have a class that needs to use a messaging component, and I obviously want to use the messaging library that I have created. Additionally, I will be using an IoC container (Spring.NET) to allow me to inject my implementation, i.e. either email or SMS. Therefore, I want to program against an interface in my application layer class. Do I then need to reference my messaging class library from my application layer class? Is this tightly coupling my application layer class to my messaging class library? Should I be defining the IMessenger interface in a separate library? Or should I be doing something else?
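
    The usual shape for this is a separated interface: keep IMessenger (and nothing else) in a small contracts library that both the application layer and the messaging library reference, and let the IoC container inject the concrete implementation. A minimal sketch, written in Java purely for illustration (the original is .NET with Spring.NET, and the module names here are hypothetical):

        // messaging-contracts module: only the abstraction lives here
        public interface IMessenger {
            void send(String recipient, String body);
        }

        // messaging-infrastructure module: concrete implementations reference the contract
        public class EmailMessage implements IMessenger {
            @Override
            public void send(String recipient, String body) {
                // send via SMTP (omitted)
            }
        }

        // application layer: depends only on the contract; the container injects the implementation
        public class OrderNotifier {
            private final IMessenger messenger;

            public OrderNotifier(IMessenger messenger) { // constructor injection
                this.messenger = messenger;
            }

            public void notifyShipped(String customerEmail) {
                messenger.send(customerEmail, "Your order has shipped.");
            }
        }

    With that split the application layer is coupled only to the contracts library; the email/SMS implementations can change or move without touching the application layer.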

    Read the article

  • WCF Rest services for use with the repository pattern?

    - by mark smith
    Hi there, I am considering moving my service layer and my data layer (repository pattern) to a WCF REST service. So basically I would have my software installed locally (a WPF client) which would call the service layer via a REST service... The service layer would then call my data layer using a WCF REST service as well, OR maybe just call it via the DLL assembly. I was hoping to understand what the performance would be like. Currently I have my data layer and service layer installed locally on the PC as DLL assemblies. Also, I presume WCF REST services won't support method overloading, hence the same name but with a different signature? I would really appreciate any feedback anyone can give. Thanks

    Read the article

  • 3-Tier architecture-layering and the term-mishmash

    - by Rookian
    Hi! I am confused about the different ways a 3-tier architecture is expressed. One version is Data Access Layer / Business Layer / Presentation Layer (User Interface); another is Database (aka backend) / Business Layer / Presentation Layer (User Interface). Why can you skip the database in the first approach? Both use a database! Does the database belong to the layering or not?! What is wrong and what is right? Can someone clarify this :)? Thanks in advance!

    Read the article

  • Efficient way to render tile-based map in Java

    - by Lucius
    Some time ago I posted here because I was having some memory issues with a game I'm working on. That has been pretty much solved thanks to some suggestions here, so I decided to come back with another problem I'm having. Basically, I feel that too much of the CPU is being used when rendering the map. I have a Core i5-2500 processor and when running the game, the CPU usage is about 35% - and I can't accept that that's just how it has to be. This is how I'm going about rendering the map: I have the X and Y coordinates of the player, so I'm not drawing the whole map, just the visible portion of it; the number of visible tiles on screen varies according to the resolution chosen by the player (the CPU usage is 35% here when playing at a resolution of 1440x900); if the tile is "empty", I just skip drawing it (this didn't visibly lower the CPU usage, but reduced the drawing time by about 20ms); the map is composed of 5 layers; the tiles are 32x32 pixels. And just to be on the safe side, I'll post the code for drawing the game here, although it's as messy and unreadable as it can be T_T (I'll try to make it a little readable):

        private void drawGame(Graphics2D g2d){
            //Width and Height of the visible portion of the map (not of the screen)
            int visionWidht = visibleCols * TILE_SIZE;
            int visionHeight = visibleRows * TILE_SIZE;
            //Since the map can be smaller than the screen, I center it just to be sure
            int xAdjust = (getWidth() - visionWidht) / 2;
            int yAdjust = (getHeight() - visionHeight) / 2;
            //This "deducedX" thing is to move the map a few pixels horizontally, since the player moves by pixels and not full tiles
            int playerDrawX = listOfCharacters.get(0).getX();
            int deducedX = 0;
            if (listOfCharacters.get(0).currentCol() - visibleCols / 2 >= 0) {
                playerDrawX = visibleCols / 2 * TILE_SIZE;
                map_draw_col = listOfCharacters.get(0).currentCol() - visibleCols / 2;
                deducedX = listOfCharacters.get(0).getXCol();
            }
            //"deducedY" is the same deal as "deducedX", but vertically
            int playerDrawY = listOfCharacters.get(0).getY();
            int deducedY = 0;
            if (listOfCharacters.get(0).currentRow() - visibleRows / 2 >= 0) {
                playerDrawY = visibleRows / 2 * TILE_SIZE;
                map_draw_row = listOfCharacters.get(0).currentRow() - visibleRows / 2;
                deducedY = listOfCharacters.get(0).getYRow();
            }
            int max_cols = visibleCols + map_draw_col;
            if (max_cols >= map.getCols()) {
                max_cols = map.getCols() - 1;
                deducedX = 0;
                map_draw_col = max_cols - visibleCols + 1;
                playerDrawX = listOfCharacters.get(0).getX() - map_draw_col * TILE_SIZE;
            }
            int max_rows = visibleRows + map_draw_row;
            if (max_rows >= map.getRows()) {
                max_rows = map.getRows() - 1;
                deducedY = 0;
                map_draw_row = max_rows - visibleRows + 1;
                playerDrawY = listOfCharacters.get(0).getY() - map_draw_row * TILE_SIZE;
            }
            //map_draw_row and map_draw_col represent the coordinates of the upper-left tile on the screen
            //iterate through all the tiles on screen and draw them - this is what consumes most of the CPU
            for (int col = map_draw_col; col <= max_cols; col++) {
                for (int row = map_draw_row; row <= max_rows; row++) {
                    Tile[] tiles = map.getTiles(col, row);
                    for (int layer = 0; layer < tiles.length; layer++) {
                        Tile currentTile = tiles[layer];
                        boolean shouldDraw = true;
                        //I only draw the tile if it exists and is not empty (id=-1)
                        if (currentTile != null && currentTile.getId() >= 0) {
                            //The layers above 1 can be drawn behind or in front of the player according to where it's standing
                            if (layer > 1 && currentTile.getId() >= 0) {
                                if (playerBehind(col, row, layer, listOfCharacters.get(0))) {
                                    //the tiles that are in front of the player won't be drawn right now
                                    behinds.get(0).add(new int[]{col, row});
                                    shouldDraw = false;
                                }
                            }
                            if (shouldDraw) {
                                g2d.drawImage(
                                    tiles[layer].getImage(),
                                    (col - map_draw_col) * TILE_SIZE - deducedX + xAdjust,
                                    (row - map_draw_row) * TILE_SIZE - deducedY + yAdjust,
                                    null);
                            }
                        }
                    }
                }
            }
        }

    There's some more code in this method but nothing relevant to this question. Basically, the biggest problem is that I iterate over around 5000 tiles (in this specific resolution) 60 times each second. I thought about rendering the visible portion of the map once, storing it into a BufferedImage and, when the player moved, moving the whole image the same amount but in the opposite direction and then drawing the tiles that appeared on the screen - but if I do it like that, I won't be able to have animated tiles (at least I think). That being said, any suggestions?
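
    Not an answer so much as a sketch of the BufferedImage idea mentioned above, under the assumption that tiles know whether they are animated (the isAnimated() flag and the cache fields below are hypothetical): pre-render only the static layers of the visible region into a cached image, blit that once per frame, and keep drawing the animated tiles individually on top.

        // Rough sketch only (uses java.awt.Graphics2D and java.awt.image.BufferedImage).
        // Assumes the map/Tile API from the question; isAnimated() and these fields are hypothetical.
        private BufferedImage staticCache;                 // pre-rendered static layers
        private int cacheCol = -1, cacheRow = -1;          // top-left tile the cache was built for

        private void drawStaticLayers(Graphics2D g2d, int firstCol, int firstRow, int cols, int rows) {
            if (staticCache == null || cacheCol != firstCol || cacheRow != firstRow) {
                staticCache = new BufferedImage(cols * TILE_SIZE, rows * TILE_SIZE, BufferedImage.TYPE_INT_ARGB);
                Graphics2D cg = staticCache.createGraphics();
                for (int col = firstCol; col < firstCol + cols; col++) {
                    for (int row = firstRow; row < firstRow + rows; row++) {
                        for (Tile t : map.getTiles(col, row)) {
                            if (t != null && t.getId() >= 0 && !t.isAnimated()) {   // hypothetical flag
                                cg.drawImage(t.getImage(), (col - firstCol) * TILE_SIZE, (row - firstRow) * TILE_SIZE, null);
                            }
                        }
                    }
                }
                cg.dispose();
                cacheCol = firstCol;
                cacheRow = firstRow;
            }
            // One image blit per frame instead of thousands of individual tile draws.
            g2d.drawImage(staticCache, 0, 0, null);
        }

    The cache would only be rebuilt when the visible region moves by a whole tile; the sub-tile scrolling offsets and the in-front-of/behind-the-player handling would stay in the existing loop.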

    Read the article

  • JAXWS and sessions

    - by Pace
    I'm fairly new to writing web services. I'm working on a SOAP service using JAX-WS. I'd like to be able to have users log in and, in my service, know which user is issuing a command. In other words, have some session handling. One way I've seen to do this is to use cookies and access the HTTP layer from my web service. However, this puts a dependency on using HTTP as the transport layer (I'm aware HTTP is almost always the transport layer, but I'm a purist). Is there a better approach which keeps the service layer unaware of the transport layer? Is there some way I can accomplish this with servlet filters? I'd like the answer to be as framework-agnostic as possible.
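
    One transport-neutral pattern, sketched here with the servlet filters the question mentions (the CurrentUser holder and the way the username is resolved are assumptions, not JAX-WS API): let a filter capture the user at the HTTP boundary and publish it through a tiny context class that the service layer owns, so service code never sees HTTP or JAX-WS types.

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;

        // Transport-agnostic holder the service layer reads from (hypothetical class).
        final class CurrentUser {
            private static final ThreadLocal<String> NAME = new ThreadLocal<>();
            static void set(String name) { NAME.set(name); }
            static void clear() { NAME.remove(); }
            public static String name() { return NAME.get(); }
        }

        // HTTP-specific filter: the only place that knows about servlets/sessions.
        public class CurrentUserFilter implements Filter {
            @Override public void init(FilterConfig cfg) { }
            @Override public void destroy() { }

            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest http = (HttpServletRequest) req;
                // however the user was authenticated: container principal, session attribute, etc.
                String user = http.getUserPrincipal() != null ? http.getUserPrincipal().getName() : null;
                CurrentUser.set(user);
                try {
                    chain.doFilter(req, res);   // the JAX-WS endpoint and service layer run inside here
                } finally {
                    CurrentUser.clear();        // don't leak the user to the next request on this thread
                }
            }
        }

    The service layer then calls CurrentUser.name() without knowing the transport; a non-HTTP binding would simply populate the same holder in its own way.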

    Read the article

  • Error handling in ASP.NET

    - by user98454
    Hi, how can I pass the different types of errors from the data access layer to the presentation layer? Suppose we take the Northwind database scenario: I want to delete a customer, so I select one customer in the UI and click the "Delete" button. Internally this calls the "delete" in the data access layer. The prerequisite for deleting the customer is that the customer doesn't have any orders, so in the data access layer we will check whether that customer has any orders. If the customer has orders, how can we pass the message from the DAL to the presentation layer that the customer has orders and we won't delete? Am I doing this right? Are there other ways to deal with this kind of situation? Thanks in advance
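
    A common way to surface this kind of rule violation is to have the data access layer throw a specific, typed error and let the presentation layer translate it into a user message. The sketch below uses Java for illustration (the original scenario is ASP.NET, and the class names are made up); the shape is the same in C#.

        // Domain-specific exception the DAL throws when the business rule is violated (illustrative).
        public class CustomerHasOrdersException extends RuntimeException {
            public CustomerHasOrdersException(String customerId) {
                super("Customer " + customerId + " still has orders and cannot be deleted.");
            }
        }

        // Data access layer: checks the precondition and reports it as a typed error. (Separate file in practice.)
        class CustomerDao {
            void delete(String customerId) {
                if (hasOrders(customerId)) {
                    throw new CustomerHasOrdersException(customerId);
                }
                // ... issue the DELETE against the database ...
            }

            private boolean hasOrders(String customerId) {
                // query the Orders table for this customer (omitted)
                return false;
            }
        }

        // Presentation layer: catches the typed error and shows a friendly message. (Separate file in practice.)
        class CustomerController {
            private final CustomerDao dao = new CustomerDao();

            String onDeleteClicked(String customerId) {
                try {
                    dao.delete(customerId);
                    return "Customer deleted.";
                } catch (CustomerHasOrdersException e) {
                    return e.getMessage();  // or a localized UI message
                }
            }
        }

    The alternative is returning a result/status object instead of throwing; either way, the point is that the DAL reports what went wrong and the UI decides how to present it.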

    Read the article

  • How can MVC (ASP.NET MVC) and 3-tier architecture work together?

    - by Martin
    I am writing a design document and people on my team are willing to make the move from ASP.NET WebForms to ASP.NET MVC. This is great, but I have a hard time understanding how MVC works within a 3-tier (Data Layer, Business Layer and Presentation Layer) architecture. Can we say that the Model, View and Controller are part of the Presentation Layer? Is the Model part of the Business Layer? In brief, how can MVC and 3-tier architecture work together? Thanks for the help!

    Read the article

  • On which standards can I select an ORM?

    - by just_name
    Q: This question is about how I can choose the most suitable ORM for my web application. When beginning a new web application, what are the criteria on which I can consider a specific ORM better than another one for my project or case (a web application)? Another part of my question: when I begin any web application I use three layers: the DB layer (which contains the connections and handles the CRUD operations); the Managers layer (the Data Access Layer), with a class for each table in my DB (loosely coupled with the previous layer) that contains the CRUD operations for the specific table and the other required operations; and the interface layer. I also use an Object Data Source. Is that considered an ORM (as a concept), or am I wrong in understanding this concept? Note: I am still a beginner in this field, and every day I learn more about web development. Please, I would like explanations and suggestions on this point. Thanks in advance.

    Read the article

  • Losing scope for DataContext using LINQ to SQL intermittently

    - by greektreat
    I am having a weird situation with my DataContext. All my code is in C#. I have a DLL project for my data access layer and business layer, and a WinForms project for my UI layer. My data access layer's namespace is xxx.Data; this is where I have my xxx.dbml. I also have the xxx.Data.BusinessObjects namespace, of course, for my business objects in that project. In my UI layer I have these namespaces: xxxApp (for forms) and xxxApp.Controls (for controls). I have lost scope of the DataContext; it was accessible, but now when I do a Rebuild Solution I sometimes get compile errors saying, for example:

        Error 34 'xxx.Data.xxxDataContext' does not contain a definition for 'SubmitChanges' and no extension
        method 'SubmitChanges' accepting a first argument of type 'xxx.Data.xxxDataContext' could be found
        (are you missing a using directive or an assembly reference?)

    Also, IntelliSense doesn't recognize the methods and table classes from my xxxDataContext anymore. I can access all objects fine when I am in the DLL project but not in the WinForms project, which is very strange. If anyone can help me out I would be extremely grateful!

    Read the article

  • CQRS, Commands, Javascript

    - by Tolu
    I have an architecture where a task-based UI passes commands to a service layer. Now, my intention is to implement the UI in JavaScript (using Kendo UI) and the service layer, domain layer, etc. in .NET. I'm also looking at future mobile implementations of the client that may use, say, Java rather than JavaScript. If I define the commands in .NET, I'd like to know how to use them from my JavaScript client so the client can communicate the commands appropriately to the service layer. Do I have to use something like Apache Thrift for this, i.e. to define the commands at both the client and the service layer?

    Read the article

  • DDD Infrastructure services

    - by Zygimantas
    Hello, I am learning DDD and I am a little bit lost in the Infrastructure layer. As I understand it, "all good DDD applications" should have 4 layers: Presentation, Application, Domain and Infrastructure. The database should be accessed using repositories. Repository interfaces should be in the Domain layer and repository implementations in Infrastructure (reference http://stackoverflow.com/questions/693221/ddd-where-to-keep-domain-interfaces-the-infrastructure). The Application, Domain and Infrastructure layers should/may have services (reference www.lostechies.com/blogs/jimmy_bogard/archive/2008/08/21/services-in-domain-driven-design.aspx), for example an EmailService in the Infrastructure layer which sends email messages. BUT inside the Infrastructure layer we have repository implementations, which are used to access the database. So, in this case, are repositories database services? What is the difference between an Infrastructure service and a repository? Thanks in advance!
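
    The difference is easier to see in code. A minimal sketch (the class and package split is illustrative, not prescribed by any framework): the repository interface is part of the domain's vocabulary, while its implementation and a purely technical service such as an email sender live in the infrastructure layer.

        // domain layer: a stub entity and the repository interface that speaks the domain's language
        class Customer { String id; }

        public interface CustomerRepository {
            Customer findById(String id);
            void save(Customer customer);
        }

        // infrastructure layer: one possible implementation detail (JDBC, ORM, files, ...)
        class JdbcCustomerRepository implements CustomerRepository {
            @Override
            public Customer findById(String id) {
                // SQL / ORM plumbing goes here (omitted)
                return null;
            }

            @Override
            public void save(Customer customer) {
                // SQL / ORM plumbing goes here (omitted)
            }
        }

        // infrastructure service: a technical capability with no domain meaning of its own
        class SmtpEmailService {
            void send(String to, String subject, String body) {
                // SMTP plumbing goes here (omitted)
            }
        }

    So a repository is infrastructure code sitting behind a domain-owned interface for persisting and retrieving aggregates, while an infrastructure service like EmailService provides a technical capability that is not part of the domain model at all.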

    Read the article

  • What to do with lookup entities selected from a drop-down select? How to send them to the service layer?

    - by arrages
    I am developing a Spring MVC based application. I have a simple POJO form object; the problem is that many properties will be taken from drop-down lists that are populated from lookup entities, so I return the entity ID to the form object:

        public class NewCarRequestForm {
            private Long makeId;   // selected from a drop-down (ID type assumed)
            private Long modelId;
        }

    Should I just send this lookup entity ID to the service layer, or should I validate that this ID is correct (somebody can send any random ID through the request) beforehand - and how? There is also a problem if I want to validate something based on some property of the lookup entity. Do I perform a database lookup of the entity just to perform the validation? Thanks.
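
    One hedged option for the "somebody can send any random ID" concern, staying within Spring MVC: resolve the ID back to its entity before accepting the form, typically in a Validator or in the service layer. The repository and getter names below are assumptions:

        import org.springframework.validation.Errors;
        import org.springframework.validation.Validator;

        // Hypothetical lookup repository; returns null when the ID does not exist.
        interface MakeRepository {
            Object findById(Long id);
        }

        // Hypothetical validator: confirms the submitted lookup ID refers to a real row.
        // Assumes NewCarRequestForm exposes a getMakeId() getter.
        public class NewCarRequestFormValidator implements Validator {

            private final MakeRepository makeRepository;

            public NewCarRequestFormValidator(MakeRepository makeRepository) {
                this.makeRepository = makeRepository;
            }

            @Override
            public boolean supports(Class<?> clazz) {
                return NewCarRequestForm.class.isAssignableFrom(clazz);
            }

            @Override
            public void validate(Object target, Errors errors) {
                NewCarRequestForm form = (NewCarRequestForm) target;
                // A database lookup here is the usual price of trusting nothing from the request.
                if (form.getMakeId() == null || makeRepository.findById(form.getMakeId()) == null) {
                    errors.rejectValue("makeId", "make.invalid", "Unknown make selected");
                }
            }
        }

    If a later rule depends on a property of the lookup entity, the same load supplies it, so the extra query usually does double duty.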

    Read the article

  • CALayer not obeying object ownership rules?

    - by eaigner
    I have a custom CALayer - UIProgressLayer - for simulating a progress bar. There is a dispatched GC timer involved to animate its intermediate state. So that's the layer I'm talking about here, although I don't think the problem is restricted to this particular subclass, because a CALayer with only -release and -dealloc overridden produces the same outcome. The problem is that when I send this layer a message, for instance theLayer.opacity = 0.5f, a -release message is sent to the layer, thus deallocating my layer. Why is this happening? Does it have something to do with how the whole CA system works? But it still has to obey the object ownership rules, right? I was thinking maybe it creates a copy of that layer for the fading, but that's not the case.

    Read the article

  • R software: How to extract values from a RasterStack with x,y coordinates?

    - by Eddie
    I have a RasterStack (5 raster layers) that is actually a time-series raster.

        r <- raster(nrow=20, ncol=200)
        s <- stack( sapply(1:5, function(i) setValues(r, rnorm(ncell(r), i, 3) )) )
        > s
        class       : RasterStack
        dimensions  : 20, 200, 4000, 5  (nrow, ncol, ncell, nlayers)
        resolution  : 1.8, 9  (x, y)
        extent      : -180, 180, -90, 90  (xmin, xmax, ymin, ymax)
        coord. ref. : +proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0
        names       : layer.1, layer.2, layer.3, layer.4, layer.5
        min values  : -9.012146, -9.165947, -9.707269, -7.829763, -5.332007
        max values  : 11.32811, 11.97328, 15.99459, 15.66769, 16.72236

    My objective is to plot each pixel and explore its behavior over time. How could I extract each pixel together with its x,y coordinates and plot a time-series curve?

    Read the article

  • IE8 is still caching my requests even with Math.random().

    - by Ozaki
    TL;DR: IE is still caching my requests even with Math.random() included in the URL. So I added Math.random() onto the end of my URL:

        var MYKMLURL = 'http://' + host + 'data/pattern?key=' + Math.random();

    I also added Math.random() onto my function param:

        window.setTimeout(RefreshPatternData, 1000, MYKMLLAYER);

        function RefreshPatternData(layer) {
            layer.loaded = false;
            layer.setVisibility(true);
            layer.refresh({ force: true, params: { 'key': Math.random() } });
            setTimeout(RefreshPatternData, 30000, MYKMLLAYER);
        }

    So the request appears as http://host/data/pattern?key=35678652545 etc. It changes every time the request is made. It works in Firefox, Chrome, Safari, etc., but IE8 is still caching the data and not updating my layer. Any ideas as to why this might be occurring?

    Read the article

  • Remote Desktop to Your Azure Virtual Machine

    - by Shaun
    The Windows Azure team has just published their new development portal this week along with SDK 1.3. This new release includes a lot of cool features. The one I’m looking forward to is Remote Desktop access to your running Windows Azure virtual machine.

    Configuring Remote Desktop Access It is very simple to enable remote desktop access for an Azure service. First of all, let’s create a new Windows Azure project in Visual Studio. In this example I just created a normal MVC 2 web role without any modifications. Then we right-click the Azure project node in the Solution Explorer window and select “Publish”. Next, select the “Deploy your Windows Azure project to Windows Azure” radio button at the top, and then select the credential, deployment service/slot, storage and label as usual. You must have the Management API certificate uploaded to your Windows Azure account, and the certificate installed on your machine, in order to use this one-click deployment feature. If you are familiar with this dialog you will notice that there’s a link named “Configure Remote Desktop connections”. This is where you enable the remote desktop feature for the service. After clicking this link we set the remote desktop access authorization information. There are 4 settings we need to configure for our access. Certificates: we need to either create or select a certificate file in order to encrypt the access credentials. In this example I will use the certificate file for my Management API. Username: the remote desktop user name used to access the virtual machine. Password: the password for the access. Expiration: the access credentials expire after 1 month by default, but we can amend that here. After that we click the OK button to go back to the publish dialog.

    The next step is to go back to the new Windows Azure portal and navigate to the hosted services list. I created a new hosted service and uploaded the certificate file to it. The user name and password used to access the Azure machine must be encrypted on the local machine, then sent to the Windows Azure platform and decrypted on the Azure side using the same certificate. This is why we need to upload the certificate file to Azure. We navigate to “Hosted Services, Storage Accounts & CDN” in the left panel, create a new hosted service named “SDK13” and select the “Certificates” node. Then we click the “Add Certificates” button and select the local certificate file and its password to install it into this Azure service.

    The final step is to go back to Visual Studio and, in the publish dialog, just click the OK button. Visual Studio will upload our package and configuration, including the remote desktop settings, to our service.

    Remote Desktop Access to the Azure Virtual Machine With all of that done, let’s have a look back at the Windows Azure development portal. If I select the web role that I have just published, we can see a section named “Remote Access” on the toolbar. In this section the Enable checkbox is checked, which means this role has the Remote Desktop Access feature enabled. If we want to modify the access credentials we can simply click the Configure button and update the user name, password, certificate and expiration date.

    Let’s select the instance node under the web role. In this case I created just one instance for the demo.
    We can see that when we select the instance node, the Connect button becomes enabled. After clicking this button an RDP file is downloaded. This is a Remote Desktop configuration file that we can use to access our Azure virtual machine. Let’s download it to our local machine and run it. We enter the user name and password we specified when we published our application to Azure and then click OK. A certificate warning dialog may appear; this is because the certificate we use for encryption is not signed by a trusted provider. Just select OK in these cases, as we know the certificate is safe for us. Finally, the Windows Azure virtual machine appears.

    A Quick Look into the Azure Virtual Machine Let’s have a very quick look into our virtual machine. There are 3 disks available to us: C, D and E. Disk C: stores the local resources, diagnostics information, etc. Disk D: the system disk, which contains the OS, IIS, the .NET Frameworks, etc. Disk E: stores our application code. There is the IIS instance hosting our website on Azure, and the IP configuration of the Azure virtual machine.

    Summary In this post I covered one of the new features of Azure SDK 1.3 – Remote Desktop Access. We can enable the access per service, and all of the instances of this service can be accessed through the remote desktop tool. With this feature we can dig into the virtual machines of our instances to see inner information such as the system events, IIS logs, system information, etc. But we should be careful about modifying system settings, for 2 reasons that I know of for now: 1. If we have more than one instance for our service, we should ensure that all system settings we modify are applied to all instances/virtual machines. Otherwise, as the machines sit behind the Azure load-balancing proxy, our application may not work correctly due to the different settings between the instances. 2. When a virtual machine encounters a problem and needs to be moved to another physical machine, all the settings we made will disappear.

    Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to pass XML to DB using XMLTYPE

    - by James Taylor
    Probably not a common use case, but I have seen it pop up from time to time. The question: how do I take XML from a queue or web service and insert it into a DB table using XMLTYPE? In this example I create a basic table with the field PAYLOAD of type XMLTYPE. I then take the full XML payload of the web service and insert it into that database for auditing purposes. I use SOA Suite 11.1.1.2, using a composite and a mediator to link the web service with the DB adapter.

    1. Insert Database Objects

        -- Create XML_EXAMPLE_TBL
        CREATE TABLE XML_EXAMPLE_TBL (PAYLOAD XMLTYPE);

        -- Create procedure LOAD_TEST_XML
        CREATE OR REPLACE PROCEDURE load_test_xml (xmlFile IN CLOB) IS
        BEGIN
          INSERT INTO xml_example_tbl (payload) VALUES (XMLTYPE(xmlFile));
        -- Handle the exceptions
        EXCEPTION
          WHEN OTHERS THEN
            raise_application_error(-20101, 'Exception occurred in loadPurchaseOrder procedure :'||SQLERRM || ' **** ' || xmlFile );
        END load_test_xml;
        /

    2. Creating a New SOA Project TestXMLTYPE in JDeveloper In JDeveloper, either create a new Application or open an existing Application in which you want to put this work. Under File -> New -> SOA Tier -> SOA Project, provide a name for the project, e.g. TestXMLType, choose Empty Composite and click Finish.

    3. Create a Database Connection to the Stored Procedure A blank composite will be displayed. From the Component Palette drag a Database Adapter to the External References panel and configure the Database Adapter Wizard to connect to the DB procedure created above. Provide a service name, InsertXML, and select a database connection where you installed the table and procedure above.
    If it doesn't exist, create a new one. Select Call a Stored Procedure or Function, then click Next. Choose the schema you installed your procedure into in step 1 and query for the LOAD_TEST_XML procedure. Click Next for the remaining screens until you get to the end, then click Finish to complete the Database Adapter wizard.

    4. Create the Web Service Interface Download this sample schema, which will be used as the input for the web service. It does not matter what schema you use; this solution will work with any. Feel free to use your own if required: singleString.xsd. Drag the Web Service from the Component Palette to the Exposed Services panel of the composite. Provide the name InvokeXMLLoad for the service, and click the cog icon. Click the magnifying glass for the URL to browse to the location where you downloaded the XML schema above. Import the schema file by selecting the import schema icon, browse to the location where you downloaded singleString.xsd above, click OK in the Import Schema File dialog, then select the singleString node of the imported schema. Accept all the defaults until you get back to the Web Service wizard screen, then click OK. This step has created a WSDL based on the schema we downloaded earlier. Your composite should now look something like this.

    5. Create the Mediator Routing Rules Drag a Mediator component into the middle section of the composite, called Components. Give it the name Route, and accept the defaults. Link the services up to the Mediator by connecting the reference points so your composite looks like this.

    6. Perform Translations between the Web Service and the Database Adapter From the composite, double-click the Route Mediator to show the Map Plan. Select the transformation icon to create the XSLT translation file. Choose Create New Mapper File and accept the defaults. From the Component Palette drag the get-content-as-string component into the middle of the translation file. Your translation file should look something like this. Now we need to map the root element of the source 'singleString' to the XMLTYPE of the database adapter, applying the function get-content-as-string. To do this, drag the element singleString to the left side of the get-content-as-string function and drag the right side of get-content-as-string to the XMLFILE element of the database adapter, so the mapping looks like this. You have now completed the SOA component; you can now save your work, deploy and test. When you deploy, I have assumed that you have the correct database configuration in the WebLogic Console based on the connection you set up to the stored procedure.

    7. Testing the Application Open Enterprise Manager, navigate to the TestXMLTYPE composite and click the Test button. Load some dummy values in the Input Arguments and click the 'Test Web Service' button. Once completed, you can run a SQL statement to check the insert. In this instance I have just used JDeveloper and opened a SQL Worksheet. SQL statement:

        select * from xml_example_tbl;

    As a result, you should see the full payload.

    Read the article

  • Announcing Windows Azure Mobile Services

    - by ScottGu
    I’m excited to announce a new capability we are adding to Windows Azure today: Windows Azure Mobile Services Windows Azure Mobile Services makes it incredibly easy to connect a scalable cloud backend to your client and mobile applications.  It allows you to easily store structured data in the cloud that can span both devices and users, integrate it with user authentication, as well as send out updates to clients via push notifications. Today’s release enables you to add these capabilities to any Windows 8 app in literally minutes, and provides a super productive way for you to quickly build out your app ideas.  We’ll also be adding support to enable these same scenarios for Windows Phone, iOS, and Android devices soon. Read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services.  Or watch this video of me showing how to do it step by step. Getting Started If you don’t already have a Windows Azure account, you can sign up for a no-obligation Free Trial.  Once you are signed-up, click the “preview features” section under the “account” tab of the www.windowsazure.com website and enable your account to support the “Mobile Services” preview.   Instructions on how to enable this can be found here. Once you have the mobile services preview enabled, log into the Windows Azure Portal, click the “New” button and choose the new “Mobile Services” icon to create your first mobile backend.  Once created, you’ll see a quick-start page like below with instructions on how to connect your mobile service to an existing Windows 8 client app you have already started working on, or how to create and connect a brand-new Windows 8 client app with it: Read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app  that stores data in Windows Azure. Storing Data in the Cloud Storing data in the cloud with Windows Azure Mobile Services is incredibly easy.  When you create a Windows Azure Mobile Service, we automatically associate it with a SQL Database inside Windows Azure.  The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it (using secure REST end-points utilizing a JSON-based ODATA format) – without you having to write or deploy any custom server code.  Built-in management support is provided within the Windows Azure portal for creating new tables, browsing data, setting indexes, and controlling access permissions. This makes it incredibly easy to connect client applications to the cloud, and enables client developers who don’t have a server-code background to be productive from the very beginning.  They can instead focus on building the client app experience, and leverage Windows Azure Mobile Services to provide the cloud backend services they require.  Below is an example of client-side Windows 8 C#/XAML code that could be used to query data from a Windows Azure Mobile Service.  Client-side C# developers can write queries like this using LINQ and strongly typed POCO objects, which are then translated into HTTP REST queries that run against a Windows Azure Mobile Service.   
Developers don’t have to write or deploy any custom server-side code in order to enable client-side code below to execute and asynchronously populate their client UI: Because Mobile Services is part of Windows Azure, developers can later choose to augment or extend their initial solution and add custom server functionality and more advanced logic if they want.  This provides maximum flexibility, and enables developers to grow and extend their solutions to meet any needs. User Authentication and Push Notifications Windows Azure Mobile Services also make it incredibly easy to integrate user authentication/authorization and push notifications within your applications.  You can use these capabilities to enable authentication and fine grain access control permissions to the data you store in the cloud, as well as to trigger push notifications to users/devices when the data changes.  Windows Azure Mobile Services supports the concept of “server scripts” (small chunks of server-side script that executes in response to actions) that make it really easy to enable these scenarios. Below are some tutorials that walkthrough common authentication/authorization/push scenarios you can do with Windows Azure Mobile Services and Windows 8 apps: Enabling User Authentication Authorizing Users  Get Started with Push Notifications Push Notifications to multiple Users Manage and Monitor your Mobile Service Just like with every other service in Windows Azure, you can monitor usage and metrics of your mobile service backend using the “Dashboard” tab within the Windows Azure Portal. The dashboard tab provides a built-in monitoring view of the API calls, Bandwidth, and server CPU cycles of your Windows Azure Mobile Service.   You can also use the “Logs” tab within the portal to review error messages.  This makes it easy to monitor and track how your application is doing. Scale Up as Your Business Grows Windows Azure Mobile Services now allows every Windows Azure customer to create and run up to 10 Mobile Services in a free, shared/multi-tenant hosting environment (where your mobile backend will be one of multiple apps running on a shared set of server resources).  This provides an easy way to get started on projects at no cost beyond the database you connect your Windows Azure Mobile Service to (note: each Windows Azure free trial account also includes a 1GB SQL Database that you can use with any number of apps or Windows Azure Mobile Services). If your client application becomes popular, you can click the “Scale” tab of your Mobile Service and switch from “Shared” to “Reserved” mode.  Doing so allows you to isolate your apps so that you are the only customer within a virtual machine.  This allows you to elastically scale the amount of resources your apps use – allowing you to scale-up (or scale-down) your capacity as your traffic grows: With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.  This enables a super flexible model that is ideal for new mobile app scenarios, as well as startups who are just getting going.  Summary I’ve only scratched the surface of what you can do with Windows Azure Mobile Services – there are a lot more features to explore.  With Windows Azure Mobile Services you’ll be able to build mobile app experiences faster than ever, and enable even better user experiences – by connecting your client apps to the cloud. 
Visit the Windows Azure Mobile Services development center to learn more, and build your first Windows 8 app connected with Windows Azure today.  And read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember you can repeat this process with other objects and other file types. Again I am illustrating the ease of integration.

    The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs, and in this case we want to specify those ourselves). So I will create a simpleFileLoad process to house our flow. You will start with an empty canvas, so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left side Partner Links (left is input). You name the service. In this case I chose LoadFile. Press Next. We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next. We are going to choose Read File to denote that we will read the file, and specify the default Operation Name as Read. Press Next. The next step is to tell the adapter the location of the files, how to process them and what to do with them after they have been processed. I am using hardcoded locations in this example but you can have logical locations as well. Press Next. I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and, more importantly, I am telling the adapter to run the process for each record in the file it encounters. Press Next. Now, I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next.

    At this stage I have no explanation of the format of the input. So I am going to invoke the Native Format Wizard, which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard. After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example, I will use Delimited as I am going to load a CSV file. Press Next. The best way for the wizard to work is with a sample. I have a sample file and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: if you are using a character set other than US-ASCII, it is at this point you specify the character set to use. Press Next. The sample contains multiple instances of a single record type. The wizard supports complex types as well. We will use the appropriate setting for our file. Press Next. You have to specify the file element and the record element. This will be used by the input wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and User Record as my record root element. Press Next. As the file is CSV the delimiter is ",", so I will also specify that the End Of Line (EOL) indicator indicates the end of a record. Press Next. Up until this point you have not given the columns their names. In my case my sample includes the column names in the first record.
    This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next. The wizard now generates the schema for the input file. You can specify a name for the schema; I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press OK to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration. You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.

    Now you need to receive the input from the LoadFile component, so we need to place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. We link the Receive node with the LoadFile component by dragging the leftmost connection node of the Receive node to the LoadFile component. Once the link is established you need to name the Receive node appropriately and, as in the previous post in this series, you need to generate input variables for the BPEL process to hold the input records in. You now need to add the product web service. The process is the same as described in the previous post in this series. You drop the Web Service BPEL Service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.

    Now, to get the inputs from the file to the service, you have to use a Transform (you can use an Assign action, but a Transform action is more flexible). You drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node Mapper File and associate the source of the mapping with the schema from the Receive node; the output will be the input variable of the Invoke node. We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed to use the transform is that we will be telling the AS-User service that we want to issue an update action. Remember, when we registered the service we actually used Read as the default. If we do not otherwise inform the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify a transactionType of UPD (for update). The mapping is now complete. The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file, matching the pattern/name you specified, into the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console. You can now load files into the product. You can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.

    Read the article
