Search Results

Search found 1695 results on 68 pages for 'jboss messaging'.

  • The Kids Are Alright. With Facebook and SMS. But Not Twitter

    - by ultan o'broin
    I delivered a lecture to business and technology freshmen (late teens, I reckon) in Trinity College Dublin recently. I spoke about user experience in enterprise applications, trends that UX pros need to be aware of such as social media, community support, mobile and tablet platforms, and a bunch of nuances around those areas (data and device security, privacy, reputation, branding, and so on). It was all fairly high-level stuff given the audience, and I included lots of colorful screenshots. Irish-related examples helped to get the message across. During the lecture I did a quick poll. “How many students here use Twitter?” Answer: None. “How many use Facebook?” All (pretty much). So what do these guys like to use instead of Twitter? Easy - text messaging (or SMS if you like). They all had phones. Perhaps I should not have been so surprised about Twitter, but it’s always great to have research validated by some guerrilla UX research on the street. There’s already quite a bit of research about teen uptake (or lack thereof) of Twitter, telling us young adults don’t tweet. Twitter is seen as something for, er, older people. Affordable devices and data plans that allow students to text really quickly are also popular (BlackBerry, for example). Younger people just luuurve to text each other. A lot. Facebook versus Twitter for younger folks? Well, we know the story. No contest. I would love to engage more with students like these. I’ll plan for it. It will also be interesting to see if Twitter becomes more important to them over time. There were a few other interesting observations about the lack of uptake of Foursquare, Gowalla and mobile apps like that. I don’t think there’s huge uptake of these kinds of apps in Ireland anyway, but maybe students just have different priorities? I’ll return to that another day.
    Technorati Tags: Gowalla,FourSquare,Twitter,UX,user experience,user assistance,Trinity College Dublin

  • Vodacom Call Center Management on the NetBeans Platform

    - by Geertjan
    If you live in South Africa, you know about Vodacom. Vodacom is one of the dominant mobile communication companies in South Africa, and beyond, providing voice, messaging, data, and similar mobile services. Inside Vodacom there's an application named Helios, a call centre application that had its inception in 2009 and consists of two parts. Firstly, a web-based front-end that allows a call centre agent to service subscribers using a Google-like search on a knowledge base structured as a collection of FAQs. The web-based front-end uses plain old HTML + CSS + a good helping of jQuery and jQuery UI. This is delivered via JSR-168 portlets running on a cluster of IBM Portal 6 servers. In turn, the portlets communicate via RMI with several back-end EJBs containing the business logic. These EJBs are deployed on a cluster of WebLogic application servers, version 10.3.6. The second part is a NetBeans Platform application used for maintaining and constructing the knowledge base, i.e., the back-end of the web-based front-end. Helios is also used for a number of other maintenance functions, such as access permissions, user maintenance, and news bulletins. Below, in the web-based front-end, call centre agents can enter search terms and are presented with a number of FAQs from the knowledge base. Upon selecting a FAQ article, the agent is presented with the article text, the process to guide the subscriber, system checks that display information specific to the subscriber, and links to related applications and articles. Below, you can see that applications are searchable and can be accessed using the same web-based front-end as shown above. And, as can be seen below, knowledge base FAQs are maintained using the Helios Maintenance Application, which is the Vodacom application built on the NetBeans Platform. Several thousand call centre agent user accounts are administered using the Helios Maintenance Application. Below, the main FAQ page is shown, together with the About dialog. Vodacom is happy with the back-end NetBeans Platform application. However, the front-end stack runs on quite old technology. Ideally Vodacom would like to migrate the portlets to Oracle WebLogic Portal or Oracle WebCenter, but this hasn't been accomplished yet. Migrating makes sense as the rest of the application server environment consists entirely of Oracle products.

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were:
    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
    Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.
    1. Recap and Prerequisites
    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
    As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them, please see the previous samples. They also include instructions on how to verify that the objects are set up correctly.
    WebLogic Server Objects
        Object Name            Type                 JNDI Name
        TestConnectionFactory  Connection Factory   jms/TestConnectionFactory
        TestJMSQueue           JMS Queue            jms/TestJMSQueue
        eis/wls/TestQueue      Connection Pool      eis/wls/TestQueue
    Schema XSD File
    The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.
    stringPayload.xsd
        <?xml version="1.0" encoding="windows-1252" ?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.example.org"
                    targetNamespace="http://www.example.org"
                    elementFormDefault="qualified">
          <xsd:element name="exampleElement" type="xsd:string">
          </xsd:element>
        </xsd:schema>
    JMS Message
    After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:
        <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>
    JDeveloper Connection
    You will need a valid Application Server Connection in JDeveloper pointing to the SOA server to which the process will be deployed.
    2. Create a BPEL Composite with a JMS Adapter Partner Link
    In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the simplest to set up and demonstrate. But it has one disadvantage as a demonstrative model: when a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution would be to pause consumption for the queue and resume it again when needed. This can be done in the WLS console under JMS Modules -> queue -> Control -> Consumption -> Pause/Resume.) We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or an external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won’t complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded Java code, which will print the message to standard output, so we can view it in the SOA server’s log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message.
    The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one.
    Create a SOA Project
    Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite.
    Create a JMS Adapter Partner Link
    In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries:
        Service Name: JmsAdapterRead
        Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS
        AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located.
        Adapter Interface > Interface: Define from operation and schema (specified later)
        Operation Type: Consume Message
        Operation Name: Consume_message
    Consume Operation Parameters
        Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created in a previous example.
        JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)
        Messages/Message Schema URL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project, to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string.
    Press Next and Finish, which will complete the JMS Adapter configuration. Save the project.
    Create a BPEL Component
    Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema, select Template: Define Service Later and press OK.
    Wire the JMS Adapter to the BPEL Component
    Now wire the JMS adapter to the BPEL process by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist.
    This completes the steps at the composite level.
    3. Complete the BPEL Process Design
    Invoke the BPEL Flow via the JMS Adapter
    Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received.
    At this point it would actually be OK to compile and deploy the composite, and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance’s flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter, etc. But these would all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server’s log file, which is easy to monitor.
    Add a Java Embedding Activity
    First get the full name of the process’s input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example, “Receive1_Consume_Message_InputVariable > body > ns2:exampleElement”, and note the variable’s name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary:
        System.out.println("JmsAdapterReadSchema process picked up a message");
        oracle.xml.parser.v2.XMLElement inputPayload =
            (oracle.xml.parser.v2.XMLElement)getVariableData(
                                   "Receive1_Consume_Message_InputVariable",
                                   "body",
                                   "/ns2:exampleElement");
        String inputString = inputPayload.getFirstChild().getNodeValue();
        System.out.println("Input String is " + inputString);
    Tip: if you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer.
    This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server.
    4. Test the Composite
    Shut Down the JmsAdapterReadSchema Composite
    After deploying the JmsAdapterReadSchema composite to the SOA server it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first. Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0].
    Press the Shut Down button to disable the composite and confirm the following popup.
    Monitor Messages in the JMS Queue
    In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample (a rough sketch of such a drain utility is included at the end of this post). This will ensure that the queue is empty and doesn’t contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail.
    Send a Test Message
    In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”. Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console.
    Monitor the SOA Server’s Output
    A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started; otherwise, open and monitor the file to which it was redirected.
    Re-Enable the JmsAdapterReadSchema Composite
    In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message, and the following output should be written to the server’s standard output:
        JmsAdapterReadSchema process picked up a message.
        Input String is Message from JmsAdapterWriteSchema
    Note that you can also monitor the payload received by the process by navigating to the JmsAdapterReadSchema’s Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too.
    5. Troubleshooting
    This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn’t contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as
        Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc.
    check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in the previous section. If this doesn’t help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter.
    This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database.
    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery
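
    For convenience, here is a rough sketch of the kind of QueueReceive-style drain utility referred to above, written against the standard JMS 1.1 API and WebLogic JNDI. It is not the QueueReceive.java sample from Step 3 itself; the provider URL is an assumption (adjust host and port), and the JNDI names are the ones from the prerequisites table.
        import java.util.Hashtable;
        import javax.jms.*;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        // Drains and prints all messages currently in jms/TestJMSQueue.
        // Requires the WebLogic client library (e.g. wlfullclient.jar) on the classpath.
        public class DrainTestQueue {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // assumption: adjust to your server
                Context ctx = new InitialContext(env);

                QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/TestConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/TestJMSQueue");

                QueueConnection connection = qcf.createQueueConnection();
                QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueReceiver receiver = session.createReceiver(queue);
                connection.start();

                // Receive until the queue is empty (one-second timeout per attempt).
                Message msg;
                while ((msg = receiver.receive(1000)) != null) {
                    if (msg instanceof TextMessage) {
                        System.out.println("Dequeued: " + ((TextMessage) msg).getText());
                    } else {
                        System.out.println("Dequeued non-text message: " + msg.getJMSMessageID());
                    }
                }

                receiver.close();
                session.close();
                connection.close();
                ctx.close();
            }
        }
    Running it against an empty queue simply exits; running it after the JmsAdapterWriteSchema test should print the Test Message payload and leave the queue empty for the steps above.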

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements: I need to be able to send a notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail). Sometimes the notification may be the same for all recipients for a given platform, but sometimes it may be one notification per recipient (or several) per platform. Each notification can contain a platform-specific payload (for example, an MMS can contain a sound or an image). The system needs to be scalable; I need to be able to send a very large number of notifications without crashing either the application or the server. It is a two-step process: first, a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real time or later; then the system needs to send the notification to the platform provider. For now I have ended up with some thoughts, but I don't know how scalable the design will be or whether it is a good one. I've thought of the following objects (in a pseudo language), starting with a generic Notification object:
        class Notification {
            String $message;
            Payload $payload;
            Collection<Recipient> $recipients;
        }
    The problem with this is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it'll take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batches, meaning I need to define one Notification with several Recipients. Each created notification could be stored in persistent storage like a DB or Redis. Would it be a good idea to aggregate this later to make sure it is scalable? In the second step, I need to process this notification. But how could I route the notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification, or something like Notification.setType('MMS')? To allow a lot of notifications to be processed at the same time, I think a message queuing system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above? Then I imagine a NotificationProcessor object to which I could add NotificationHandlers; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I could also use an EventManager to allow pluggable behavior. Any feedback or ideas? Thanks for giving your time. Note: I'm used to working in PHP and it is likely to be the language of my choice.
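
    For what it's worth, here is a minimal Java sketch of the pluggable NotificationProcessor / NotificationHandler idea described above (Java rather than PHP purely for illustration; every class name, method and batch size below is hypothetical):
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // One handler per platform; the processor only knows the common interface.
        interface NotificationHandler {
            String platform();                  // e.g. "SMS", "EMAIL", "MMS"
            int maxBatchSize();                 // provider-imposed batch limit
            void send(String message, List<String> recipients);
        }

        class SmsHandler implements NotificationHandler {
            public String platform() { return "SMS"; }
            public int maxBatchSize() { return 100; }   // made-up limit
            public void send(String message, List<String> recipients) {
                // call the SMS provider's API here
                System.out.println("SMS to " + recipients.size() + " recipients: " + message);
            }
        }

        class NotificationProcessor {
            private final Map<String, NotificationHandler> handlers =
                    new HashMap<String, NotificationHandler>();

            void register(NotificationHandler handler) {
                handlers.put(handler.platform(), handler);
            }

            // Split recipients into provider-sized batches so one notification with a
            // huge audience becomes many small, independently processable jobs.
            void process(String platform, String message, List<String> recipients) {
                NotificationHandler handler = handlers.get(platform);
                if (handler == null) {
                    throw new IllegalArgumentException("No handler registered for " + platform);
                }
                int size = handler.maxBatchSize();
                for (int i = 0; i < recipients.size(); i += size) {
                    handler.send(message, recipients.subList(i, Math.min(i + size, recipients.size())));
                }
            }
        }
    A queue worker (RabbitMQ or otherwise) could then pull one (platform, message, recipient batch) job at a time and hand it to the processor, so no single worker ever needs the full recipient list in memory.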

  • Fast Data: Go Big. Go Fast.

    - by J Swaroop
    Cross-posting Dain Hansen's excellent recap of the Big Data/Fast Data announcement during OOW: For those of you who may have missed it, today’s second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes as well as folks just trying to get a better hair cut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle’s Cloud and more specifically Oracle’s expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle’s Cloud applications. What this means is that Oracle’s Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It’s an important element of our strategy at Oracle, supporting the idea that whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn’t enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi took an important step in the definition of this term, explaining that he believes it’s a convergence of four essential technology elements:
    Event Processing for event filtering, business rules – with Oracle Event Processing
    Data Transformation and Loading – with Oracle Data Integrator
    Real-time replication and integration – with Oracle GoldenGate
    Analytics and data discovery – with Oracle Business Intelligence
    Each of these four elements can be considered (and architect-ed) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured) leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we’re seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we’re seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using Event Processing and Big Data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.

  • Help for choosing a cost effective game server for Flash client

    - by Sapots Thomas
    I am developing a flash-based game primarily for desktops, to be hosted on the facebook platform (like Cityville, The Sims Social, etc.). The gameplay doesn't involve real-time communication between players, unlike an MMORPG; here each player plays in his own world without any knowledge of other online players. I've written almost 95% of the game logic in ActionScript on the client side. I used SmartFoxServer Pro on the server side (mostly used for getting data from the DB) and the entire server code is an extension written in Java. I'm using JSON as the protocol for communication. Although I love SmartFoxServer, as an indie it's tough for me to afford the unlimited-users license. Moreover, it's limited to just one machine. So I'm looking for an alternative to SmartFoxServer now. The reason for choosing SmartFoxServer earlier was to use the server properties supported by it. Server properties on SmartFoxServer take advantage of the socket connection and are essentially server-side objects in Java which store some data for the player, which he can change frequently during the game. And when he logs out of the game, the extension can write out the final state to the DB (I'm using MySQL). This significantly reduces the number of DB UPDATE/INSERT calls made during the game. I love the way this works since the data is secure as it's on the server side, and SmartFoxServer is known to be scalable. (Although I'm not sure whether this approach is used widely by the gaming industry or not; since this is not an MMORPG, I'm putting all players in the lobby.) So my question is whether any of the free and community-supported servers like reddwarf, firebase, BlazeDS, etc. can provide a similar architecture so that I can use server properties without many code changes? EDIT: I am not insisting on the exact same feature (that's asking too much!), but at least a viable messaging system on the server so that I can send ActionScript objects from the client using JSON/binary so that it's fast. Or maybe some completely different way to implement what I need here. Thanks in advance.
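
    To illustrate the "server property" pattern described above - per-player state held server-side for the duration of a session and flushed to MySQL only on logout - here is a rough, framework-neutral Java sketch. All class names are hypothetical; SmartFoxServer, reddwarf, BlazeDS and the like each expose their own API for this kind of thing.
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical write-behind store: frequent in-game updates touch only memory,
        // and the database is hit once per session, when the player logs out.
        class PlayerPropertyStore {
            private final Map<String, Map<String, Object>> online =
                    new ConcurrentHashMap<String, Map<String, Object>>();

            void onLogin(String playerId, Map<String, Object> loadedFromDb) {
                online.put(playerId, new ConcurrentHashMap<String, Object>(loadedFromDb));
            }

            void set(String playerId, String key, Object value) {
                Map<String, Object> props = online.get(playerId);
                if (props != null) {
                    props.put(key, value);               // no DB call during gameplay
                }
            }

            Object get(String playerId, String key) {
                Map<String, Object> props = online.get(playerId);
                return props == null ? null : props.get(key);
            }

            void onLogout(String playerId, PlayerDao dao) {
                Map<String, Object> props = online.remove(playerId);
                if (props != null) {
                    dao.saveFinalState(playerId, props); // single UPDATE/INSERT per session
                }
            }
        }

        // Placeholder for whatever persistence layer (MySQL in the question) is used.
        interface PlayerDao {
            void saveFinalState(String playerId, Map<String, Object> properties);
        }
    Whether a given server can host something like this cheaply, and keep it in memory across requests from the same Flash client, is exactly the capability worth checking before committing to a replacement.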

  • ArchBeat Link-o-Rama for 2012-06-22

    - by Bob Rhubart
    Guide to integration architecture | Stephanie Mann "The landscape of integration architecture is shifting as service-oriented and cloud-based architecture take the fore," says Stephanie Mann. "To ensure success, enterprise architects and developers are turning to lighter-weight infrastructure to support more complex integration projects." FY13 Oracle PartnerNetwork Kickoff - Tues June 26, 2012 Join us for a one-hour live online event hosted by the Oracle PartnerNetwork team as we kick off FY13. Other dates/times for EMEA/LAD/JAPAN/APAC. Click the link for details. Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6? | Ricardo Ferreira Okay, you would expect an Oracle guy to make this argument. But Ferreira takes a very deep, very detailed technical dive into the issue. So hear the man out, will ya? Hibernate4 and Coherence | Rene van Wijk According to Oracle ACE Rene van Wijk, "there are two ways to integrate Hibernate and Coherence." In this post he illustrates one of them. Simple Made Easy | Rich Hickey Rich Hickey discusses simplicity, why it is important, how to achieve it in design and how to recognize its absence in the tools, language constructs and libraries in this presentation from QCon London 2012. Starting a cluster | Mark Nelson Fusion Middleware A-Team blogger Mark Nelson looks at Oracle SOA Suite, Oracle BPM, and Oracle Coherence, three products that are "commonly clustered, and which have somewhat different requirements." Why building SaaS well means giving up your servers | GigaOM The biggest benefit to PaaS, reports GigaOM's Derrick Harris, "might be a better product because the company is able to focus on building the app rather than managing servers." Personas - what, why & how | Mascha van Oosterhout "To be able to create a successful, user-friendly website or application," says Mascha van Oosterhout, "every decision you take, whether you are part of the marketing team, the design team or the development team, should be based on what you know about the user." Thought for the Day "Machines take me by surprise with great frequency." — Alan Turing (June 23, 1912 - June 7, 1954) Source: Brainy Quote

  • ArchBeat Link-o-Rama for December 6, 2012

    - by Bob Rhubart
    Above and Beyond with the A-Team Maybe it's the coffee… If you follow this blog you've probably noticed that I regularly feature posts from members of the Oracle Fusion Middleware Architecture team, otherwise known as the A-Team. One of those bloggers, someone identified only as "fip" who writes on the A-Team SOA blog, went above and beyond on Dec 4, publishing a total of four substantial technical posts in a single day, each one worth a look: "Retrieve Performance Data from SOA Infrastructure Database"; "Configure Oracle SOA JMSAdatper to Work with WLS JMS Topics"; "How to Achieve OC4J RMI Load Balancing"; and "Using BPEL Performance Statistics to Diagnose Performance Bottlenecks". Web Service Example - Part 3: Asynchronous | The Oracle ADF Mobile Blog Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data." ADF Mobile - Implementing Reusable Mobile Architecture | Andrejus Baranovskis "Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile." Basic is Best | Eric Stephens "The world we live in and enterprises we strive to transform with enterprise architecture are complicated organisms, much like the human body," says Oracle Enterprise Architect Eric Stephens. "But sometimes a simple solution is the best approach...Whatever level of abstraction you are working at, less is more." Selling Federal Enterprise Architecture | Ted McLaughlan "EA must be 'sold' directly to the communities that matter from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration," explains Ted McLaughlan. And that's true for any organization. Avoiding the "I'm Spartacus" Scenario in SOA | Ben Wilcock "This ‘SOA Spartacus’ scenario usually occurs quite soon after SOA is articulated as the primary strategic direction of the programme," says Ben Wilcock, "but before the organisation’s SOA capability is mature enough to understand what is meant by SOA, and how it should be designed and delivered." In such cases, perhaps the "A" in SOA is missing, no? Thought for the Day "It makes me feel guilty that anybody should have such a good time doing what they are supposed to do." — Charles Eames (1907–1978) Source: SoftwareQuotes.com

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    It’s time for another certification, and we’ve just released the 70-583 exam on Windows Azure. I’ve blogged my “study plans” here before for other certifications, so I thought I would do the same for this one. I’ll also need to take exams 70-513 and 70-516, but I’ll post my notes on those separately. None of these are “brain dumps” or any questions from the actual tests - just the books, links and notes I have from my studies. I’ll update these references as I’m studying, so bookmark this site and watch my Twitter and Facebook posts for when I’ll update them, or just subscribe to the RSS feed. A “Green” color on the check-block means I’ve done that part so far, red means I haven’t.
    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I’m reading the following general programming books:
    Introducing Microsoft .NET (Pro-Developer): http://www.amazon.com/Introducing-Microsoft-Pro-Developer-David-Platt/dp/0735619182/ref=sr_1_1?s=books&ie=UTF8&qid=1296339237&sr=1-1
    Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: http://www.amazon.com/Head-First-2E-Real-World-Programming/dp/1449380344/ref=sr_1_1?ie=UTF8&qid=1296339176&sr=8-1
    Microsoft Visual C# 2008 Step by Step: http://www.amazon.com/Microsoft-Visual-2008-Step/dp/0735624305/ref=sr_1_1?s=books&ie=UTF8&qid=1296339208&sr=1-1
    [ ] The first place to start is at the official site for the certification. That’s here: http://www.microsoft.com/learning/en/us/Exam.aspx?ID=70-583&Locale=en-us
    [ ] On that page you’ll find several resources, and the first you should follow is the “Save to my learning” so you have a place to track everything. Then click the “Related Learning Plans” link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.
    [ ] Designing Data Storage Architecture (18%) - Books I’m Reading: Links: My Notes:
    [ ] Optimizing Data Access and Messaging (17%) - Books I’m Reading: Links: My Notes:
    [ ] Designing the Application Architecture (19%) - Books I’m Reading: Links: My Notes:
    [ ] Preparing for Application and Service Deployment (15%) - Books I’m Reading: Links: My Notes:
    [ ] Investigating and Analyzing Applications (16%) - Books I’m Reading: Links: My Notes:
    [ ] Designing Integrated Solutions (15%) - Books I’m Reading: Links: My Notes:

  • Windows Azure Training Kit October 2012 Release

    - by Clint Edmonson
    The Windows Azure Technical Evangelism team have been busy bees lately and we want to share with you what they’ve been working on. As you know we release the Windows Azure Training Kit on a regular cadence, so I’m pleased to announce the Windows Azure Training Kit October 2012 Release. This update of the training kit includes 47 hands-on labs, 24 demos and 38 presentations designed to help you learn how to build applications that use Windows Azure services, including updated hands-on labs to use the latest version of Visual Studio 2012 and Windows 8, plus new demos and presentations.
    Essential Links: Windows Azure Training Kit Download Windows Azure Training Kit Github [Issues]
    Updated Presentations With Speaker Notes
    Your voices were heard loud and clear! I am excited to announce Speaker Notes have been added to the majority of the content we have available. Find the new updated decks which contain speaker notes below: Foundation SQL Federation Virtual Machine Overview Virtual Networks Windows 8 and Windows Azure Web Sites Windows Azure Cloud Services Windows Azure Overview Windows Azure Service Bus Deploying Active Directory Building Apps With IaaS and PaaS Identity and Access Control Linux Virtual Machines Managing Virtual Machines PowerShell Migrating Apps and Workloads Scalable Global and Highly Available Apps Security and Identity SQL Database SQL Database Migration Cloud Service Life Cycle DevCamps Cloud Services iOS, Android and Windows Azure Windows 8 and Windows Azure Web Sites Windows 8 and Windows Azure Mobile Services
    Added Localized Content
    Due to the excitement in the community surrounding the mobile services launch, it was apparent that we needed to make localized content available to continue to deliver the exciting message around Windows Azure Mobile Services. Localized content is available in the following languages: French, Japanese, German, Chinese (Taiwan), Spanish, Italian, Korean, Portuguese (Brazilian), Russian.
    Updated Hands-On Labs
    To support those who have upgraded to Visual Studio 2012 or those trying out the Visual Studio 2012 Express Editions, we have made sure that the content is available and supported (selected labs only) in Visual Studio 2012 Express and up: Visual Studio 2012 Windows Azure Traffic Manager Introduction to Cloud Services Service Bus Messaging Introduction to Access Control Service. This adds a significant amount of additional content, so we have revamped the Hands-On Lab Navigation page to include subsections for Visual Studio 2012 Labs, Visual Studio 2010 Labs, Open Source Labs, Scenario Labs, All Labs.
    Added Demos
    Demos are available for a number of presentations which are available in Foundation, DevCamp, ITPro Event & Device + Service DevCamps. You can browse through the demos on the respective Demo Navigation page or on Github (links provided in the Demo listing below): HelloASP Connecting Cloud Services Service Bus Relay Windows 8 and Mobile Services URL Shortener iOS Client Migrating a Web Farm Deploying Active Directory URL Shortener Service (PHP) Geo-Location Service (PHP) Geo-Location Android Client Getting Started with VMs Load Balancing Availability Deploying Hybrid Apps Migrate VM AppController Geo-Location iOS Client Scale Up/Down Using CSUpload URL Shortener Android Client Imaging Virtual Machines
    The Windows Azure Training Kit is open source and available on GitHub, enabling you in the community to Report Issues or Fork and either extend the solution or commit bug fixes back to the Training Kit.
You can find out more details about the training kit from our GitHub Page, including guidelines on how to commit back to the project. Stay tuned to my twitter feed for Windows Azure and other Microsoft announcements, updates, and links: @clinted

  • ArchBeat Link-o-Rama for 11/17/2011

    - by Bob Rhubart
    Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c | Richard Rotter Richard Rotter demonstrates "how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing." Article: Social + Lean = Agile | Dave Duggal In today’s increasingly dynamic business environment, organizations must continuously adapt to survive. Change management has become a major bottleneck. Organizations need a practical mechanism for managing controlled variance and change in-flight to break the logjam. This paper provides a foundation for applying lean and agile principles to achieve Enterprise Agility through social collaboration. Stress Testing Java EE 6 Applications - Free Article in Free Java Magazine: Adam Bien "It is strange," says Adam Bien, "everyone is obsessed about green bars and code coverage, but testing of multi threaded behavior is widely ignored - until the applications run into massive problems." Using Access Manager to Secure Applications Deployed on WebLogic | Rene van Wijk Another great how-to post from Oracle ACE Rene van Wijk, this time involving JBoss RichFaces, Facelets, Oracle Coherence, and Oracle WebLogic Server. DOAG 2011 vs. Devoxx - Value and Attraction | Markus Eisele Oracle ACE Director Markus Eisele compares and contrasts these popular conferences with the aim of helping others decide which to attend. SOA All the Time; Architects in AZ; Clearing Info Integration Hurdles This week on the Architect Home Page on OTN. Webcast: Oracle Business Intelligence Mobile Event Date: Wednesday, December 7, 2011 Time: 10 a.m. PT/1 p.m. ET Featuring Manan Goel (Director BI Product Marketing, Oracle) and Shailesh Shedge (Director BI and Analytics Practice, Ascentt). Webcast: Maximum Availability on Private Clouds A discussion of Oracle’s Maximum Availability Architecture, Oracle Database 11g, Oracle Exadata Database Machine, and Oracle Database Appliance, featuring Margaret Hamburger (Director, Product Marketing, Oracle) and Joe Meeks (Director, Product Management, Oracle). November 30, 2011 at 10:00am PT / 1:00pm ET. Oracle Technology Network Architect Day - Phoenix, AZ Wednesday December 14, 2011, 8:30am - 5:00pm. The Ritz-Carlton, Phoenix, 2401 East Camelback Road, Phoenix, AZ 85016. Registration is free, but seating is limited.

  • NRF Big Show 2011 -- Part 2

    - by David Dorf
    One of the things I love about attending NRF is visiting the smaller booths to see what new innovative ideas have sprung up. After all, by watching emerging technologies we can get a sense of how the retail experience might change. After NRF I'm hoping to write a post on what I found, if anything, so be sure to check back. At the Oracle Retail booth we'll be demonstrating some of the aspects of the changing retail experience. These demos use a mix of GA and experimental components. Here are some highlights:
    1. Checkin - We wrote a consumer iPhone app we call Store Gateway that lets consumers access information from the store. They'll start by doing a checkin when they arrive that will alert the store manager via another iPhone app we wrote called Mobile Manager. Additionally, we display a welcome message using Starmount's digital sign.
    2. Receive Offers - There are three interaction points where a store can easily make an offer to a consumer: checkin, product scans, and checkout. For this demo we're calling our Universal Offer Engine at checkin to determine the best offer for this particular consumer. This offer is then displayed on the consumer's phone as well as on the digital sign.
    3. Scan Products - To thwart consumers from scanning product barcodes, we used Store Inventory Management to print QRCodes on shelf labels, then provided access to a scanner in the Store Gateway iPhone app. When the consumer scans the shelf label they are shown product information provided by the retailer.
    4. Checkout - While we don't have an NFC-enabled mobile phone, we have an NFC chip that can attach to a phone. We're using this to check out using a reader provided by ViVOTech. Tap the phone on the reader, and the POS accesses the customer#, coupons, and payment information. This really speeds up the checkout process.
    5. Digital Receipt - After the transaction is complete, a digital copy of the receipt is sent to Intuit's QuickReceipts, where consumers store all their digital receipts. There's even an iPhone app that provides easy access to the receipts.
    This covers about half of what we'll be showing, so be sure to stop by. I'll also be talking about how mobile is impacting the retail experience at the Wednesday morning session NRF Mobile Retail Initiative: a Blueprint for Action. See you at the Big Show!

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share the exciting news of a new SharePoint 2010 feature: finally, it's possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of email messages are obvious: SMS alerts or reminders received on mobile phones are preferred over email messages, which can get lost in the mass of spam. The interface is standard, as it's very similar to previous versions of the product. Adjustments are easy to make: simply enter the address of the Office Mobile Service (OMS) web service you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks whether the OMS web service is accessible using the provided URL (the user name and password are not verified). This check is needed because the OMS web-service URL depends on the mobile operator and country. It's now possible to select the method of sending alerts in the alert settings; the Email option is selected by default. The alert delivery method is displayed in the list of existing alerts.
    Office Mobile Service (OMS)
    SharePoint 2010 uses external servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started developing and promoting its own protocol instead of using existing ones. That is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to a server, which processes these messages and delivers them to mobile phones. The typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer messages, and the server in turn will deliver the answer by SMTP, i.e., by email. A key quality of this protocol is that it's built on top of HTTP(S) and SOAP. This means that, in fact, the SMS gateway must support a typed web service. What do you get from the web service? The ability to send SMS from any platform you want (see the illustrative sketch at the end of this post). The protocol is still being developed; version 0.2 from 08/28/2009 was available when this article was published. To promote the protocol and simplify finding a server, Microsoft provides the page http://messaging.office.microsoft.com/HostingProviders.aspx, which lists the providers that support the OMS protocol and message delivery to your operator. All you need to do is decide which provider to use, complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well; they offer automatic addition to the list of Office Mobile Service Providers. To view the full specification of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.
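
    Purely as an illustration of the "HTTP(S) + SOAP" point above, the Java sketch below posts a SOAP envelope to an OMS-style gateway. The endpoint URL, SOAP action, element names and namespace are placeholders invented for this example, not the real OMS 0.2 schema; the actual contract comes from the MSDN specification and your provider.
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        // Illustrative only: POSTs a placeholder SOAP envelope over HTTP(S).
        // The real OMS message schema and endpoint come from your provider.
        public class OmsStyleSmsClient {
            public static void main(String[] args) throws Exception {
                String endpoint = "https://example-oms-provider.test/omws"; // placeholder URL
                String envelope =
                    "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                    "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                    "<soap:Body>" +
                    "<SendMessage xmlns=\"urn:example:oms\">" +        // placeholder element and namespace
                    "<recipient>+15555550100</recipient>" +
                    "<text>Your SharePoint alert text</text>" +
                    "</SendMessage>" +
                    "</soap:Body>" +
                    "</soap:Envelope>";

                HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
                con.setRequestMethod("POST");
                con.setDoOutput(true);
                con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
                con.setRequestProperty("SOAPAction", "urn:example:oms/SendMessage"); // placeholder action
                OutputStream out = con.getOutputStream();
                out.write(envelope.getBytes("UTF-8"));
                out.close();
                System.out.println("Gateway responded with HTTP " + con.getResponseCode());
            }
        }
    A real client would normally be generated from the provider's service description or use a SOAP stack such as JAX-WS; the point here is only that any platform able to speak HTTPS and SOAP can drive such a gateway.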

  • BizTalk Send Ports, Delivery Notification and ACK / NACK messages

    - by Robert Kokuti
    Recently I worked on an orchestration which sent messages out to a Send Port on a 'fire and forget' basis. The idea was that once the orchestration passed the message to the Messagebox, it was left to BizTalk to manage the sending process. Should the send operation fail, the Send Port would get suspended, and the orchestration would complete asynchronously, regardless of the Send Port's success or failure. However, we still wanted to log the sending success, using the ACK / NACK messages. On normal ports, BizTalk generates ACK / NACK messages back to the Messagebox if the logical port's Delivery Notification property is set to 'Transmitted'. Unfortunately, this setting also causes the orchestration to wait for the send port's result, and should the Send Port fail, the orchestration will also receive a 'DeliveryFailureException' exception. So we may end up with a suspended port and a suspended orchestration - not the outcome we wanted here; there was no value in suspending the orchestration in our case. There are a couple of ways to fix this:
    1. Catch the DeliveryFailureException (full type name Microsoft.XLANGs.BaseTypes.DeliveryFailureException) and do nothing in the orchestration's exception block. Although this works, it still slows down the orchestration, as the orchestration still has to wait for the outcome of the send port operation.
    2. Use a Direct Port instead, and set the ACK request on the message Context prior to passing it to the port: msgToSend(BTS.AckRequired) = true; This has to be done in an expression shape, as a Direct logical port does not have a Delivery Notification property - make sure to add a reference to Microsoft.BizTalk.GlobalPropertySchemas. Setting this context value on the message will cause the messaging agent to create an appropriate ACK or NACK message after the port execution.
    The ACK / NACK messages can be caught and logged by dedicated Send Ports, filtering on the BTS.AckType value (which is either ACK or NACK). ACK/NACK messages are treated in a special way by BizTalk, and a useful feature is that the original message's context values are copied to the ACK/NACK message context - these can be used for logging the right information. Other useful context properties of the ACK/NACK messages: BTS.AckSendPortName can be used to identify the original send port; BTS.AckOwnerID (aka http://schemas.microsoft.com/BizTalk/2003/system-properties.AckOwnerID) holds the instance ID of the failed Send Port and can be used to resubmit or terminate the instance. Someone may ask: can we just turn off Delivery Notification on a 'normal' port and set the AckRequired property on the message, as for a Direct port? Unfortunately, this does not work - BizTalk seems to remove this property automatically if the message goes through a port where Delivery Notification is set to None.

  • BizTalk 2010 Certification Exam

    - by Paul Petrov
    I took a shot at the new (to me) certification exam for BizTalk 2010. I was able to pass it without any preparation, just based on experience. That does not mean this exam is a very simple one. Compared to the previous one (2006 R2), it covers some new areas (like WCF) and has some demanding questions and situations to think about. But the most challenging factor is the broad feature coverage. Overall, my impression is that if BizTalk continues to grow in scope, it would be better to create separate exams for core functionality and extended features (like EDI, RFID, LOB adapters), because it’s really hard to cover the vast array of BizTalk capabilities. As far as required knowledge and question allocation go, I think Microsoft's description is on target. There were definitely more questions on deployment, configuration and administration aspects compared to the previous exam. WCF and WCF-based adapters now play a big role, and this topic was covered well too. Extended functionality is claimed to be 13% of the exam; I felt there were plenty of RFID questions but not many on EDI, which is why I thought it would be useful to split the exam into two to cover all of them equally. BRE is still there, which is good, because it’s usually not a very well-known or well-loved feature of the package. In the end, for those who plan to get certified, my advice would be to know all of the following areas of BizTalk for a guaranteed pass: messaging and orchestrations, core adapters, routing, patterns; development of all artifacts and orchestrations; debugging and exception handling; packaging, deployment, tracking and administration; WCF bindings and adapters; BAM, BRE, RFID, EDI, etc. You may get by without knowing one smaller, non-essential part (like I did with RFID, for example). In such a case, you had better know all the other areas very well to cover for the weak spot. If there is more than one gap in your knowledge, it’s a good idea to study and prepare: MSDN, blogs, virtual labs and a good VM to play with can help when experience is not enough. So best wishes and good luck to you in passing this certification!

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an e-learning platform that hosts content from different sources (conferences, JUG meetings, etc.). There is a lot of technical content available for online as well as offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users over a cluster of 2 GlassFish instances. Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily restrain innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework." Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of NetBeans 7.3’s Project Easel is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support, over time, for other WebKit-based browsers.
    NetBeans 7.3 beta
    NetBeans 7.3 screencasts
    Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and the GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:
    Wednesday October 3rd
        8:30-9:30am    What's New in Servlet 3.1: An Overview - Parc 55 Mission
        8:30-9:30am    Bean Validation 1.1: What's New Under the Hood - Parc 55 Cyril Magnin II/III
        10:00-11:00am  JSR 353: Java API for JSON Processing - Parc 55 Mission
        10:00-12:00pm  Tutorial: Integrating Your Service into the GlassFish PaaS Platform - Parc 55 Devisidero
        11:30-12:30pm  What's New in JSF: A Complete Tour of JSF 2.2 - Parc 55 Cyril Magnin I
        11:30-12:30pm  Best of Both Worlds: Java Persistence with NoSQL and SQL - Parc 55 Mission
        1:00-2:00pm    Sharding Middleware to Achieve Elasticity and High Availability in the Cloud - Parc 55 Market Street
        1:00-2:00pm    Pimp My RESTful Java Applications - Parc 55 Cyril Magnin I
        3:00-4:00pm    Migrating Spring to Java EE - Parc 55 Cyril Magnin II/III
        4:30-5:30pm    JavaEE.Next(): Java EE 7, 8, and Beyond - Parc 55 Cyril Magnin II/III
        4:30-5:30pm    HTML5 WebSocket and Java - Parc 55 Cyril Magnin I
        4:30-5:30pm    Easy Middleware for Your Embedded Device - Nikko Ballroom II/III

    Read the article

  • WebCenter Customer Spotlight: Hitachi Data Systems

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Watch this Webcast to see a live demo of how HDS creates multilingual content for their 35+ regional websites.

    Solution Summary
    Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems, and to implement a centralized translation management system. HDS implemented a global web content management system based on Oracle WebCenter Content and integrated the Lingotek translation management system to manage their multilingual content. The implemented solution provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines.

    Company Overview
    Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. and part of the Hitachi Information Systems & Telecommunications Division. The company sells through direct and indirect channels in more than 170 countries and regions. Its customers include 50 percent of the Fortune 100 companies. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions.

    Business Challenges
    HDS has over 35 global websites, and the lack of global web capabilities led to inconsistent messaging, slower time to market, and unmet local language needs. There was extensive operational overhead due to manual and redundant processes. Translation efforts were superficial, inconsistent and wasteful, and the lack of translation automation tools discouraged localization. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems, and to implement a centralized translation management system.

    Solution Deployed
    HDS implemented a global web content management system based on Oracle WebCenter Content. The solution supports decentralized publishing for their 35+ global sites to address local market needs, while ensuring editorial and brand review through embedded review processes. They integrated the Lingotek translation management system into Oracle WebCenter Content to manage their multilingual content.

    Business Results
    - Provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines
    - Enables end-to-end content lifecycle management across multiple languages
    - Leverages translation memory for reuse and consistency
    - Reduces time to market with a central repository of translated content

    Additional Information
    HDS Webcast
    Oracle WebCenter Content
    Lingotek website

    Read the article

  • Application Integration Future &ndash; BizTalk Server &amp; Windows Azure (TechEd 2012)

    - by SURESH GIRIRAJAN
    I am really excited to see a lot of news around BizTalk at TechEd 2012. I was recently watching the session presented by Sriram and Rajesh on "Application Integration Futures: The Road Map and What's Next on Windows Azure". It was a great session with a lot of interesting material about the feature updates for BizTalk and Azure integration. I have highlighted them below. Customers who haven't started using the Microsoft BizTalk ESB Toolkit should definitely start using it, since it is going to be part of the core BizTalk product in a future release, which is cool...

    BizTalk Server feature enhancements

    Manageability:
    - The ESB Toolkit is going to be part of the core BizTalk product and setup.
    - Visualize BizTalk artifact dependencies in the BizTalk administration console.
    - HIS administration using configuration files.

    Performance:
    - Improvements in ordered send port scenarios.
    - Improved performance in dynamic send ports and ESB, plus the ability to configure a BizTalk host handler for dynamic send ports. Right now they run under the default host, which does not allow them to scale.
    - MLLP adapter enhancements and DB2 client transaction load balancing / client bulk insert.

    Platform support:
    - Visual Studio 2012, Windows 8 Server, SQL Server 2012, Office 15 and System Center 2012.
    - B2B enhancements to support the latest industry standards natively.

    Connectivity improvements:
    - Consume REST services directly in BizTalk.
    - SharePoint integration made easier.
    - Improvements to the SMTP adapter, adding macros for sending the same email with different content to different parties.
    - Connectivity to Azure Service Bus relay, queues and topics.
    - DB2 client connectivity to SQL Server and SQL Server connectivity to Informix.
    - CICS HTTP client connectivity to Windows.

    Azure:
    - Use Azure as IaaS/PaaS for the BizTalk environment.
    - Use Azure to provision a BizTalk environment for test or development, then later move on-premises or build a hybrid cloud approach.
    - Eliminate hardware procurement for BizTalk environments used for testing / demos / development, etc.
    - Create a BizTalk farm easily and remove/add servers as needed.

    EAI Service:
    - EAI Bridge: protocol transformation, message transformation, running custom code, message enrichment.
    - Hybrid connectivity: LOB applications, on-premises applications, connectivity to applications in the cloud, queues/topics, FTP, devices, web services.
    - A bridge can be customized based on the service needs to provide the different capabilities required as part of the bridge. Look at the EDI bridge for an EDI service sample, with tracking enabled through the portal: http://msdn.microsoft.com/en-us/library/windowsazure/hh674490

    Adapters:
    - Service Bus Messaging adapter - new adapter added.
    - WebHttp adapter - for REST services.
    - NetTcpRelay adapter - new adapter added.

    I will start posting more once I start playing with this...

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last Updated: 02/01/2011

    It's time for another certification, and we've just released the 70-583 exam on Windows Azure. I've blogged my "study plans" here before for other certifications, so I thought I would do the same for this one. I'll also need to take exams 70-513 and 70-516, but I'll post my notes on those separately. None of these are "brain dumps" or any questions from the actual tests - just the books, links and notes I have from my studies. I'll update these references as I'm studying, so bookmark this site and watch my Twitter and Facebook posts for when I update them, or just subscribe to the RSS feed. A "Green" color on the check-block means I've done that part so far, red means I haven't.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I'm reading the following general programming books:
    - Introducing Microsoft .NET (Pro-Developer): link
    - Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link
    - Microsoft Visual C# 2008 Step by Step: link

    [ ] The first place to start is the official site for the certification: link
    [ ] On that page you'll find several resources, and the first you should follow is "Save to my learning" so you have a place to track everything. Then click the "Related Learning Plans" link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.
    [ ] Designing Data Storage Architecture (18%) - Books I'm Reading: / Links: / My Notes:
    [ ] Optimizing Data Access and Messaging (17%) - Books I'm Reading: / Links: / My Notes:
    [ ] Designing the Application Architecture (19%) - Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform: link / Links: / My Notes:
    [ ] Preparing for Application and Service Deployment (15%) - Books I'm Reading: / Links: / My Notes:
    [ ] Investigating and Analyzing Applications (16%) - Books I'm Reading: / Links: / My Notes:
    [ ] Designing Integrated Solutions (15%) - Books I'm Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention) / Links: / My Notes:

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background
    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for CPA import when a large number of CPAs, in excess of several hundred, must be maintained within the B2B repository.

    Symptoms
    The import of a CPA is usually a 2-step process: first create a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then use b2bimport to import it into the B2B repository. The commands are provided below:

        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners in the repository goes up, the time to complete the second command can reach ~30 secs per operation. This adds up to a significant amount of time if there is a need to import hundreds of CPAs into a production system within a limited downtime or maintenance window.

    Remedy
    In situations where a large number of entries must be imported, it is best to set up a staging environment and run the import of each individual CPA against an empty repository. Since this is done in an empty repository, the time taken per import should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is then imported into a loaded repository, the total time taken to import all the CPAs sees a dramatic reduction. (A hedged sketch of the export/import commands for this step is included at the end of this note.)

    Results
    Let us look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, each individual import takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment beforehand, the import takes about ~5 mins. The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary
    The following diagram summarizes the entire approach and process.

    Acknowledgements
    The material posted here has been compiled with help from the B2B Engineering and Product Management teams.
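    As a follow-up to the Remedy section above, the staging-side export and the final production import could look roughly like the following. The b2bexport target and its flags are an assumption modeled on the import commands shown earlier and on the ant-b2b-util.xml utility; check ant -f ant-b2b-util.xml -projecthelp in your installation before relying on them:

        # Hypothetical: export the fully loaded staging repository into one file
        ant -f ant-b2b-util.xml b2bexport -Dexportfile="/tmp/all_partners.zip" -Dlocalfile=true

        # Import that single file into the production repository (flags as used above)
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="/tmp/all_partners.zip" -Doverwrite=true

    Since the single exported file carries the metadata for all partners, one import replaces hundreds of individual ~30-second operations.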

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .Net framework.

    My goal is to develop a 2D, RTS-style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file-sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in Second Life. If their attempt was a game, the game-play would be notably horrible, to say the least!

    The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning it to objects that relate 1-1 with their real-world counterparts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself.

    I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games; to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system.

    Given my background and the nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years and who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • How to create a JMS durable subscriber in WebLogic Server?

    - by lmestre
    WebLogic Server provides a set of examples that are very helpful to get started with WebLogic Server. Here you can check how to install the examples: http://docs.oracle.com/cd/E23943_01/doc.1111/e14142/prepare.htm

    After you have installed the examples, you can find the example you want to review, in this case TopicReceive, here: wlserver_10.3/samples/server/examples/src/examples/jms/topic

    To review the details of the specific example, you can open: wlserver_10.3/samples/server/examples/src/examples/jms/topic/instructions.html

    To create a durable subscriber, you just set the client ID and invoke createDurableSubscriber instead of calling createSubscriber, i.e.:

        tconFactory = (TopicConnectionFactory)
            PortableRemoteObject.narrow(ctx.lookup(JMS_FACTORY),
                                        TopicConnectionFactory.class);
        tcon = tconFactory.createTopicConnection();

        // Set the client ID for this durable subscriber
        tcon.setClientID("GT2");

        tsession = tcon.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        topic = (Topic)
            PortableRemoteObject.narrow(ctx.lookup(topicName),
                                        Topic.class);

        // Create the durable subscription, named "Test", and start receiving
        tsubscriber = tsession.createDurableSubscriber(topic, "Test");
        tsubscriber.setMessageListener(this);
        tcon.start();

    Enjoy!

    You can read more about this here:
    http://docs.oracle.com/cd/E23943_01/web.1111/e13727/advpubsub.htm#CHDEBABC
    http://docs.oracle.com/cd/E23943_01/web.1111/e13727/manage_apps.htm#i1097671
    http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13943/WebLogic.Messaging.ISession.CreateDurableSubscriber_overload_2.html

    Read the article

  • How to Export a Contact to and Import a Contact from a vCard (.vcf) File in Outlook 2013

    - by Lori Kaufman
    vCard is the abbreviation for Virtual Business Card and is the standard format (.vcf files) for electronic business cards. vCards allow you to create and share contact information over the internet, such as in email messages and instant messaging. You can also use vCards to move contact information from one email or personal information management program to another, as long as both programs support the .vcf file format. vCards can contain name and address information, as well as phone numbers, email addresses, URLs, images, and audio clips. We will show you how to export a contact to and import a contact from a vCard, or .vcf file, in Outlook.

    First, access the People section by clicking People at the bottom of the Outlook window. To view your contacts in business card format, click Business Card in the Current View section of the Home tab. Select a contact by clicking on the name bar at the top of the business card.

    To export the selected contact as a vCard, click the File tab. On the Account Information screen, click Save As in the list of options on the left. The Save As dialog box displays. By default, the name of the contact is used to name the .vcf file in the File name edit box. Change the name, if desired, select a location for the file, and click Save. The contact is saved as a .vcf file.

    To import a vCard, or .vcf file, into Outlook, simply double-click on the .vcf file. By default, .vcf files are automatically associated with Outlook, so the file is opened in Outlook as a Contact. Make any changes or additions to the contact in the contact editing window. To save the contact, click Save & Close in the Actions section of the Contact tab.

    NOTE: Notice that because this contact is new, the full contact editing window displays rather than the Contact Card that displays when double-clicking on a contact. You can open the full contact editing window instead of the Contact Card when editing a contact or searching for a contact.

    The contact is added to the Contacts folder. You can add your contact information to a signature in business card format, and it will display as shown above in emails. We have covered how to create signatures and will be discussing more about signatures and business cards in Outlook.
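    As an illustration of what ends up inside a .vcf file, here is a minimal, hypothetical vCard; the name, organization, number, and addresses are made up, and a real export from Outlook will typically contain more fields:

        BEGIN:VCARD
        VERSION:3.0
        N:Doe;Jane;;;
        FN:Jane Doe
        ORG:Example Corp
        TITLE:Account Manager
        TEL;TYPE=WORK,VOICE:+1-555-0100
        EMAIL;TYPE=INTERNET:jane.doe@example.com
        URL:http://www.example.com
        END:VCARD

    Because the format is plain text, a vCard received from another program can be inspected in any text editor before being double-clicked into Outlook.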

    Read the article

  • High traffic chat - how to check if there is new message and show it for all users

    - by user2633999
    I already had a question about this, but obviously it was not received very well; apparently it was too long, when it was actually just more information so you could give me a better answer. OK, I will be much clearer now.

    What is the best logic for developing a scalable chat in terms of stability, storing/reading messages, and updating the chat with new messages for all users? I have most of this developed; the logic I think I'm missing is how to check whether there is a new message and show it to all users. I have this implemented, but it crashes the site under its traffic of 300k-400k people, so that's my main question.

    The chat is PHP based and uses Pusher (www.pusher.com) for instant messaging, but Pusher lacks what I need because it's more like a websocket. I'm using hardcoded files to keep messages (I want to avoid a database as much as possible). It's a file with no extension, as I'm sure you know. I'm getting a crash with:

        $fp = fopen(..., "w"); // pretend ... is the path and filename
        fwrite($fp, $msg);     // hardcode the message
        fclose($fp);

    where $msg is the message itself. I'm having one file per message. I show the last 150 messages, which means 150 file opens and reads; yeah, that's too much I guess. I have a better logic now which I'm pursuing: one file holding the last 50-100 messages at all times. That should be much better.

    How does it crash? That's the trickiest part, because everything seems ordinary. Believe me, it is difficult to determine what exactly crashes the site, but within about 5 minutes, when I try to open the site, it's gone; then I put the old content without the chat back and it's online again. I have jQuery post every 1 second to check whether there is a new message. I'm using a timestamp in a special file where I keep the time the last message was sent, and if ((time() - time in file) <= 2), I reload the last 150 messages, including the last one. Too much input/output, write/read, or however you want to say it, is what I think crashes the site.
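    Here is a minimal sketch of the "one file with the last 50-100 messages" idea, assuming newline-delimited messages and using flock() to serialize concurrent writers; the file name and limits are made up for illustration:

        <?php
        // Minimal sketch: one shared file holding only the most recent messages.
        define('CHAT_FILE', __DIR__ . '/chat_messages');  // hypothetical storage file
        define('MAX_MESSAGES', 100);                      // keep the last N messages only

        function append_message($msg) {
            $fp = fopen(CHAT_FILE, 'c+');                 // create if missing, do not truncate
            if ($fp === false) {
                return false;
            }
            if (flock($fp, LOCK_EX)) {                    // exclusive lock: one writer at a time
                $lines = array();
                rewind($fp);
                while (($line = fgets($fp)) !== false) {
                    $lines[] = rtrim($line, "\n");
                }
                $lines[] = str_replace("\n", ' ', $msg);  // one message per line
                $lines = array_slice($lines, -MAX_MESSAGES);
                ftruncate($fp, 0);                        // rewrite the trimmed list
                rewind($fp);
                fwrite($fp, implode("\n", $lines) . "\n");
                fflush($fp);
                flock($fp, LOCK_UN);
            }
            fclose($fp);
            return true;
        }

        function read_messages() {
            $fp = fopen(CHAT_FILE, 'r');                  // readers take a shared lock briefly
            if ($fp === false) {
                return array();
            }
            flock($fp, LOCK_SH);
            $data = stream_get_contents($fp);
            flock($fp, LOCK_UN);
            fclose($fp);
            return array_filter(explode("\n", $data));
        }

    With this layout, the 1-second jQuery poll only ever touches one file, and the timestamp check you already have can stay in front of it so most polls return without reading the message file at all.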

    Read the article

  • IPC in C under linux

    - by poly
    I'm building a messaging solution with the following setup: all the messages are saved in a DB, two or more reader processes read from this DB and send data to other process(es), which send it over the network. My approach is depicted below; it has 4 sender processes with 4 FIFOs, and 2 readers with 2 FIFOs.

    reader0: reads data from DB
    reader1: reads data from DB

    sending part
    network_handler0 <- network_handler_fifo0 <- reader0
    network_handler1 <- network_handler_fifo1 <- reader1
    network_handler2 <- network_handler_fifo2 <- reader0
    network_handler3 <- network_handler_fifo3 <- reader1

    receiving part
    network_handler0 -> reader_fifo0 -> reader0 -> write to DB
    network_handler1 -> reader_fifo1 -> reader1 -> write to DB
    network_handler2 -> reader_fifo0 -> reader0 -> write to DB
    network_handler3 -> reader_fifo1 -> reader1 -> write to DB

    I have a few problems with this setup, and please note that the number of processes could be larger depending on the environment, so it could be 20 readers and 10 network_handlers, or it could be as shown above.

    1. The FIFO buffer size is 64K and the message size is 200K. Is this small enough to make the write/read to/from the FIFO atomic?
    2. How can I make the processes aware of each other? For example, reader0 writes to network_handler_fifo0 and network_handler_fifo2; how can I make it start writing to other FIFOs if the current ones are full or their network_handlers are dead?
    3. I thought about making the reader process more general in its writing, so that, for example, it writes to all network FIFOs using a lock mechanism and stops writing to the one whose process has died. I didn't use this because a lock mechanism could slow things down.

    BTW, each network_handler is an SCTP association, so network_handler0 is association 0, network_handler1 is association 1 and so on. Any idea is appreciated, even if I have to change the setup above.
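    On the atomicity question, the following generic sketch may help; the FIFO path and the length-prefix framing are illustrative assumptions, not part of the setup above. POSIX only guarantees that a single write() to a FIFO is atomic up to PIPE_BUF bytes (typically 4096 on Linux), so 200K messages from multiple writers into the same FIFO can interleave unless each FIFO has exactly one writer or the writers coordinate:

        /* fifo_write.c - sketch: query the atomic write limit of a FIFO and send
         * one length-prefixed message with a single write() call. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            const char *path = "/tmp/reader_fifo0";       /* hypothetical FIFO name */
            int fd = open(path, O_WRONLY);                /* blocks until a reader opens it */
            if (fd < 0) {
                perror("open");
                return EXIT_FAILURE;
            }

            /* Largest write guaranteed not to be interleaved with other writers. */
            long atomic_limit = fpathconf(fd, _PC_PIPE_BUF);
            printf("atomic write limit for %s: %ld bytes\n", path, atomic_limit);

            const char *msg = "hello from network_handler0";  /* stand-in for a 200K payload */
            uint32_t len = (uint32_t)strlen(msg);

            /* Send length prefix + payload in one write() so the reader can re-frame
             * the byte stream. If sizeof(len) + len exceeds PIPE_BUF and several
             * processes write to this FIFO, the kernel may interleave their bytes. */
            size_t total = sizeof(len) + len;
            char *buf = malloc(total);
            if (buf == NULL) {
                close(fd);
                return EXIT_FAILURE;
            }
            memcpy(buf, &len, sizeof(len));
            memcpy(buf + sizeof(len), msg, len);

            ssize_t written = write(fd, buf, total);
            if (written < 0)
                perror("write");
            else if ((size_t)written != total)
                fprintf(stderr, "short write: %zd of %zu bytes\n", written, total);

            free(buf);
            close(fd);
            return EXIT_SUCCESS;
        }

    In the receiving part above, reader_fifo0 and reader_fifo1 each have two writers, so that is where the PIPE_BUF limit matters most; on the sending side each network_handler_fifo has a single writer, where short or blocking writes are the main concern rather than interleaving.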

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >