Search Results

Search found 31200 results on 1248 pages for 'field service'.


  • VirtualBox appliance for the Oracle Communications Service Delivery Platform (SDP) Products

    - by chlander
    It's been quite a while since we last blogged. This blog is written by Leif Lourie, a Curriculum Developer for the Oracle Communications Service Delivery Platform (SDP) products. For the last 8 years, Leif has worked as a Curriculum Developer for many of the telecom-oriented products that Oracle offers. He has been working in the telecom industry for about 25 years and has also worked as a software developer, project manager, and solutions architect. He is currently working on courseware for an upcoming release for one of the Service Delivery Platform products. Thanks to Leif not only for this blog, but for making the VM described in the blog available. Cheryl Lander, Oracle Communications InfoDev Senior Director To be able to download, install and test a product within a day is often very important for the people doing the primary evaluation of a software product. If it takes longer, it will require a bigger effort, like a proof-of-concept project with many people involved. Of course, if the product is chosen for a more thorough test, it will probably happen anyway, but then maybe with a focus on integration instead of product features. We have a long tradition of creating complex software that is easy to install and test, and we have often been praised for the ease of getting our products up and running. One key to this has been that there has always been an installer for Windows, as well as for the production environments, which are usually Unix and Linux. And the Windows installer has, in most cases, been released for development and testing purposes. Lately, this has changed. Our products are very seldom released for the Windows platform at all. And even the Linux versions are almost always released for 64-bit systems. This creates problems for many of the people who want to try out our products, since few have access to a 64-bit Linux system of the right platform. Most of us are using a laptop with Windows or Mac OS. Some of us are using Linux or Solaris, but probably a non-certified distribution for the product you want to test. My job, among other things, is to develop hands-on practices for our products. For me, it is crucial to have access to environments for installing and using our products. For this reason I have been using virtual machines for many years. I have a ready-made base system, with the necessary tools installed for all the products I create hands-on practices for. Whenever I start working on hands-on practices for a new product or a new version, I just copy the base system and start working with a clean slate. This saves me a lot of time! Now, I would like to start saving time for my favorite student: you! If you are using our products and regularly test new versions, you might benefit from the virtual machine that is now available on Oracle Technology Network: the Virtual Machine for the Oracle Communications Service Delivery Platform (SDP) Products. This virtual machine contains an installation of the 64-bit version of Oracle Enterprise Linux, version 6. It also has Oracle Database Express Edition (XE), Oracle Java and Oracle Enterprise Pack for Eclipse installed. By using Oracle VM VirtualBox you may use Windows, OS X, Linux or Solaris on your laptop. VirtualBox can be installed on top of any of these platforms and gives you the ability to run virtual machines on your laptop.
    After downloading and starting the virtual machine you will also need to download the installation files for the product you want to test; for example, Oracle Communications Services Gatekeeper or Oracle Communications Online Mediation Controller. In some cases there are lessons and practices available for the products. The freely available courses are listed in the Oracle Learning Library as a Collection of Oracle Communications Service Delivery Platform Courses. As time goes by, we will make this collection bigger. Also, the goal is to update the virtual machine about one to two times per year, so you will always be able to get a well-maintained virtual machine for the Service Delivery Platform products from us. We Value Your Feedback If you would like to suggest improvements or report issues on any of the product documentation, curriculum, or training produced by the Oracle Communications Information Development team, you can use these channels: Email [email protected]. Post a comment on this blog. Thanks for reading!

    Read the article

  • How do I pass a custom field to a hook (Invision Power Board [ipb] / PHP)

    - by Julian Young
    A long shot, but here's hoping someone has some experience coding PHP hooks for the Invision Power Board forum. I'm attempting to code a status addition and the PHP works fine on its own; it's the passing of IPB's reference to my hook that is the issue. I.e., you set up a custom field in your forum for MSN Username, then from within a skin / template hook you pass the custom field to the hook and then use your PHP code to check on the status. Here is the IPB skin code I am hooking into on Global-userInfoPane... <if test="authorcfields:|:$author['custom_fields'] != """> <foreach loop="customFieldsOuter:$author['custom_fields'] as $group => $data"> <foreach loop="customFields:$author['custom_fields'][ $group ] as $field"> <if test="$field != ''"> <li> {$field} </li> </if> </foreach> </foreach> </if> Although I could easily add my own skin hook here, i.e. <if test="myHookHere:|:1===1"></if> Literally all I need is a single custom field entry from here passed to my hook. If I query every member when the hook is run, that will result in many extra SQL queries per page view. All I want to do is pass that specific custom field to the hook... i.e. myHookHere( $customfield['msn_username'] ) Is this possible? How do you reference the custom field? Can I execute pure PHP from here? Appreciate anyone that can help! I tried the official Invision forums but haven't had much luck.

    Read the article

  • Variable field in a constraint annotation

    - by Javi
    Hello, I need to create a custom constraint annotation which can access the value of another field of my bean. I'll use this annotation to validate the field, because its validity depends on the value of the other field, but with the way I have defined it the compiler says that the value for the annotation attribute of my field "must be a constant expression". I've defined it in this way: @Target(ElementType.FIELD) @Retention(RetentionPolicy.RUNTIME) @Constraint(validatedBy=EqualsFieldValidator.class) @Documented public @interface EqualsField { public String field(); String message() default "{com.myCom.annotations.EqualsField.message}"; Class<?>[] groups() default {}; Class<? extends Payload>[] payload() default {}; } public class EqualsFieldValidator implements ConstraintValidator<EqualsField, String>{ private EqualsField equalsField; @Override public void initialize(EqualsField equalsField) { this.equalsField = equalsField; } @Override public boolean isValid(String thisField, ConstraintValidatorContext arg1) { //my validation } } and in my bean I want something like this: public class MyBean{ private String field1; @EqualsField(field=field1) private String field2; } Is there any way to define the annotation so the field value can be a variable? Thanks

    Read the article

  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
    Introduction I’m currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface some, but not all, messaging operations can participate in transactions. This allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model: transactions can only include one top-level messaging entity (such as a queue or topic; subscriptions are not top-level entities), and transactions cannot include other systems, such as databases. As the transaction model is currently not well documented, I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I’ve got the content mostly correct; I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I’ve not had a chance to look into the code for transactions and asynchronous operations; maybe that would make a nice challenge lab for my Windows Azure Service Bus course. Transactional Messaging Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope, and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems. Using TransactionScope In .NET the TransactionScope class can be used to perform a series of actions in a transaction. The using declaration is typically used to define the scope of the transaction. Any transactional operations that are contained within the scope can be committed by calling the Complete method. If the Complete method is not called, any transactional methods in the scope will not commit.   // Create a transactional scope. using (TransactionScope scope = new TransactionScope()) {     // Do something.       // Do something else.       // Commit the transaction.     scope.Complete(); }     In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions. Transactions in Brokered Messaging Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however there are some limitations around what operations can be performed within the transaction. In the current release, only one top-level messaging entity, such as a queue or topic, can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions spanning a messaging entity and a database not possible. When sending messages, the send operations can participate in a transaction allowing multiple messages to be sent within a transactional scope. This allows for “all or nothing” delivery of a series of messages to a single queue or topic. When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release the Abandon method will not participate in a transaction.
    The same restriction of only one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic. Sending Multiple Messages in a Transaction A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages will be enqueued or, if the transaction fails to commit, no messages will be enqueued.     An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.   QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);   Console.Write("Sending");   // Create a transaction scope. using (TransactionScope scope = new TransactionScope()) {     for (int i = 0; i < 10; i++)     {         // Send a message         BrokeredMessage msg = new BrokeredMessage("Message: " + i);         queueClient.Send(msg);         Console.Write(".");     }     Console.WriteLine("Done!");     Console.WriteLine();       // Should we commit the transaction?     Console.WriteLine("Commit send 10 messages? (yes or no)");     string reply = Console.ReadLine();     if (reply.ToLower().Equals("yes"))     {         // Commit the transaction.         scope.Complete();     } } Console.WriteLine(); messagingFactory.Close();     The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent the user has the option to either commit the transaction or abandon the transaction. If the user enters “yes”, the Complete method is called on the scope, which will commit the transaction and result in the messages being enqueued. If the user enters anything other than “yes”, the transaction will not commit, and the messages will not be enqueued. Receiving Multiple Messages in a Transaction The receiving of multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of related messages, maybe in the same message session, it is possible to receive the messages in the peek-lock receive mode, and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, abandon is not transactional.)   The following code shows how this can be achieved. using (TransactionScope scope = new TransactionScope()) {       while (true)     {         // Receive a message.         BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));         if (msg != null)         {             // Write the message body and complete the message.             string text = msg.GetBody<string>();             Console.WriteLine("Received: " + text);             msg.Complete();         }         else         {             break;         }     }     Console.WriteLine();       // Should we commit?     Console.WriteLine("Commit receive? (yes or no)");     string reply = Console.ReadLine();     if (reply.ToLower().Equals("yes"))     {         // Commit the transaction.         scope.Complete();     }     Console.WriteLine(); }     Note that if there are a large number of messages to be received, there is a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller batches of messages within the transaction.
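    As a rough illustration of specifying that longer timeout, the TransactionScope constructor accepts a TransactionScopeOption together with a TimeSpan. The sketch below is not from the original article; it simply reuses the q1Client from the receive example above and assumes an arbitrary five-minute timeout, so treat it as a starting point rather than a recommended setting.

        // Requires System.Transactions and the Service Bus brokered messaging types.
        // Create a transaction scope with an explicit five-minute timeout
        // instead of relying on the default transaction timeout.
        using (TransactionScope scope = new TransactionScope(
            TransactionScopeOption.Required, TimeSpan.FromMinutes(5)))
        {
            // Receive and complete a single message inside the longer-lived scope.
            BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
            if (msg != null)
            {
                Console.WriteLine("Received: " + msg.GetBody<string>());
                msg.Complete();
            }

            // Commit the transaction.
            scope.Complete();
        }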
    It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities, this scenario will work. The following code shows how this can be achieved. try {     using (TransactionScope scope = new TransactionScope())     {         // Receive one message from each subscription.         BrokeredMessage msg1 = subscriptionClient1.Receive();         BrokeredMessage msg2 = subscriptionClient2.Receive();           // Complete the message receives.         msg1.Complete();         msg2.Complete();           Console.WriteLine("Msg1: " + msg1.GetBody<string>());         Console.WriteLine("Msg2: " + msg2.GetBody<string>());           // Commit the transaction.         scope.Complete();     } } catch (Exception ex) {     Console.WriteLine(ex.Message); }     Unsupported Scenarios The restriction of only one top-level messaging entity being able to participate in a transaction makes some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may not be present in future releases. The first is the scenario where messages are to be routed to two different systems. The following code attempts to do this.   try {     // Create a transaction scope.     using (TransactionScope scope = new TransactionScope())     {         BrokeredMessage msg1 = new BrokeredMessage("Message1");         BrokeredMessage msg2 = new BrokeredMessage("Message2");           // Send a message to Queue1         Console.WriteLine("Sending Message1");         queue1Client.Send(msg1);           // Send a message to Queue2         Console.WriteLine("Sending Message2");         queue2Client.Send(msg2);           // Commit the transaction.         Console.WriteLine("Committing transaction...");         scope.Complete();     } } catch (Exception ex) {     Console.WriteLine(ex.Message); }     The results of running the code are shown below. When attempting to send a message to the second queue the following exception is thrown: No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.   Another scenario where transactional support could be useful is when forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported.   Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premise database. In the current release this is not supported.   Workarounds for Unsupported Scenarios There are some techniques that developers can use to work around the one top-level entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, then the subscriptions would use the default filters and the client would only send one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction.   
In scenarios where a message needs to be received and then forwarded to another system within the same transaction topics and subscriptions can also be used. A message can be received from a subscription, and then sent to a topic within the same transaction. As a topic is a top level messaging entity, and a subscription is not, this scenario will work.

    Read the article

  • How to call Office365 web service in a Console application using WCF

    - by ybbest
    In my previous post, I showed you how to call the SharePoint web service using a console application. In this post, I’d like to show you how to call the same web service in the cloud, aka Office365. Office365 uses claims authentication, as opposed to the Windows authentication used for a normal in-house SharePoint deployment. For details, you can see Wictor’s post on this here. The key to making it work is to understand that when you authenticate from Office365, you get your authentication token. You then need to pass this token to your HTTP request as a cookie to make the web service call. Here is the code sample to make it work. I have modified Wictor’s code by removing the client object references. static void Main(string[] args) { MsOnlineClaimsHelper claimsHelper = new MsOnlineClaimsHelper( "[email protected]", "YourPassword","https://ybbest.sharepoint.com/"); HttpRequestMessageProperty p = new HttpRequestMessageProperty(); var cookie = claimsHelper.CookieContainer; string cookieHeader = cookie.GetCookieHeader(new Uri("https://ybbest.sharepoint.com/")); p.Headers.Add("Cookie", cookieHeader); using (ListsSoapClient proxy = new ListsSoapClient()) { proxy.Endpoint.Address = new EndpointAddress("https://ybbest.sharepoint.com/_vti_bin/Lists.asmx"); using (new OperationContextScope(proxy.InnerChannel)) { OperationContext.Current.OutgoingMessageProperties[HttpRequestMessageProperty.Name] = p; XElement spLists = proxy.GetListCollection(); foreach (var el in spLists.Descendants()) { //System.Console.WriteLine(el.Name); foreach (var attrib in el.Attributes()) { if (attrib.Name.LocalName.ToLower() == "title") { System.Console.WriteLine("> " + attrib.Name + " = " + attrib.Value); } } } } System.Console.ReadKey(); } } You can download the complete code from here. Reference: Managing shared cookies in WCF How to do active authentication to Office 365 and SharePoint Online

    Read the article

  • Service Level Logging/Tracing

    - by Ahsan Alam
    We all love to develop services, right? First-timers want to learn technologies like WCF and Web Services. Some simply want to build services; whereas others may find services a natural architectural decision for particular systems. Whatever the reason might be, services are commonly used in building a wide range of systems. Developers often encapsulate various functionality (small or big) within one or more services and expose them to multiple applications. Sometimes from day one (and definitely over time) these services may evolve into a set of black boxes. Services or not, black boxes or not, issues and exceptions are sometimes hard to avoid, especially in highly evolving and transactional systems. We can try to be methodical with our unit testing, QA and overall process; but we may not be able to avoid some types of system issues. When issues arise from one or more highly transactional services, it becomes necessary to resolve them very quickly. When systems handle thousands of transactions in a matter of hours, some issues may not surface immediately. That is when service-level logging becomes very useful. Technologies such as WCF allow us to enable service-level tracing with minimal effort, but that may not provide us with the complete picture. Developers may need to add tracing within critical areas of the code with various degrees of verbosity. Programmers can always utilize a logging framework such as the 'Logging Application Block' to get the job done. It may seem like overkill sometimes, but I have noticed from my experience that service-level logging helps programmers trace many issues very quickly.
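    To make the idea of tracing critical areas concrete, here is a minimal sketch (not from the original post) using System.Diagnostics.TraceSource; the service name, operation and order id are invented for illustration, and a real system might just as well use the Logging Application Block or another framework as mentioned above.

        using System;
        using System.Diagnostics;

        public class OrderService
        {
            // One TraceSource per service; listeners and verbosity are
            // normally configured in app.config / web.config.
            private static readonly TraceSource Trace = new TraceSource("OrderService");

            public void SubmitOrder(int orderId)
            {
                Trace.TraceEvent(TraceEventType.Information, 0, "SubmitOrder started for order {0}", orderId);
                try
                {
                    // ... critical business logic would go here ...
                    Trace.TraceEvent(TraceEventType.Verbose, 0, "Order {0} validated", orderId);
                }
                catch (Exception ex)
                {
                    // Log the failure with full exception detail, then rethrow.
                    Trace.TraceEvent(TraceEventType.Error, 0, "SubmitOrder failed for order {0}: {1}", orderId, ex);
                    throw;
                }
            }
        }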

    Read the article

  • Calculate Age using Date Field

    - by BRADINO
    So if you have a database table that stores dates of birth (DOB) as date fields, this is an easy way to query that table based on age parameters. The following examples assume that the date of birth field is dob and the table name is people. Find people who are 30 years old SELECT DATE_FORMAT( FROM_DAYS( TO_DAYS( now( ) ) - TO_DAYS( `dob` ) ) , '%Y' ) +0 AS `age` FROM `people` HAVING `age` = 30 Find people who are 31-42 years old SELECT DATE_FORMAT( FROM_DAYS( TO_DAYS( now( ) ) - TO_DAYS( `dob` ) ) , '%Y' ) +0 AS `age` FROM `people` HAVING `age`>= 31 AND `age` <= 42 Find oldest person SELECT MAX(DATE_FORMAT( FROM_DAYS( TO_DAYS( now( ) ) - TO_DAYS( `dob` ) ) , '%Y' ) +0) AS `age` FROM `people` Find youngest person SELECT MIN(DATE_FORMAT( FROM_DAYS( TO_DAYS( now( ) ) - TO_DAYS( `dob` ) ) , '%Y' ) +0) AS `age` FROM `people`

    Read the article

  • When Do OS Questions Belong on Hardware Service Requests?

    - by Get Proactive Customer Adoption Team
    My Oracle Support—Logging an Operating System Service Request One of the concerns we hear from our customers with Premier Support for Systems is that they have difficulty logging a Service Request (SR) for an operating system issue. Because Premier Support for Systems includes support for the hardware and the associated operating system, you log any operating system issues through a hardware Service Request. To create a hardware Service Request, you enter the information into the Hardware tab of the Create Service Request screen, but to ensure that the hardware Service Request you enter is recognized and routed appropriately for an operating system issue, you need to change the product from your specific hardware to the operating system that the hardware is running. The example below shows you how to create a Service Request for the operating system when the support level is Premier Support for Systems. The key to success is remembering that the operating system coverage is part of the hardware support. To begin, from anywhere within My Oracle Support, click on the Create SR button as you would to log any SR: Enter your Problem Summary and the Problem Description. Next, click on the Hardware tab. Enter the System Serial Number (in this case “12345”) and click on Validate Serial Number: Notice that the product name for the hardware indicates “Sunfire T2000 Server” with an option for a drop-down List of Values. Click on the product drop-down and choose the correct operating system from the list. In this case I have chosen “OpenSolaris Operating System”. Next, you will need to enter the correct operating system version: At this point, you may proceed to complete and submit the Service Request. If your company has Premier Support for Systems, just remember that your operating system has coverage under the hardware it runs on, so start with the Hardware tab on the Create Service Request screen and change the product-related information to reflect the operating system you need help with. Following these simple steps will ensure that the system assigns your Service Request to the right support team for an operating system issue and that the support engineer can quickly begin working on your issue.

    Read the article

  • UPK 3.6.1 Enablement Service Pack 1

    - by marc.santosusso
    UPK 3.6.1 Enablement Service Pack 1 is now available on My Oracle Support as Patch ID 9533920 (requires a My Oracle Support account). Below is a list of the enhancements included in this Enablement Service Pack. Tabbed Gateway Users now have the option to deliver multiple help resources through the in-application support using UPK's new tabbed gateway. This feature is managed using the Configuration Utility for In-Application Support. This feature is documented in the In-Application Support Guide. Firefox 3.6 The latest release of Mozilla Firefox, version 3.6, is now supported by the UPK Player, SmartHelp browser add-on, and SmartMatch recording technology. Oracle E-Business Suite -- Added support for version 12.1.2 for enhanced object and context recognition. -- The UPK PLL is no longer needed for Oracle versions 12.1.2 and higher. Agile PLM Agile PLM version 9.3 supported for enhanced object recognition. Customer Needs Management Customer Needs Management schema 1.0.014 is supported for context recognition. Siebel CRM Siebel CRM (On Premise) versions 8.2, 8.1.1.2, 8.0.0.9, and 8.1.1 build 21112 (in addition to the previously supported build 21111) supported for enhanced object and context recognition. SAP SAP GUI for HTML version 7.10 patch 16 supported for enhanced object and context recognition. CA -- CA Clarity PPM version R12.5 supported for context recognition. -- CA Service Desk version R12.5 supported for context recognition. Java Added support for Java 6 update 12

    Read the article

  • NTP service, offset increasing after sync

    - by Ajay
    I have installed Ubuntu 12.10 on my PC. I am running the NTP service, with GPS as the NTP server. I found that when we start the NTP service with the ntp start command, the PC is able to sync with GPS, as I get the '*' symbol before the GPS IP when I run the ntpq -p command. This remains good for some time, and then the * symbol is removed, which means that the PC is not synchronized to that server. Now, running ntpq -p shows that all parameters are OK, but as '*' is removed, the offset slowly goes on increasing. remote refid st t when poll reach delay offset jitter ============================================================================== *192.168.100.33 .GPS. 1 u 7 16 1 2.333 23.799 0.808 remote refid st t when poll reach delay offset jitter ============================================================================== *192.168.100.33 .GPS. 1 u 14 16 3 2.333 23.799 0.879 remote refid st t when poll reach delay offset jitter ============================================================================== *192.168.100.33 .GPS. 1 u 11 16 7 2.333 23.799 1.500 remote refid st t when poll reach delay offset jitter ============================================================================== *192.168.100.33 .GPS. 1 u 8 16 17 2.333 23.799 2.177 Below are the last 4 ntpq statuses when sync is lost with GPS: ============================================================================== 192.168.100.33 .GPS. 1 u 1 16 377 2.404 1169.94 1.735 remote refid st t when poll reach delay offset jitter ============================================================================== 192.168.100.33 .GPS. 1 u - 16 377 2.513 1171.80 0.898 remote refid st t when poll reach delay offset jitter ============================================================================== 192.168.100.33 .GPS. 1 u 15 16 377 2.513 1171.80 0.898 Even though GPS is still available, the PC never re-synchronizes itself to GPS later on. I have to restart the ntp service, and then the PC synchronizes to GPS and the '*' symbol appears again.

    Read the article

  • Security in a private web service

    - by Oni
    I am developing a web site and a web service for a small on-line game. Technically, I'll be using Express (node.js) and MongoDB+Redis for the databases. This is the structure I came up with: One Express server that will serve as the Web Service. This will connect to the databases. One Express server that will provide the web site. It will connect to the Web Service to retrieve and push the information. iOS and Android applications will be able to interact with the Web Service. Taking into account: It is a small game. The information transferred is not critical. There will NOT be third-party applications. At least for the moment. My concern is about which level of security I should use in each of the scenarios: Security of the user playing through a web browser. Security of the applications and the Web Server connecting to the WS. I have taken a look at the different options and: OAuth and/or HTTPS is too much for this scenario, isn't it? Would it be a good option to hash the user and password with MD5 (or similar) and some salt? I would like to get some directions and investigate on my own rather than getting a response like "you should use this node.js module..." Thanks in advance,

    Read the article

  • How Service Component Architecture (SCA) Can Be Incorporated Into Existing Enterprise Systems

    After viewing Rob High’s presentation “The SOA Component Model” hosted on InfoQ.com, I can foresee how Service Component Architecture (SCA) can be incorporated into an existing enterprise. According to IBM’s DeveloperWorks website, SCA is a set of conditions which outline a model for constructing applications/systems using a Service-Oriented Architecture (SOA). In addition, SCA builds on open standards such as Web services. In the future, I can easily see how some large IT shops could potentially divide development teams or work groups into Component/Data Object Groups and Standard Development Groups. The Component/Data Object Group would work only on creating and maintaining components that are reused throughout the entire enterprise. The Standard Development Group would work on new and existing projects that incorporate the use of various components to accomplish various business tasks. In my opinion the incorporation of SCA into any IT department will initially slow down the development of new features due to the time needed to create the new, loosely-coupled components. However, once a company becomes more mature in its SCA process, the number of program features developed will greatly increase. I feel this is due to the fact that the loosely-coupled components needed in order to add the new features will already be built and ready to incorporate into any new development feature request. References: BEA Systems, Cape Clear Software, IBM, Interface21, IONA Technologies PLC, Oracle, Primeton Technologies Ltd, Progress Software, Red Hat Inc., Rogue Wave Software, SAP AG, Siebel Systems, Software AG, Sun Microsystems, Sybase, TIBCO Software Inc. (2006). Service Component Architecture. Retrieved 11 27, 2011, from DeveloperWorks: http://www.ibm.com/developerworks/library/specification/ws-sca/ High, R. (2007). The SOA Component Model. Retrieved 11 26, 2011, from InfoQ: http://www.infoq.com/presentations/rob-high-sca-sdo-soa-programming-model

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a Service Oriented Architectural design approach into their existing enterprise systems, it creates the need for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of ESB, a new emphasis is now being placed on approaches that can be used to determine what Web services should be built. In addition, what order should these services be built? In May 2011, SOA Magazine published an article that identified 10 common methods for identifying and defining services. SOA’s Ten Common Methods for Service Identification and Definition: Business Process Decomposition Business Functions Business Entity Objects Ownership and Responsibility Goal-Driven Component-Based Existing Supply (Bottom-Up) Front-Office Application Usage Analysis Infrastructure Non-Functional Requirements  Each of these methods provides various pros and cons in regards to their use within the design process. I personally feel that during a design process, multiple methodologies should be used in order to accurately define a design for a system or enterprise system. Personally, I like to create a custom cocktail derived from combining these methodologies in order to ensure that my design fits with the project’s and business’s needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, Component-Based, and routinely use them in my designs.  Works Cited Hubbers, J.-W., Ligthart, A., & Terlouw , L. (2007, 12 10). Ten Ways to Identify Services. Retrieved from SOA Magazine: http://www.soamag.com/I13/1207-1.php Progress.com. (2011, 10 30). ESB ARCHITECTURE AND LIFECYCLE DEFINITION. Retrieved from Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems that we sometimes get. As per Brian's post, I am installing Visual Studio Team Foundation Server Service Pack 1 first, and indeed, as this is a single-server local deployment, I need to install both. If I only install one it will leave the other product broken. Figure: Hopefully this will be more uneventful It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time. Figure: There are a lot of components to update With this update also comes an update to .NET as well as many other components. Figure: I downloaded the full 1.5GB, but you could do a web install How long the download takes depends on how good your internet connection is, but as I am now in the US I decided not to trust the internet connection speeds. It took around 30-40 minutes to download the full thing, which is a little slow. Figure: I did not need to download, but that would increase the install time So on my main computer again this was fast, but again on my netbook this took a little while. Figure: The actual install took around 30-40 minutes (2 hours on netbook) I was pretty impressed with the speed of the install, and as Team Explorer is now out of the box with Visual Studio 2010 I don't get the problem of the SP being installed before Team Explorer and having a disjointed experience. Figure: As I suspected, no problems with the install Figure: Checking in Visual Studio shows that all the servicing points were successful This was an easy experience even if the SP was over 1.5GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not seeing holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

    Read the article

  • Planning and Budgeting Cloud Service - Partner Webcast

    - by Mike.Hallett(at)Oracle-BI&EPM
    Please join us for a 90-minute live Partner Webcast which will provide an overview of the upcoming Oracle Planning and Budgeting Cloud Service (PBCS) offering, on Tuesday, 26th November, 2013 at 5:00 pm CET / 4:00 pm UK. Look out for the joining URL and instructions in my November Newsletter, coming soon. As a reminder, there was also a Partner Webcast recorded in August 2013 about PBCS which included a demo. Replay link here. Topics include: latest news from Product Management; live demo; overview of assets and collateral; Q&A session. Oracle Planning and Budgeting Cloud Service (PBCS) offers organizations the market-leading Oracle Hyperion Planning and Budgeting solution delivered via Oracle’s public cloud service.

    Read the article

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information at runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found in DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short and at the bottom you will find the following entry: <dumpConfiguration>     <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/> </dumpConfiguration> The interval of 10800 seconds corresponds to the 3 hours and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.

    Read the article

  • ODEE Green Field (Windows) Part 2 - WebLogic

    - by AndyL-Oracle
    Welcome back to the next installment on how to install Oracle Documaker Enterprise Edition onto a green field environment! In my previous post, I went over some basic introductory information and we installed the Oracle database. Hopefully you've completed that step successfully, and we're ready to move on - so let's dive in! For this installment, we'll be installing WebLogic 10.3.6, which is a prerequisite for ODEE 12.3 and 12.2. Prior to installing the WebLogic application server, verify that you have met the software prerequisites. Review the documentation – specifically you need to make sure that you have the appropriate JDK installed. There are advisories if you are using JDK 1.7. These instructions assume you are using JDK 1.6, which is available here. The first order of business is to unzip the installation package into a directory location. This procedure should create a single file, wls1036_generic.jar. Navigate to and execute this file by double-clicking it. This should launch the installer. Depending on your User Account Control rights you may need to approve running the setup application. Once the installer application opens, click Next. Select your Middleware Home. This should be within your ORACLE_HOME. The default is probably fine. Click Next. Uncheck the Email option. Click Yes. Click Next. Click Yes Click Yes and Yes again (yes, it’s quite circular). Check that you wish to remain uninformed and click Continue. Click Custom and Next. Uncheck Evaluation Database and Coherence, then click Next. Select the appropriate JDK. This should be a 64-bit JDK if you’re running a 64-bit OS. You may need to browse to locate the appropriate JAVA_HOME location. Check the JDK and click Next. Verify the installation directory and click Next. Click Next. Allow the installation to progress… Uncheck Run Quickstart and click Done.  And that's it! It's all quite painless - so let's proceed on to set up SOA Suite, shall we? 

    Read the article

  • Emailing Service: To or Bcc?

    - by Shelakel
    I'm busy coding a reusable e-mail service for my company. The e-mail service will be doing quite a few things via injection through the strategy pattern (such as handling e-mail send-rate throttling, switching between SMTP, AmazonSES, or Google AppEngine e-mail clients when daily quotas are exceeded, and send-statistics tracking (mostly because it is necessary in order to stay within quotas), to name a few). Because e-mail sending will need to be throttled and other limitations exist (e.g. the max recipient quota on AmazonSES limiting recipients to 50 per send), the e-mails typically need to be broken up. From your experience, would it be better to send bulk (multiple recipients per e-mail) or a single e-mail per recipient? The implication of the above is that to send to 1000 recipients, with a limit of 50 per send, you would send 20 e-mails using BCC in a newsletter scenario. When sending an e-mail per recipient, it would send 1000 e-mails. E-mail sending is asynchronous (due to the inherent latency when sending, it's typically only possible to send 5 e-mails per second unless you are using multiple clients asynchronously). Edit Just for full disclosure, this service won't be used by or sold to spammers and will as far as possible automatically comply with national and international laws. Closed: Thanks for all the valuable feedback. The concerns regarding compliance with laws, user experience (generic vs. personalized unsubscribe) and spam regulation via ISP blacklisting do make To the preferred, and possibly the only, choice when sending system-generated e-mails to recipients.
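    For illustration only, the batching step described above (1000 recipients, 50 per send, 20 e-mails) might look roughly like the sketch below; RecipientBatcher and SendWithBcc are invented names, with SendWithBcc standing in for whatever client (SMTP, Amazon SES, etc.) the strategy pattern resolves at runtime.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class RecipientBatcher
        {
            // Split the recipient list into batches no larger than maxPerSend,
            // so each batch can go out as a single BCC'd message.
            public static IEnumerable<List<string>> Batch(IList<string> recipients, int maxPerSend)
            {
                for (int i = 0; i < recipients.Count; i += maxPerSend)
                {
                    yield return recipients.Skip(i).Take(maxPerSend).ToList();
                }
            }
        }

        // Usage: 1000 recipients with a 50-recipient limit yields 20 batches.
        // foreach (List<string> batch in RecipientBatcher.Batch(allRecipients, 50))
        //     emailClient.SendWithBcc(message, batch);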

    Read the article

  • How can I set up .NET unhandled exception handling in a Windows service?

    - by Mike Pateras
    protected override void OnStart(string[] args) { AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException); Thread.Sleep(10000); throw new Exception(); } void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e) { } I attached a debugger to the above code in my windows service, setting a breakpoint in CurrentDomain_UnhandledException, but it was never hit. The exception pops up saying that it is unhandled, and then the service stops. I even tried putting some code in the event handler, in case it was getting optimized away. Is this not the proper way to set up unhandled exception handling in a windows service?

    Read the article

  • Great Customer Service Example

    - by MightyZot
    A few days ago I wrote about what I consider a poor customer service interaction with TiVo, a company that I have been faithful to for the past 12 years or so. In that post I talked about how they helped me, but I felt like I was doing something wrong at the end of the call – when in reality I was just following through with an offer that TiVo made possible through my cable company. Today I had a wonderful customer service interaction with American Express, another company that I have been loyal to for many years.(I am a Gold Card member.) I like my Amex card because I can use it for big purchases and it forces me to pay them off at the end of the month. Well, the reality is that I’m not always so good at doing that, so sometimes my payments are over a couple of months.  :) A few days ago I received an email from “American Express” fraud detection. The email stated that I should call a toll free number and have the last four digits of my card handy. I grew up during the BBS era with some creative and somewhat mischievous friends. I’ve learned to be extremely cautious with regard to my online life! So, I did what you would expect…I sent them a nice reply that said “Go screw yourself.” For the past couple of days someone has been trying to call me and I assumed it was the same prankster trying to get the last four digits of my card. The last caller left a message indicating that they were from American Express and they wanted to talk to me about my card. After looking up their customer service numbers on the www.americanexpress.com web site, I called and was put through to the fraud detection group. The rep explained that there were some charges on my wife’s card that did not fit our purchase profile. She went through each charge and, for the most part, they looked like charges my wife may have made. My wife had asked to use the card for some Christmas shopping during the same timeframe as the charges. The American Express rep very politely explained that these looked out of character to her. She continued through the charges. She listed a charge for $160 – at this point my adrenaline started kicking in. My wife said she was going to charge about $25 or $30 dollars, not $160. Next, the rep listed a charge for over $1200. Uh oh!! Now I know that my account has been compromised. I informed the rep that we definitely did not make those charges. She replied with, “that’s ok Mr Pope, we declined those charges as well as some others.” We went through the pending charges and there were a couple more that were questionable. The rep very patiently waited while I called my wife on my office phone to verify the charges. Sure enough, my wife had not ordered anything from Netflix or purchased anything with Yahoo Wallet! “No problem Mr Pope, we will remove those charges as well.” “We are going to cancel your wife’s card and send her a new one. She will receive it by 7pm tomorrow via Federal Express. Please watch your statements over the next couple of months. If you notice anything fishy, give us a call and we will take care of it for you.” (Wow, I’m thinking to myself!) “Is there anything else I can help you with Mr Pope?” “Nope, thank you very much for catching this so early and declining those charges!”, I said smiling. Apparently she could hear me smiling on the other end of the phone line because she replied with “keep smiling Mr Pope and have a good rest of your week.” Now THAT’s customer service!  Thank you American Express!!! I shall remain an ever faithful customer. Interesting…

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities in their weekend – like traveling, reading or just nothing. Every weekend I try to do something creative and different in the database world. The goal is I learn something new and if I enjoy my learning experience I share with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them. I should highlight that today’s applications use multiple databases from SQL for transactions and analytics, NoSQL for documents, In-Memory for caching to Indexing for search.  Provisioning and deploying these databases often require extensive expertise and time.  Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases.  Not to mention the different quality of service based on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with traditional database setup when hosted in the cloud: Database provisioning & orchestration Slow speed due to hardware issues Poor Monitoring Tools High network latency Now if you have a great software and expert network engineer, you can continuously work on above problems and overcome them. However, not every organization have the luxury to have top notch experts in the field. Now above issues are related to infrastructure, but there are a few more problems which are related to software/application as well. Here are the top three things which can be problems if you do not have application expert: Replication and Clustering Simple provisioning of the hard drive space Automatic Sharding Well, Morpheus looks like a product build by experts who have faced similar situation in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases from MySQL, MongoDB, ElasticSearch to Reddis as a service.  Thus users can pick and chose any combination of these databases.  All of them can be provisioned in a matter of minutes with a simple and intuitive point and click user interface.  The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions.  In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases. Here are the few steps on how one can get started with Morpheus. Follow along with me.  First go to http://www.gomorpheus.com and register for a new and free account. Step 1: Signup It is very simple to signup for Morpheus. Step 2: Select your database   I use MySQL for my daily routine, so I have selected MySQL. Upon clicking on the big red button to add Instance, it prompted a dialogue of creating a new instance.   Step 3: Create User Now we just have to create a user in our portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to User Screen. Over here you will notice once again a big red button to create a new user. I created a user with my first name.   Step 4: Configure your MySQL client I used MySQL workbench and connected to MySQL instance, which I had created with an IP address and user.   That’s it! You are connecting to MySQL instance. Now you can create your objects just like you would create on your local box. 
You will have all the features of the Morpheus when you are working with your database. Dashboard While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature.  Also with Morpheus you use the same process for provisioning and connecting with other databases: MongoDB, ElasticSearch and Reddis. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Culture Shmulture?

    - by steve.diamond
    I've been thinking about "Customer Experience Management" lately. Here at Oracle, we arguably have the most complete suite of applications for managing the customer experience across and in the context of multiple channels -- from marketing to loyalty to contact center to self-service to analytics offerings, and more. And stay tuned, because in coming months let's just say we'll have even more to talk about on this front. But that said............ Last weekend my wife and I stayed at one of the premiere hotel chains on the planet. I won't name them, but we all know the short list. It could have been the St. Regis or the Ritz Carlton or Four Seasons or Hyatt Park or....This stay, at this particular hotel, was simply outstanding. Within a chain known for providing "above and beyond" levels of service, this particular hotel, under this particular manager, exceeded expectations on so many fronts. For example, at the Spa we mentioned to the two attendants that my wife is seven months pregnant and that we had previously had a lot of trouble conceiving. We then went to our room. Ten minutes later we heard a knock at the door and received a plate of chocolate covered strawberries with a heartfelt note and an inspiring quote, signed by the two spa attendees. The following day we arranged to have a bellhop drive us to the beach. Although they had a pre-arranged beach shuttle service with time limits, etc., he greeted us by saying, "I'm yours for the day until 4 p.m. Whatever you want to do is fine by me, as long as it's legal!" The morning that we left we arranged to have a taxi drive us to the airport--a nearly 40 mile drive. What showed up was a private coach complete with navy blue suited driver dude. And we were charged the taxi fare price. And there were many other awesome exchanges I won't mention here, although I did email the GM of this hotel two nights ago and expressed our effusive praise and gratitude. I'd submit that this hotel chain would have a definitive advantage using even more Oracle software to manage and optimize its customer interactions (yes, they are a customer). But WITHOUT the culture--that management team--and that instillation of aligned values across all employees of exemplifying 'the golden rule,' I wonder how much technology really matters in providing a distinctively positive and memorable customer experience. Lest you think I'm alone in these pontifications, have you read Paul Greenberg's blog lately? Have you seen one of his most recent posts? Now this SPECIFIC post is NOT about customer service per se. But it is about people. So yes, please think long and hard about the technology you seek to deploy. But never forget who will be interacting with your systems, and your customers.

    Read the article

  • Why is there no service-oriented language?

    - by Wolfgang
    Edit: To avoid further confusion: I am not talking about web services and such. I am talking about structuring applications internally; it's not about how computers communicate. It's about programming languages, compilers and how the imperative programming paradigm is extended. Original: In the imperative programming field, we saw two paradigms in the past 20 years (or more): object-oriented (OO), and service-oriented (SO), aka component-based (CB). Both paradigms extend the imperative programming paradigm by introducing their own notion of modules. OO calls them objects (and classes) and lets them encapsulate both data (fields) and procedures (methods) together. SO, in contrast, separates data (records, beans, ...) from code (components, services). However, only OO has programming languages which natively support its paradigm: Smalltalk, C++, Java and all other JVM-compatibles, C# and all other .NET-compatibles, Python etc. SO has no such native language. It only comes into existence on top of procedural languages or OO languages: COM/DCOM (binary, C, C++), CORBA, EJB, Spring, Guice (all Java), ... These SO frameworks clearly suffer from the missing native language support for their concepts. They start using OO classes to represent services and records. This leads to designs where there is a clear distinction between classes that have methods only (services) and those that have fields only (records). Inheritance between services or records is then simulated by inheritance of classes. Technically, it's not kept so strictly, but in general programmers are advised to make classes play only one of the two roles. They use additional, external languages to represent the missing parts: IDLs, XML configurations, annotations in Java code, or even embedded DSLs like in Guice. This is especially needed, but not limited to, because the composition of services is not part of the service code itself. In OO, objects create other objects, so there is no need for such facilities, but for SO there is, because services don't instantiate or configure other services. They establish an inner-platform effect on top of OO (early EJB, CORBA) where the programmer has to write all the code that is needed to "drive" SO. Classes represent only a part of the nature of a service and lots of classes have to be written to form a service together. All that boilerplate is necessary because there is no SO compiler which would do it for the programmer. This is just like some people did it in C for OO when there was no C++. You just pass the record which holds the data of the object as a first parameter to the procedure which is the method. In an OO language this parameter is implicit and the compiler produces all the code that we need for virtual functions etc. For SO, this is clearly missing. Especially the newer frameworks extensively use AOP or introspection to add the missing parts to an OO language. This doesn't bring the necessary language expressiveness but avoids the boilerplate code described in the previous point. Some frameworks use code generation to produce the boilerplate code. Configuration files in XML or annotations in OO code are the source of information for this. Not all of the phenomena that I mentioned above can be attributed to SO, but I hope it clearly shows that there is a need for an SO language. Since this paradigm is so popular, why isn't there one? Or maybe there are some academic ones, but at least the industry doesn't use one.
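    To illustrate the "classes playing only one of the two roles" observation with a concrete and deliberately simplified sketch in C#, the names below are invented for the example: the record carries only data, the service carries only behaviour, and composing services is left to an external container or configuration rather than to the classes themselves.

        // A "record": data only, no behaviour of its own.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // A "service": behaviour only, no domain state of its own.
        public interface ICustomerService
        {
            Customer FindById(int id);
        }

        public class CustomerService : ICustomerService
        {
            public Customer FindById(int id)
            {
                // Lookup logic would live here; how this service is wired to
                // other services is decided outside the class, e.g. by a
                // container or an XML/annotation-based configuration.
                return new Customer { Id = id, Name = "example" };
            }
        }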

    Read the article

  • ODEE Green Field (Windows) Part 3 - SOA Suite

    - by AndyL-Oracle
     So you're still here, are you? I'm sure you're probably overjoyed at the prospect of continuing with our green field installation of ODEE. In my previous post, I covered the installation of WebLogic - you probably noticed, like I did, that it's a pretty quick install. I'm pretty certain this had everything to do with how quickly the next post made it to the internet! So let's dig in. Make sure you've followed the steps from the initial post to obtain the necessary software and prerequisites! Unpack the RCU (Repository Creation Utility). This ZIP file contains a directory (rcuHome) that should be extracted into your ORACLE_HOME. Run the RCU – execute rcuHome/bin/rcu.bat. Click Next. Select Create and click Next. Enter the database connection details and click Next – any failure to connection will show in the Messages box. Click Ok Expand and select the SOA Infrastructure item. This will automatically select additional required components. You can change the prefix used, but DEV is recommended. If you are creating a sandbox that includes additional components like WebCenter Content and UMS, you may select those schemas as well but they are not required for a basic ODEE installation. Click Next. Click OK. Specify the password for the schema(s). Then click Next. Click Next. Click OK. Click OK. Click Create. Click Close. Unpack the SOA Suite installation files into a single directory e.g. SOA. Run the installer – navigate and execute SOA/Disk1/setup.exe. If you receive a JDK error, switch to a command line to start the installer. To start the installer via command line, do Start?Run?cmd and cd into the SOA\Disk1 directory. Run setup.exe –jreLoc < pathtoJRE >. Ensure you do not use a path with spaces – use the ~1 notation as necessary (your directory must not exceed 8 characters so “Program Files” becomes “Progra~1” and “Program Files (x86)” becomes “Progra~2” in this notation). Click Next. Select Skip and click Next. Resolve any issues shown and click Next. Verify your oracle home locations. Defaults are recommended. Click Next. Select your application server. If you’ve already installed WebLogic, this should be automatically selected for you. Click Next. Click Install. Allow the installation to progress… Click Next. Click Finish. You can save the installation details if you want. That should keep you satisfied for the moment. Get ready, because the next posts are going to be meaty! 

    Read the article
