Search Results



  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility over computing resources: we can provision and deploy an application or service with multiple instances across multiple machines. As the number of service instances grows, balancing the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass incoming messages to the service instances; I like to call them dispatcher mode and pulling mode.

    Dispatcher Mode

    Dispatcher mode introduces a role that is responsible for finding the best service instance to process each request. The image below describes the shape of this mode. Four clients communicate with the service through the underlying transport; for example, if we are using HTTP, the clients might all connect to the same service URL. On the server side there's a dispatcher listening on this URL and retrieving all messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for finding the instance:

    Round-robin: The dispatcher always sends the message to the next instance. For example, if the dispatcher sent the previous message to instance 2, the next message will be sent to instance 3, regardless of whether instance 3 is busy at that moment.

    Random: The dispatcher picks a service instance at random and, as with round-robin, ignores whether the instance is busy.

    Sticky: The dispatcher sends all related messages to the same service instance. This approach is typically used when the service methods are stateful or session-based.

    As you can see, none of these approaches is really load balanced. The clients send messages at any time, and each message may take a different processing duration on the server side. This means that in some cases some service instances are very busy while others are almost idle. For example, with round-robin it can happen that most of the simple task messages are passed to instance 1 while the complex ones are sent to instance 3, even when instance 1 is idle. This brings several problems to our architecture. First, the response to the clients might be slower than it should be: as shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the round-robin rule. Second, if many requests arrive from the clients in a very short period, the service instances might fill up with pending tasks and some instances might crash. Third, if we host our service instances on a cloud platform such as Windows Azure, computing resources are billed by deployment time rather than actual CPU usage, so any idle service instance is wasting our money. Finally, the dispatcher becomes the bottleneck of the system, since all incoming messages must be routed through it. If we are using HTTP or TCP as the transport, the dispatcher is effectively a network load balancer, and if we want more capacity we have to scale it up or buy a very expensive hardware load balancer, in addition to scaling out the service instances.
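    To make the round-robin mechanism above concrete, here is a minimal sketch of a dispatcher-side selector (the class and member names are only illustrative). It simply cycles through a fixed set of instance addresses and, as described above, pays no attention to how busy each instance currently is:

    public class RoundRobinDispatcher
    {
        private readonly string[] _instanceAddresses;
        private readonly object _lock = new object();
        private int _next = -1;

        public RoundRobinDispatcher(string[] instanceAddresses)
        {
            _instanceAddresses = instanceAddresses;
        }

        // Always returns the next instance in order, ignoring current load.
        public string SelectInstance()
        {
            lock (_lock)
            {
                _next = (_next + 1) % _instanceAddresses.Length;
                return _instanceAddresses[_next];
            }
        }
    }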
    Pulling Mode

    Pulling mode doesn't need a dispatcher to route the messages. All service instances listen on the same transport, and each idle instance tries to retrieve the next suitable message to process.

    Since there is no dispatcher in pulling mode, it places some requirements on the transport. The transport must support multiple clients and multiple listening servers; HTTP and TCP don't allow multiple servers to listen on the same address and port, so they cannot be used for pulling mode directly. All messages in the transport must be FIFO, meaning an older message must be received before a newer one. Message selection is a plus: it lets both the service and the client specify selection criteria and receive only certain kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel shapes; without it we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

    A message bus, or message queue, is the best candidate transport for pulling mode. First, it allows multiple applications to listen on the same queue, and it is FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ; others provide an in-memory dictionary that can store the reply messages, for example Redis.

    The principle of pulling mode is to let the service instances manage themselves: each instance retrieves the next pending incoming message as soon as it has finished its current task. This gives us several benefits and solves the problems we met in dispatcher mode. Each incoming message is received by the best-placed instance to process it, so the load is well balanced; it will not happen that some instances are busy while others are idle, because the idle ones simply retrieve more tasks. Since all instances keep themselves as busy as possible, we can use fewer instances than in dispatcher mode, which is more cost effective. Since there is no dispatcher in the system, there is no bottleneck. When we introduce more service instances, in dispatcher mode we have to change something so the dispatcher knows about the new instances, but in pulling mode all service instances are self-managed, so no extra change is needed at all. And if many incoming messages arrive at once, the message bus queues them, so the service instances will not crash.

    Those are the benefits of pulling mode, but it introduces some problems as well. Process tracking and debugging become more difficult: because the service instances are self-managed, we cannot know in advance which instance will process a given message, so we need more information to support debugging and tracking. Real-time response may not be supported: every service instance processes the next message only after the current one is done, so if we have real-time requirements this may not be a good solution.

    Comparing the pros and cons above, pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost effectiveness and self-management.

    WCF and WCF Transport Extensibility

    Windows Communication Foundation (WCF) is a framework for building service-oriented applications, and in the .NET world it is the standard way to implement services. In this series I'm going to demonstrate how to implement pulling mode on top of a message bus by extending WCF. I don't want to dive into every related area of WCF, but I will highlight its transport extensibility.
    When we implement an RPC foundation there are many aspects to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message passes through all necessary channels and is finally sent over the underlying transport; on the other side, the message is received from the transport and travels back up through the same kinds of channels to the business logic. This model is called the "channel stack" in WCF, and the last channel in the stack must always be a transport channel, which is responsible for sending and receiving messages. Since we are going to implement WCF over a message bus with a pulling-mode scale-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.

    Before we dig into the transport channel, let's look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines three basic MEPs: datagram, request-reply and duplex.

    Datagram: Also known as one-way or fire-and-forget. The message is sent from the client to the service and no reply is needed; the client doesn't care about the result at all.

    Request-Reply: The most commonly used pattern. The client sends a request message to the service and waits until the reply message comes back.

    Duplex: The client sends a message to the service, and while processing it the service can call back to the client. During the callback the service acts like a client and the client acts like a service.

    In WCF, each MEP has associated channel interfaces:

    Datagram: IInputChannel, IOutputChannel
    Request-Reply: IRequestChannel, IReplyChannel
    Duplex: IDuplexChannel

    The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, and the TransportBindingElement is created by the Binding, which can be defined as a new binding or composed from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figure below shows the transport channel objects when using the request-reply MEP. And this is the datagram MEP. And this is the duplex MEP.

    After investigating the WCF transport architecture, channel model and MEPs, we can finally identify what we need to build for our message bus based transport layer:

    Binding: (Optional) Defines the channel elements in the channel stack, with our transport binding element at the bottom of the stack. We could also use the built-in CustomBinding instead.

    TransportBindingElement: Defines which MEPs our transport supports and creates the related ChannelListener and ChannelFactory. It also defines the scheme of endpoints that use this transport.

    ChannelListener: Creates the server-side channel for a given MEP. We can have one ChannelListener that creates channels for all supported MEPs, or one ChannelListener per MEP. In this series I will use the second approach.

    ChannelFactory: Creates the client-side channel for a given MEP. Likewise, we can have one ChannelFactory for all supported MEPs, or one per MEP. In this series I will use the second approach.
    Channels: Based on the MEPs we want to support, we implement the channels accordingly. For example, if we want our transport to support request-reply, we must implement IRequestChannel and IReplyChannel. In this series I will implement all three MEPs listed above, one by one.

    Scaffolding: To make our transport extension work we also need some scaffolding: classes to send and receive messages through our message bus, code to read and write WCF messages, and so on. These pieces are not strictly necessary, but they are very useful in our example.

    Message Bus

    Only one thing remains before we can begin implementing our scale-out WCF transport: the message bus itself. As I mentioned above, the message bus must have certain features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, an enterprise message bus product, but as I said before, any message bus product that satisfies our requirements will do. Here I'd like to introduce an interface that separates the message bus from WCF, which allows us to implement the bus operations on top of whatever bus we choose. The interface looks like this:

    public interface IBus : IDisposable
    {
        string SendRequest(string message, bool fromClient, string from, string to = null);

        void SendReply(string message, bool fromClient, string replyTo);

        BusMessage Receive(bool fromClient, string replyTo);
    }

    There are only three methods on the bus interface. Let me explain them one by one.

    SendRequest is responsible for sending a request message into the bus. Its parameters are:

    message: The WCF message content.
    fromClient: Indicates whether this message came from the client side.
    from: The ID of the channel this message was sent from. A channel ID is generated whenever any kind of channel is created, which I will explain in the following articles.
    to: The ID of the channel that should receive this message. In the request-reply and duplex MEPs this is necessary, since a reply message must be received by the channel that sent the related request.

    SendReply is responsible for sending a reply message. It is very similar to SendRequest but has no "from" parameter, because a reply never needs to be replied to again in any MEP.

    Receive is responsible for waiting for an incoming message, either a request message or a specific reply message. It returns a BusMessage object, which carries the channel information along with the content. The code of the BusMessage class is:

    public class BusMessage
    {
        public string MessageID { get; private set; }
        public string From { get; private set; }
        public string ReplyTo { get; private set; }
        public string Content { get; private set; }

        public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content)
        {
            MessageID = messageId;
            From = fromChannelId;
            ReplyTo = replyToChannelId;
            Content = content;
        }
    }
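    Before moving on to an implementation, here is a quick sketch of how the three methods are meant to work together in a request-reply exchange (the channel IDs are just illustrative, and everything runs sequentially in a single process against the in-process bus shown below):

    IBus bus = new InProcMessageBus();

    // Client side: send a request tagged with this channel's ID.
    string clientChannelId = Guid.NewGuid().ToString();
    bus.SendRequest("<wcf request message>", true, clientChannelId);

    // Service side: wait for any pending client request, then address
    // the reply back to the channel the request came from.
    BusMessage request = bus.Receive(true, null);
    bus.SendReply("<wcf reply message>", false, request.From);

    // Client side again: wait for the reply addressed to this channel.
    BusMessage reply = bus.Receive(false, clientChannelId);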
    Now let's implement a message bus based on the IBus interface. Since I don't want you to have to buy and install TIBCO EMS or any other message bus product, I will implement an in-process memory bus. This bus is for test and sample purposes only, and it can only be used when the service and client are in the same process. Very straightforward:

    using System;
    using System.Collections.Concurrent;
    using System.Linq;
    using System.Threading;

    public class InProcMessageBus : IBus
    {
        private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue;
        private readonly object _lock;

        public InProcMessageBus()
        {
            _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>();
            _lock = new object();
        }

        public string SendRequest(string message, bool fromClient, string from, string to = null)
        {
            var entity = new InProcMessageEntity(message, fromClient, from, to);
            _queue.TryAdd(entity.ID, entity);
            return entity.ID.ToString();
        }

        public void SendReply(string message, bool fromClient, string replyTo)
        {
            var entity = new InProcMessageEntity(message, fromClient, null, replyTo);
            _queue.TryAdd(entity.ID, entity);
        }

        public BusMessage Receive(bool fromClient, string replyTo)
        {
            InProcMessageEntity e = null;
            while (true)
            {
                lock (_lock)
                {
                    var entity = _queue
                        .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To)))
                        .FirstOrDefault();
                    if (entity.Key != Guid.Empty && entity.Value != null)
                    {
                        _queue.TryRemove(entity.Key, out e);
                    }
                }
                if (e == null)
                {
                    Thread.Sleep(100);
                }
                else
                {
                    return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content);
                }
            }
        }

        public void Dispose()
        {
        }
    }

    The InProcMessageBus stores the messages as InProcMessageEntity objects, which carry some extra information besides the WCF message itself:

    public class InProcMessageEntity
    {
        public Guid ID { get; set; }
        public string Content { get; set; }
        public bool FromClient { get; set; }
        public string From { get; set; }
        public string To { get; set; }

        public InProcMessageEntity()
            : this(string.Empty, false, string.Empty, string.Empty)
        {
        }

        public InProcMessageEntity(string content, bool fromClient, string from, string to)
        {
            ID = Guid.NewGuid();
            Content = content;
            FromClient = fromClient;
            From = from;
            To = to;
        }
    }

    Summary

    OK, now all the necessary pieces are ready; the next step is implementing our WCF message bus transport extension. In this post I described two scale-out approaches on the service side, especially relevant when using a cloud platform: dispatcher mode and pulling mode, and I compared their pros and cons. Then I introduced the WCF channel stack, the channel model and the transport extensibility points, and identified what we need to do to create our own WCF transport extension that lets our WCF services use pulling mode on top of a message bus. Finally, I provided some classes that will be used in future posts, working against an in-process memory message bus for demonstration purposes only. In the next post I will begin to implement the transport extension step by step.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Innovation and the Role of Social Media

    - by Brian Dirking
    A very interesting post by Andy Mulholland of Capgemini this week – “The CIO is trapped between the CEO wanting innovation and the CFO needing compliance” – made many interesting points: “A successful move in one area won’t be recognized and rapidly implemented in other areas to multiply the benefits, or worse unsuccessful ideas will get repeated adding to the cost and time wasted. That’s where the need to really address the combination of social networking, collaboration, knowledge management and business information is required.” Without communicating what works and what doesn’t, the innovations of our organization may be lost and the failures repeated. That makes sense. If you liked Andy Mulholland’s blog post, you need to hear Howard Beader’s presentation at the Enterprise 2.0 Conference on innovation and the role of social media. (Howard will be speaking in the Market Leaders Session at 1 PM on Wednesday, June 22nd.) Some of the thoughts Howard will share include:

    • Innovation is more than just ideas; it’s getting ideas to market and removing the obstacles that stand in the way
    • Innovation is about parallel processing – you can’t remove the obstacles one by one because you will get to market too late
    • Innovation can be about product innovation, but it can also be about process innovation

    This brings us to the second issue Andy raises: “..the need for integration with, and visibility of, processes to understand exactly how the enterprise functions and delivers on its policies…” Andy goes on to talk about this from the perspective of compliance and the CFO’s concerns. And it’s true: innovation can come in product innovation, but also in internal process innovation, and process innovation can have as much impact as product innovation. New supply chain models can disrupt an industry overnight. Many people ignore process innovation as a benefit of social business because it is perceived as a bottom-line rather than a top-line impact, but it can actually affect your top line by changing your entire business model. Oracle WebCenter sits at this crossroads between product innovation and process innovation, enabling you to drive go-to-market innovations through internal social media tools, removing obstacles in parallel, and also providing deep insight into your processes so you can identify bottlenecks and realize whole new ways of doing business. Learn more about how at the Enterprise 2.0 Conference, where Oracle will be in booth #213 showing Oracle WebCenter and Oracle Fusion Applications.


  • Oracle University has released “Oracle AIA Foundation Pack 11g: Developing Applications” in the Training on Demand (TOD) format

    - by Lionel Dubreuil
    In this course, you will learn how to quickly develop integrations using Application Integration Architecture (AIA) Foundation Pack 11g that run on Oracle Fusion Middleware. You’ll learn to:

    • Design and create Application Business Connector Services to integrate applications into AIA
    • Create Enterprise Business Services to perform specific business activities
    • Configure Guaranteed Message Delivery to ensure no loss of messages
    • Extend Enterprise Business Objects and Application Business Connector Services to meet corporate requirements

    This course is available now in the Training on Demand format. Training on Demand features:

    • Delivered by top instructors
    • Video of classroom lecture, whiteboarding and labs
    • Hands-on practice environment
    • Ask your instructor
    • Bonus material from product experts

    Why choose On Demand?

    • Start training within 24 hours
    • Get full classroom content online
    • Customize your learning experience
    • Eliminate travel-related expenses
    • Access anytime, anywhere, 24/7

    You'll find more information here.


  • API Message Localization

    - by Jesse Taber
    In my post, “Keep Localizable Strings Close To Your Users”, I talked about the internationalization and localization difficulties that can arise when you sprinkle static localizable strings throughout the different logical layers of an application. The main point of that post is that your localizable strings should reside as close to the user-facing modules of your application as possible. For example, if you’re developing an ASP.NET Web Forms application, all of the localizable strings should be kept in .resx files associated with the .aspx views of the application. In this post I want to talk about how this same concept applies when designing and developing APIs.

    An API Facilitates Machine-to-Machine Interaction

    You can typically think of a web, desktop, or mobile application as a collection of “views” or “screens” through which users interact with the underlying logic and data. The application can be designed on the assumption that there will be a human being on the other end of the screen working the controls. You are designing a machine-to-person interaction, and the application should be built in a way that facilitates the user’s clear understanding of what is going on: dates should be formatted in a way the user is familiar with, messages should be presented in the user’s preferred language, and so on. When building an API, however, there are no screens, and you can’t make assumptions about who or what is on the other end of each call. An API is, by definition, a machine-to-machine interaction, and it should be built in a way that facilitates a clear and unambiguous understanding of what is going on. Dates and numbers should be formatted in predictable and standard ways (e.g. ISO 8601 dates), and messages should be presented in machine-parseable formats.

    For example, consider an API for a time tracking system that exposes a resource for creating a new time entry. The JSON for creating a new time entry for a user might look like:

    {
        "userId": 4532,
        "startDateUtc": "2012-10-22T14:01:54.98432Z",
        "endDateUtc": "2012-10-22T11:34:45.29321Z"
    }

    Note how the parameters for start and end date are both expressed as ISO 8601 compliant dates in UTC. Using a date format like this in our API leaves little room for ambiguity. It’s also a much, much saner choice than the \/Date(<milliseconds since epoch>)\/ nonsense that is sometimes used in JSON serialization. Probably the most important thing to note about the JSON snippet above is the fact that the end date comes before the start date! The API should recognize that and disallow the time entry from being created, returning an error to the caller. You might be inclined to send a response that looks something like this:

    {
        "errors": [ {"message" : "The end date must come after the start date"}]
    }

    While this may seem like an appropriate thing to do, there are a few problems with this approach:

    What if there is a user somewhere on the other end of the API call who doesn’t speak English?
    What if the message provided here won’t fit properly within the UI of the application that made the API call?
    What if the wording of the message isn’t consistent with the rest of the application that made the API call?
    What if there is no user directly on the other end of the API call (e.g. this is a batch job uploading time entries once per night, unattended)?

    The API knows nothing about the context from which the call was made.
    You could take steps to give the API some context (e.g. allow the caller to send along a language code indicating the language the end user speaks), but that will only get you so far. As the designer of the API you could make some assumptions about how it will be called, but if we start making assumptions we can very easily make the wrong ones. In this situation it’s best to make no assumptions and simply design the API in such a way that the caller is responsible for conveying error messages in a manner appropriate to the context in which the error was raised. You could work around some of these problems by allowing callers to add metadata to each request describing that context (e.g. accepting a ‘locale’ parameter denoting the desired language), but that adds needless clutter and complexity. It’s better to keep the API simple and push context-specific concerns down to the caller whenever possible. For our very simple time entry example, this can be done by changing our error response to look like this:

    {
        "errors": [ {"code": 100}]
    }

    By changing our error from a string to a numeric code that is easily parseable by another application, we’ve placed all of the responsibility for conveying the actual meaning of the error on the caller. It’s best for the caller to be responsible for conveying that meaning because the caller understands the context much better than the API does. Now the caller can see error code 100, know that it means the submitted end date falls before the start date, and take appropriate action. All of the problems listed above become non-issues, because the caller can simply translate the error code 100 into the proper action and message for the current context. The numeric code representation of the error is a much better way to facilitate the machine-to-machine interaction that the API is meant to facilitate.

    An API Does Have Human Users

    While APIs should be built for machine-to-machine interaction, people still need to wire these interactions together. As a programmer building a client application that consumes the time entry API, I would find it frustrating to have to dig through the API documentation every time I encounter a new error code (assuming the documentation exists and is accurate). The numeric error code approach hurts the discoverability of the API and makes it painful to integrate with. We can ease this pain by merging our two approaches:

    {
        "errors": [ {"code": 100, "message" : "The end date must come after the start date"}]
    }

    Now we have an easily parseable numeric error code for the machine-to-machine interaction the API is meant to facilitate, and a human-readable message for programmers working with the API. The human-readable message here is not intended to be viewed by end users of the API, and as such is not really a “localizable string” in my opinion. We could opt to expose a locale parameter for all API methods and store translations of all error messages, but that’s a lot of extra effort and overhead that doesn’t add much real value to the API. I might be a bit of an “ugly American”, but I think it’s probably fine to have the API return English messages when the target for those messages is a programmer.
    When resources are limited (which they always are), I’d argue that you’re better off hard-coding these messages in English and putting more effort into building more useful features, improving security, tweaking performance, etc.
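    To illustrate the caller-side translation described above, the consuming application might keep its own lookup from documented API error codes to strings pulled from its .resx files. This is only a sketch, and the Resources entries are hypothetical names generated from the caller's own resource files:

    using System.Collections.Generic;

    public static class ApiErrorTranslator
    {
        // Map each documented API error code to a message from the caller's
        // own resource files, so wording and language fit the caller's UI.
        private static readonly Dictionary<int, string> Messages =
            new Dictionary<int, string>
            {
                { 100, Resources.EndDateBeforeStartDate } // hypothetical .resx entry
            };

        public static string Translate(int errorCode)
        {
            string message;
            return Messages.TryGetValue(errorCode, out message)
                ? message
                : Resources.UnknownApiError; // hypothetical fallback resource
        }
    }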


  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi,

    Unfortunately, sometimes a standby instance needs to be recreated. This can happen for many reasons, such as lost archive logs or standby data files, a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. The script recreates the standby given some prerequisites:

    - Database version should be at least 11gR1
    - A dummy instance started on the standby node (seeking to improve this so it won't be needed)
    - The Broker configuration hasn't been removed
    - In our case we have two TNSNAMES files, one for the standby creation (using the SID) and the other one for production using service names (including the broker service name)
    - Some environment variables set up by the environment DB script (like ORACLE_HOME, PATH...)
    - The directory tree should not have been modified on the standby host

    We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!

    #!/bin/ksh
    ###    NAME / VERSION
    ###       recrea_dg.sh   v.1.00
    ###
    ###    DESCRIPTION
    ###       Recreation of the standby database
    ###
    ###    RETURNS
    ###       0 STANDBY created successfully
    ###       1 Failure
    ###
    ###    NOTES
    ###       This shell script MUST NOT be modified.
    ###       All required variables and constants are taken from the environment.
    ###
    ###    MODIFIED BY:    DATE:        COMMENTS:
    ###    ---------------    ----------    -------------------------------------
    ###      Oracle           15/02/2011    Created.
    ###

    ###
    ### Load environment
    ###
    V_ADMIN_DIR=`dirname $0`
    . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null
    if [ $? -ne 0 ]
    then
      echo "Error loading the environment."
      exit 1
    fi

    V_RET=0
    V_DATE=`/bin/date`
    V_DATE_F=`/bin/date +%Y%m%d_%H%M%S`
    V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log
    exec 4>&1
    tee ${V_FICH_LOG} >&4 |&
    exec 1>&p 2>&1

    ###
    ### Variables for recreating the Data Guard standby
    ###
    V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'`
    if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ]
    then
            V_LOCAL_BR=${V_DB_BR}'01'
            V_REMOTE_BR=${V_DB_BR}'02'
    else
            V_LOCAL_BR=${V_DB_BR}'02'
            V_REMOTE_BR=${V_DB_BR}'01'
    fi

    echo " Getting local instance ROLE ${ORACLE_SID} ..."
    sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
    whenever sqlerror exit 1
    connect / as sysdba
    variable salida number
    declare
      v_database_role v\$database.database_role%type;
    begin
      select database_role into v_database_role from v\$database;
      :salida := case v_database_role
           when 'PRIMARY' then 2
           when 'PHYSICAL STANDBY' then 3
           else 4
         end;
    end;
    /
    exit :salida
    !
    case $? in
    1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE} 2>&1
       V_RET=1 ;;
    2) echo " Local Instance with PRIMARY role." | tee -a ${V_LOGFILE} 2>&1
       V_DB_ROLE_LCL=PRIMARY ;;
    3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE} 2>&1
       V_DB_ROLE_LCL=STANDBY ;;
    *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE} 2>&1
       V_RET=1 ;;
    esac

    if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ]
    then
            echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
            echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            V_PRIMARY=${V_LOCAL_BR}
            V_STANDBY=${V_REMOTE_BR}
    fi
    if [ "${V_DB_ROLE_LCL}" = "STANDBY" ]
    then
            echo "####################################################################" | tee -a ${V_LOGFILE} 2>&1
            echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            V_PRIMARY=${V_REMOTE_BR}
            V_STANDBY=${V_LOCAL_BR}
    fi

    # Load the host variables
    PRY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
    connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba
    select 'KEEP',host_name from v\$instance;
    EOF`
    SBY_HOST=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
    connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
    select 'KEEP',host_name from v\$instance;
    EOF`
    echo "The primary HOST is: ${PRY_HOST}" | tee -a ${V_LOGFILE} 2>&1
    echo "The standby HOST is: ${SBY_HOST}" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1

    ##
    ## Stop the STANDBY instance
    ##
    V_DATE=`/bin/date`
    echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    SBY_STATUS=`sqlplus /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g'
    connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
    select 'KEEP',status from v\$instance;
    EOF`
    if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ]
    then
            echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE} 2>&1
            echo "" | tee -a ${V_LOGFILE} 2>&1
            echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
            sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
            whenever sqlerror exit 1
            connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
            shutdown abort
            !
    fi
    V_DATE=`/bin/date`
    echo ""
    echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1

    ##
    ## Remove the database files
    ##
    V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'`
    V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'`
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl
    ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo

    ##
    ## Start the dummy standby instance in NOMOUNT
    ##
    V_DATE=`/bin/date`
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Starting DUMMY Standby Instance " | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    ssh ${SBY_HOST} touch /home/oracle/init_dg.ora
    ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora'
    ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh
    ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh'
    ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora'

    ##
    ## TNSNAMES change, specific for RMAN duplicate
    ##
    V_DATE=`/bin/date`
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

    V_DATE=`/bin/date`
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    rman <<-! >>${V_LOGFILE}
    connect target sys/${V_DB_PWD}@${V_PRIMARY}
    connect auxiliary sys/${V_DB_PWD}@${V_STANDBY}
    run {
    allocate channel prmy1 type disk;
    allocate channel prmy2 type disk;
    allocate channel prmy3 type disk;
    allocate channel prmy4 type disk;
    allocate auxiliary channel stby type disk;
    duplicate target database for standby from active database
    dorecover
    spfile
    parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}'
    set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl'
    set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
    set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/'
    set 'db_unique_name'='${V_SBY_SID}'
    set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})'
    set fal_client='${V_STANDBY}'
    set fal_server='${V_PRIMARY}'
    set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
    set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
    nofilenamecheck
    ;
    }
    !
    # Capture the RMAN exit status before any other command overwrites $?
    V_RMAN_RET=$?
    V_DATE=`/bin/date`
    if [ ${V_RMAN_RET} -ne 0 ]
    then
            echo ""
            echo "${V_DATE} - Error creating STANDBY instance"
            echo ""
            echo "********************************************************************************"
    else
            echo ""
            echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "
            echo ""
            echo "********************************************************************************"
    fi

    sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!
            whenever sqlerror exit 1
            connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba
            alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;
            alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;
            alter database recover managed standby database using current logfile disconnect from session;
            alter system set dg_broker_start=true scope=both;
    !

    ##
    ## TNSNAMES change, back to production mode
    ##
    V_DATE=`/bin/date`
    echo " " | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Restoring TNSNAMES in PRIMARY " | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora'

    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "${V_DATE} - Waiting for media recovery before checking the DATA GUARD Broker" | tee -a ${V_LOGFILE} 2>&1
    echo "" | tee -a ${V_LOGFILE} 2>&1
    echo "********************************************************************************" | tee -a ${V_LOGFILE} 2>&1
    sleep 200
    dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1
        connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}
        show configuration verbose;
    !
    if [ $? -ne 0 ] ; then
            echo "       ERROR: Broker status is not SUCCESS" | tee -a ${V_LOGFILE} 2>&1
            V_RET=1
    else
            echo "      DATA GUARD OK " | tee -a ${V_LOGFILE} 2>&1
            V_RET=0
    fi

    Hope it helps.


  • FREE Online Azure Workshop includes a **FREE Azure Account**

    - by Jim Duffy
    My friend and all-around good guy, Microsoft Developer Evangelist for the Carolinas Brian Hitney, along with fellow Microsofties Jim O’Neil and John McClelland, will be presenting a FREE Windows Azure online workshop tomorrow, Tuesday, May 4th, from 7pm to 9pm. What? You can’t make it Tuesday evening? Not to worry: this webcast will be repeated a number of times over the next month or so. Taken from Brian’s blog post about it: “Elevate your skills with Windows Azure in this hands-on workshop! In this event we’ll guide you through the process of building and deploying a large scale Azure application. Forget about “hello world”! In less than two hours we’ll build and deploy a real cloud app that leverages the Azure data center and helps make a difference in the world. Yes, in addition to building an application that will leave you with a rock-solid understanding of the Azure platform, the solution you deploy will contribute back to Stanford’s Folding@home distributed computing project. There’s no cost to you to participate in this session; each attendee will receive a temporary, self-expiring, full-access account to work with Azure for a period of 2-weeks.” Did you catch that last sentence?? “each attendee will receive a temporary, self-expiring, full-access account to work with Azure for a period of 2-weeks.” A FREE, full-access, Windows Azure account to experiment and learn with? Now we’re talking. For more information check out Brian’s blog post or head here. Have a day. :-|


  • OneAPI Pilot

    - by Manish Agrawal
    Presentations made at Mobile World Congress (MWC 2010) on the Canadian OneAPI Pilot by Graham Trickey (GSMA) and Shane Logan (Telus). Thanks to Alan for sharing them.


  • Create record in LOV popups

    - by raghu.yadav
    In this post we look at ways to present a create-record option in LOV popups. Referring to the doc http://download.oracle.com/docs/cd/E12839_01/web.1111/b31973/af_lov.htm, here is what it says about the create action: the popup dialog from within an inputListOfValues component, or the optional search popup dialog in the inputComboboxListOfValues component, also provides the ability to create a new record. For the inputListOfValues component, when the createPopupId attribute is set on the component, a toolbar component with a commandToolbarButton is displayed with a create icon. At runtime, a commandToolbarButton component appears in the LOV popup dialog,


  • Extension Methods in .NET 2.0

    - by Tom Hines
    Not that anyone would still need this, but in case you have a situation where the code MUST be .NET 2.0 compliant and you want to use a cool feature like extension methods, there is a way. I saw this article when looking for ways to create extension methods in C++, C# and VB: http://msdn.microsoft.com/en-us/magazine/cc163317.aspx The author shows a simple way to declare/define the ExtensionAttribute so it's available to .NET 2.0 code. Please read the article to learn about the when and why, and use the content below to learn how. In the next post, I'll demonstrate cross-language calling of extension methods.

    Here is a version of it in C#. First, here's the project source, showing there's no voodoo included:

    using System;

    namespace System.Runtime.CompilerServices
    {
        // Defining this attribute ourselves lets the C# 3.0 compiler emit
        // extension methods into an assembly that targets .NET 2.0.
        [
            AttributeUsage(
                AttributeTargets.Assembly
                | AttributeTargets.Class
                | AttributeTargets.Method,
                AllowMultiple = false, Inherited = false)
        ]
        class ExtensionAttribute : Attribute { }
    }

    namespace TestTwoDotExtensions
    {
        public static class Program
        {
            public static void DoThingCS(this string str)
            {
                Console.WriteLine("2.0\t{0:G}\t2.0", str);
            }

            static void Main(string[] args)
            {
                "asdf".DoThingCS();
            }
        }
    }

    Here is the C++ version:

    // TestTwoDotExtensions_CPP.h
    #pragma once

    using namespace System;

    namespace System {
        namespace Runtime {
            namespace CompilerServices {
                [
                    AttributeUsage(
                        AttributeTargets::Assembly
                        | AttributeTargets::Class
                        | AttributeTargets::Method,
                        AllowMultiple = false, Inherited = false)
                ]
                public ref class ExtensionAttribute : Attribute {};
            }
        }
    }

    using namespace System::Runtime::CompilerServices;

    namespace TestTwoDotExtensions_CPP {
        public ref class CTestTwoDotExtensions_CPP
        {
        public:
            [ExtensionAttribute] // or [Extension]
            static void DoThingCPP(String^ str)
            {
                Console::WriteLine("2.0\t{0:G}\t2.0", str);
            }
        };
    }


  • Demonstration VM BIC2g 2013-10 Partner Edition for Download

    - by Mike.Hallett(at)Oracle-BI&EPM
    UPDATED! The “BIC2g” demo VM (now version 2013-10) is downloadable from our BIC2g Beehive Online Workspace portal for OPN member partners. Compared to the prior version, BIC2g 2013-04, the new BIC2g 2013-10 has:

    • OBIEE upgraded from 11.1.1.7.0 to 11.1.1.7.1, with BI Mobile Application Designer (BIMAD) added
    • TimesTen upgraded from 11.2.2.3.0 to 11.2.2.5.0
    • ODI Client 11.1.1.7.0 installed, including a standalone agent and empty repositories, with the BI Applications 11g ODI repositories included (BIAPPS_11g)
    • Informatica and DAC removed from the image
    • Additional demos for BI Apps and for Endeca
    • The compact deployment of EPM installed and configured, including: Hyperion Foundation, Essbase, Essbase Studio, Essbase Administration Services, Provider Services, Calculation Manager, Planning, Reporting and Analysis, Financial Reporting, Web Analysis, and Workspace

    The access details, for OPN member partners only, to get added to the BIC2g Beehive Online Workspace portal are shown from this page @ BI Solutions Engineering Partner Portal. This Oracle Business Intelligence Linux VM virtual appliance (“BIC2g”) was developed to support Oracle OBI, BI Apps and EPM Hyperion sales and Oracle partners in product demonstrations, training activities and POC activities. If you do not need BI Apps, there is a slightly smaller VM, OBI SampleApp, which you can get from OTN: see @ Oracle BI and EPM Demonstration SampleApp V309 on OTN.


  • JavaOne Afterglow by Simon Ritter

    - by JuergenKress
    Last week was the eighteenth JavaOne conference and I thought it would be a good idea to write up my thoughts about how things went. Firstly, thanks to Yoshio Terada for the photos; I didn't bother bringing a camera with me, so it's good to have some pictures to add to the words. Things kicked off full-throttle on Sunday. We had the Java Champions and JUG leaders breakfast, which was a great way to meet up with a lot of familiar faces and start talking all things Java. At midday the show really started with the Strategy and Technical Keynotes. This was always going to be a tougher job than in some years because there was no big shiny ball to reveal to the audience. With the Java EE 7 spec having been finalised a few months ago, and Java SE 8, Java ME 8 and JDK 8 not due until the start of next year, there was not going to be any big announcement. I thought both keynotes worked really well, each focusing on the things most important to Java developers:

    Strategy

    One of the things becoming more and more prominent in many companies' marketing is the Internet of Things (IoT). We've moved from the conventional desktop/laptop environment to much more mobile, connected computing with smart phones and tablets. The next wave of the internet is not just billions of people connected, but tens or hundreds of billions of devices connected to the network, all generating data and providing much more precise control of almost any process you can imagine. This ties into the ideas of Big Data and Cloud Computing, but implementation is certainly not without its challenges. As Peter Utzschneider explained, it's about three Vs: Volume, Velocity and Value. All these devices will create huge volumes of data at very high speed; to avoid being overloaded, the devices will need some sort of processing capability that can filter the useful data from the redundant. The raw data then needs to be turned into useful information that has value. To make this happen will require applications on devices, at gateways and on the back-end servers, all very tightly integrated. This is where Java plays a pivotal role: write once, run everywhere becomes essential, and having nine million developers fluent in the language makes it the de facto lingua franca of IoT. There will be lots more information on how this will become a reality, so watch this space.

    Technical

    How do we make the IoT a reality, technically? Using the game of chess, Mark Reinhold, with the help of people like John Ceccarelli, Jasper Potts and Richard Bair, showed what you could do. Using Java EE on the back end, Java SE and JavaFX on the desktop, and Java ME Embedded and JavaFX on devices, they showed a complete end-to-end demo. This was really impressive, using 3D features from JavaFX 8 (included with JDK 8) to make a 3D animated Duke chess board. Jasper also unveiled the "DukePad", a home-made tablet using a Raspberry Pi, touch screen and accelerometer. Although the Raspberry Pi doesn't have earth-shattering CPU performance (about the same level as a mid-1990s Pentium), it does have really quite good GPU performance, so the GUI works really well. The plans are all open sourced and available here. One small but very significant announcement was that Java SE will now be included with the NOOBS and Raspbian Linux distros provided by the Raspberry Pi Foundation (these can be found here). No more hassle having to download and install the JDK after you've flashed your SD card OS image. The finale was the Raspberry Pi powered chess-playing robot.
    Really very, very cool. I talked to Jasper about this and he told me each of the chess pieces had been 3D printed and then he had to use acetone to give them a glossy finish (not sure what his wife thought of him spending hours in the kitchen in a gas mask!). The way the robot arm worked was very impressive, as it did not have any positioning data (like a potentiometer connected to each motor) but relied purely on carefully calibrated timings to get the arm to the right place. Having done things like this myself in the past, I know how easily a small error gets magnified into a very big mistake. Here are some pictures from the keynote:

    The "DukePad" architecture
    Nice clear perspex case so you can see the innards.
    The very nice 3D chess set. Maya's obviously a great tool.

    Read the full article here.

    WebLogic Partner Community

    For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Mix Forum Wiki

    Technorati Tags: Simon Ritter,Java One,OOW,Oracle OpenWorld,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress


  • MSCRM Tax default to Zero

    - by MarkPearl
    I have been playing around with MSCRM 4 lately, and it has been interesting going. I had a problem getting the tax to calculate correctly: it was defaulting to zero. After scouring the web for a while I eventually found a solution; see the steps below.

    Add the following code to the OnSave and OnLoad events of the quotedetails form:

    with (crmForm.all)
    {
        try
        {
            // Tax rate is hard-coded here at 15.5%.
            var dTax = (baseamount.DataValue - manualdiscountamount.DataValue) * 15.5 / 100;
            tax.DataValue = dTax;
            extendedamount.DataValue = baseamount.DataValue - manualdiscountamount.DataValue + tax.DataValue;
        }
        catch (e)
        {
            alert(e.message);
        }
    } // with

    Don't forget to publish your changes once you are done, and to test.


  • Ameristar Wins with Oracle GoldenGate’s Heterogeneous Real-Time Data Integration

    - by Irem Radzik
    Today we issued a press release about another successful project with Oracle GoldenGate, this time at Ameristar. Ameristar is a casino gaming company that needed a single data integration solution to connect multiple heterogeneous systems to its Teradata data warehouse. The project involves integrating Ameristar's promotional and gaming data from 14 data sources across its 7 casino hotel properties into a central Teradata data warehouse in real time. The source systems include the Aristocrat gaming and MGT promotional management platforms running on Microsoft SQL Server 2000 databases. As you may notice, there was no Oracle Database involved in this project, but Ameristar's IT leadership knew that GoldenGate's strong heterogeneous, real-time data integration capabilities made it the right technology for their data warehousing project. With GoldenGate, Ameristar was able to reduce data latency to the enterprise data warehouse, and its marketing teams use this real-time customer information to improve the overall customer experience. Ameristar customers receive more targeted and timely campaign offers, and the company has more up-to-date visibility into its financial metrics. One other key benefit the company experienced with GoldenGate is lower operational costs. The previous data capture solution Ameristar used was trigger based and required a lot of effort to manage, including dedicated IT staff to maintain it. With GoldenGate, the solution runs seamlessly without needing a fully dedicated staff, giving the IT team at Ameristar more resources for their other IT projects. If you want to learn more about GoldenGate and the latest features for Oracle Database and non-Oracle databases, please watch our on-demand webcast about Oracle GoldenGate 11g Release 2.

    Read the article

  • Managing the supplier relationship: strategies, processes, tools

    - by antonella.buonagurio(at)oracle.com
An interesting meeting on relations between suppliers and the purchasing department took place on 3 March. Cesare Businelli, General Manager for Italy of the European Institute of Purchasing Management, illustrated (in unfortunately less time than the subject deserved) how to manage relationships and collaboration with strategic suppliers to create value, giving numerous examples of success and engaging the audience, made up of purchasing managers from more than 20 companies. Lino Campofiorito, Procurement Solutions Sales Consultant at Oracle, followed with an overview of some of the supporting software solutions. You can find the slides here. The many questions for the speakers at the end of the meeting confirmed the interest in the topic. Oracle Procurement Channel View more presentations from antobng82.

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate, the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma, and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well, but the VARCHAR variable has always felt like an unwanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately, SQL Server 2008 introduced the concept of table-valued parameters, which effectively eliminates the need for the clumsy middle-man VARCHAR parameter. Example: Customer Transaction Summary Report Let’s say we have a report that can summarize the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex, it’s been packed into a stored procedure that accepts three parameters: @startDate – the beginning of the date range for which the report should be run; @endDate – the end of the date range for which the report should be run; @customerIds – the customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter. Defining And Using The Table Type In order to use a table-valued parameter, we first need to tell SQL Server what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this:

CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL)

Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure.
The parameter list for the stored procedure definition might look like:

CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary
    @startDate datetime,
    @endDate datetime,
    @customerIds IntegerListTableType READONLY

Note the ‘READONLY’ keyword following the declaration of the @customerIds parameter. SQL Server requires any table-valued parameter to be marked as ‘READONLY’, and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it’s used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this:

DECLARE @customerIdList IntegerListTableType
INSERT @customerIdList VALUES (1)
INSERT @customerIdList VALUES (2)
INSERT @customerIdList VALUES (3)

EXEC dbo.rpt_CustomerTransactionSummary
    @startDate = '2012-05-01',
    @endDate = '2012-06-01',
    @customerIds = @customerIdList

Note that we can simply declare a variable of type ‘IntegerListTableType’ just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT … INTO or INSERT … SELECT statement if desired. Using The Table-Valued Parameter With ADO .NET Invoking a stored procedure with a table-valued parameter from ADO .NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here’s some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure:

var customerIdsParameter = new SqlParameter();
customerIdsParameter.ParameterName = "@customerIds";
customerIdsParameter.Direction = ParameterDirection.Input;
customerIdsParameter.TypeName = "IntegerListTableType";
customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");

All we’re doing here is new’ing up an instance of SqlParameter, setting the parameter’s name and direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We’re assuming here that we have an IEnumerable<int> variable called ‘selectedCustomerIds’ containing all of the customer Ids for which the report should be run. The ‘ToIntegerListDataTable’ method is an extension method of the IEnumerable<int> type that looks like this:

public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName)
{
    var integerListDataTable = new DataTable();
    integerListDataTable.Columns.Add(columnName, typeof(int));
    foreach (var intValue in intValues)
    {
        var nextRow = integerListDataTable.NewRow();
        nextRow[columnName] = intValue;
        integerListDataTable.Rows.Add(nextRow);
    }

    return integerListDataTable;
}

Since the ‘IntegerListTableType’ has a single int column called ‘Value’, we pass that in for the ‘columnName’ parameter to the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance, adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter. Advanced Functionality Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I’ve found that it covers the majority of situations where I’ve needed to pass a collection of data for use in a query at run-time.
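To make the round trip concrete, here’s a minimal sketch of how the body of the stored procedure might consume the @customerIds parameter. The Customers and Transactions tables and their columns are hypothetical stand-ins, since the original query isn’t shown here:

SELECT c.CustomerId,
       COUNT(t.TransactionId) AS TransactionCount,
       SUM(t.Amount) AS TotalAmount
FROM Customers c
    INNER JOIN @customerIds ids ON ids.Value = c.CustomerId   -- the TVP joins like any table
    INNER JOIN Transactions t ON t.CustomerId = c.CustomerId
WHERE t.TransactionDate BETWEEN @startDate AND @endDate
GROUP BY c.CustomerId

Because the table-valued parameter behaves like a read-only table variable, it can participate in joins just like a permanent table, which is exactly what makes it a clean replacement for the parsed VARCHAR approach.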
I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1,000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A User Defined Type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters. What About Reporting Services? Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post, which I’ll be putting together and posting sometime in the near future.

    Read the article

  • Oracle Database 11.2.0.4 Certified with EBS on Microsoft Windows Server

    - by John Abraham
As a follow-up to a previous announcement, Oracle Database 11g Release 2 (11.2.0.4) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following Microsoft Windows Server operating systems: Release 12.2 (12.2.3 and higher): Microsoft Windows x64 (64-bit) (2008 R2) Release 12.1 (12.1.1 and higher): Microsoft Windows Server (32-bit) (2003, 2008) Microsoft Windows x64 (64-bit) (2003¹, 2008¹, 2008 R2²) Release 12.0 (12.0.4 and higher): Microsoft Windows Server (32-bit) (2003) Microsoft Windows x64 (64-bit) (2003, 2008, 2008 R2)¹ Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher): Microsoft Windows Server (32-bit) (2003, 2008¹) Microsoft Windows x64 (64-bit) (2003, 2008, 2008 R2)¹ Notes: 1: This OS is a 'database tier only' or 'split tier configuration' platform where the application tier must be on a fully certified E-Business Suite platform. 2: This OS is a 'database tier only' platform for Release 11i. For 12.1.1 or higher, it is also supported on the application tier via the migration process outlined in My Oracle Support Document 1188535.1. This announcement for Oracle E-Business Suite 11i and R12 includes: Oracle Database 11gR2 version 11.2.0.4 Oracle Database 11gR2 version 11.2.0.4 Real Application Clusters (RAC) Oracle Database Vault 11gR2 version 11.2.0.4 Transparent Data Encryption (Column Encryption) using Oracle Database 11gR2 version 11.2.0.4 TDE Tablespace Encryption using Oracle Database 11gR2 version 11.2.0.4 Advanced Security Option (ASO)/Advanced Networking Option (ANO) with Oracle Database 11gR2 version 11.2.0.4 Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances Transportable Database and Transportable Tablespaces Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12 Certification data in My Oracle Support (http://support.oracle.com) has been updated with this certification - please review the documents below for all requirements and additional details: Where can I find more information?
MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
MOS Document 1623879.1 - Interoperability Notes - Oracle E-Business Suite Release 12.2 with Oracle Database 11g Release 2 (11.2.0)
MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
MOS Document 946413.1 - Using Oracle Applications with a Split Configuration Database Tier on Oracle Release 11g Release 2
MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
MOS Document 376700.1 - Enabling SSL in Oracle Application Release 12
MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g
MOS Document 1188535.1 - Migrating Oracle E-Business Suite R12 to Microsoft Windows Server 2008 R2
MOS Document 1349240.1 - Database Preparation Guidelines for an Oracle E-Business Suite Release 12.2 Upgrade
MOS Document 1594274.1 - Oracle E-Business Suite Release 12.2: Consolidated List of Patches and Technology Bug Fixes
Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Read the article

  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
Lately we have seen an increase in projects that fail to achieve either user-friendly content integration or satisfactory performance. Our intention is to close any knowledge gap that our previous posts might have left you with, so this post will repeat some recommendations and reference back to older, useful posts. Moreover, this post will help you understand, from the ground up, how to design, architect and implement business-enabled, responsive and performing portals with complex requirements for business-centric information publishing. Design the Information Model The key to successful portal deployments is information modeling; it is essential to understand the use case you are designing for. To that end, here is a set of questions you need to ask yourself or your customer:

Question: Who will own the content, IT or Business? Answer: Business
Question: Who will publish the content, IT or Business? Answer: Business
Question: Will there be multiple publishers? Answer: Yes
Question: Are the publishers computer scientists? Answer: No
Question: How often does the information change: daily, weekly, monthly? Answer: Daily, weekly

If your answers match at least two of the above, we strongly recommend you design your content with the following principles: Divide your pages into logical sections, where each section is marked with its purpose. Assign capabilities to each section: does it contain text, images and formatting, or is it static and populated through other contextual information? Select an editor/design element type for each section:

WYSIWYG - rich text
Plain Text - non-formatted text
Image - image object
Static List - static list of formatted information
Dynamic Data List - assembled information from multiple data files through a CMIS query

The result of such a design map could look like the examples below. Based on the required elements in the design (column 3 from the left), you now simply design a data model in WebCenter Content - Site Studio by creating a Region Definition structure matching your design requirements. For more information on how to create a Region Definition, see the following post: Region Definition Post (see instruction 7 for details). Each Region Definition can now be used to instantiate data files; a data file holds the actual data for each element in the Region Definition. Another way to see this is to compare the Region Definition to an extension of the metadata model in WebCenter Content for each data file item. Design content templates With a solid, dependable information model we can now proceed to template creation and page design. This phase focuses on how to place the content sections from the Region Definition on the page via a Content Presenter template. Remember, by creating Content Presenter templates you leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier since you already have the information model and design wireframes to base the logic on; however, there are still a few considerations to pay attention to: Base the template on ADF and make only necessary exceptions to markup when required. Leverage ADF design components for Tabs, Accordions and other similar components; this way the design in the content-published areas will comply with other design areas based on custom ADF taskflows. There is no performance impact when using metadata or Region Definition based data. All data, regardless of type (metadata or XML data), can be accessed via the Content Presenter - Node.
See below for applied examples of how to access data:

Access a metadata property from the document - #{node.propertyMap['myProp'].value} (myProp in this example can be, for instance, dDocName, dDocTitle, xComments or any other available metadata)
Access element data from the data file XML - #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml} (Region Definition Name is the region definition the current data file instantiates; Element name is the element value you would like to grab from the data file)

I recommend you read the following useful posts on the content template topic: CMIS queries and template creation (see instruction 9 for details); Static List template rendering. For more information on templates: Single Item Content Template, Multi Item Content Template, Expression Language. Internationalization Considerations When integrating content assets via Content Presenter you probably understand by now that the content item/data file is wired to the page. It is also pretty common at this stage for the content item/data file to support only one language, since it is neither practical nor business-friendly to mix languages in one complex structure. You are therefore left with a very common dilemma: build a complete new portal for each locale, which is not a good option! However, with a little information modeling and a clear naming convention this can be addressed. Basically, make sure that all content items/data files are named with a predictable naming convention like "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple, non-complex customizations, you will be able to dynamically switch the actual content item/data file just before rendering. By following the proposed approach you not only enable a simple mechanism for internationalized content, you also preserve the Content Presenter's support for business-accessible, run-time publishing of information on existing and new pages. I recommend you read the following useful post on internationalization topics: Internationalize with Content Presenter.
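As an illustration of that naming convention, here is a minimal sketch (the class and method names are hypothetical, not from the original post) of how a managed bean could resolve the locale-specific data file name just before rendering:

// Hypothetical helper: derive the localized data file name ("Content1_EN",
// "Content1_ES", ...) from a base name and the current JSF view locale.
import javax.faces.context.FacesContext;

public class LocalizedContentResolver {
    public String resolve(String baseName) {
        String lang = FacesContext.getCurrentInstance()
                                  .getViewRoot()
                                  .getLocale()
                                  .getLanguage()
                                  .toUpperCase();
        return baseName + "_" + lang;   // e.g. "Content1" -> "Content1_EN"
    }
}

The resolved name can then be fed to the Content Presenter as the node to display, keeping the page definition itself locale-agnostic.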
Integrate with Review & Approval processes Today the review and approval functionality and configuration is based on WebCenter Content - Criteria Workflows. A Criteria Workflow uses the metadata of the checked-in document to evaluate whether the document falls under any review/approval process. So, for instance, if a Criteria Workflow is configured to catch any document with Version = "2" or higher and Content Type "Instructions", any matching content item version will, on check-in, enter the workflow before being released for general access. A few things to consider when configuring Criteria Workflows: Make sure not to trigger on version one for content items that are data files; if you trigger on version 1 you will not only approve an empty document, you will also have a Content Presenter pointing to a non-existing document, since the document only becomes available after successful completion of the workflow. Approval workflows sometimes require more complex criteria; the recommendation in that case is that the metadata triggering such criteria is populated automatically, which can be achieved through several approaches including Content Profiles. Criteria Workflows are configured and managed in the WebCenter Content Administration Applets, where you can configure one or more workflows. Once you have configured Criteria Workflows, the Content Presenter supports editors with the approval process directly inline in the "Contribution mode" of the portal. In addition to approving/rejecting the task and viewing its details, the Content Presenter natively lets the user view the current and future version of the change he/she is approving. See below for an example: Architectural recommendations

To support review & approval processes, minimize the number of data files per page.
Each CMIS query can consume significant time depending on its complexity, so minimize the number of CMIS queries per page.
Use Content Presenter templates based on ADF; this minimizes the design considerations and optimizes the usage of caching.
Implement the page in as few data files as possible; this simplifies the publishing process, increases performance and simplifies the release process.
A named data file (node), or a list of named nodes, integrated into a page increases performance versus querying for data.
A named data file (node), or a list of named nodes, integrated into a page enables business-centric page creation and publishing and reduces the need for IT department interaction.

Summary Just because one architectural decision solves a business problem doesn't mean it's the right one; when designing portals, all architecture has to be in harmony, with no part impacting another. For instance, the most technically complex solution is not always the best, since it will most likely defeat business accessibility, performance or both. The best approach is to first design for a simplicity that even a non-technical user can operate, then consider the performance impact, and finally look at the technology challenges these bring: work around them first with out-of-the-box features, and after that design and develop functions to complement the shortcomings.

    Read the article

  • UK Partner Briefing – Business Analytics - 24 Sept 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
Monday 24th September 2012 - Oracle City Office, London Register Here for this important, free briefing Oracle Partners are invited to attend this Business Analytics Partner Briefing on 24th September 2012 in Oracle's London Moorgate offices, with particular focus on Exalytics, Endeca and BI Mobile. Who should attend? Oracle Business Analytics is one of our fastest-growing product lines, hence this briefing will be of value to any executives looking for new business opportunities or extending their existing Analytics line of business. Background This half-day event will inform you of Oracle's Business Analytics strategy, how your organisation can gain commercial advantage from reselling and deploying Oracle's BI portfolio, and the tools and resources to support your sales engagements. Agenda 13:45 – Registration, Coffee, and iPad set-up 14:30 – Briefing Commences: Welcome & Introduction to the Business Analytics FY13 Strategy from Mike Pell, VP UK Business Analytics Sales 15:15 – Exalytics: Speed of Thought Analytics 16:00 – Mobile BI & Endeca 16:45 – Event Wrap-up and Q&A 17:00 – Meet the UK BI Sales Team: Networking Please note – if you have an iPad please bring it with you to the session, as we will be helping to set these up with BI Mobile from 13:45 onwards. Click here to register now for this briefing for Oracle Partners. Best regards, Mike Pell, VP UK Analytics Sales; Duncan Fitter, BA Business Development; Mike Thompson, Alliances & Channels

    Read the article

  • Recent ALM Rangers Summary Posts

    - by Enrique Lima
Willy-Peter Schaub has been a machine, producing and posting content, and the content he posts is a real gem of information. He has created some summary posts on the specific topics the ALM Rangers are working on. Here is the list of quick-access TOC posts:
TOC: “Tags” a la acronyms … what do they all mean?
TOC: TFS Integration Tools Blog Posts and Reference Sites
TOC: TFS Iteration Automation Blog Posts and Reference Sites
TOC: Virtual Machine (VM) Factory
TOC: Build Customization Guide Blog Posts and Reference Sites
TOC: Lab Management Guide Blog Posts and Reference Sites

    Read the article

  • New whitepaper, “Why Oracle Sun ZFS Storage Appliance for Oracle Databases?” now available.

    - by Cinzia Mascanzoni
Databases are the backbone of today’s modern business, providing transaction integrity for key business systems such as payment engines, or providing the core of analytical data for decision-making. These diverse use cases require a flexible, high-performance and highly available storage platform. The ZFS Storage Appliance is ideally suited, with an architecture flexible enough to meet the ever-changing availability, capacity and performance requirements of the business. In this just-published white paper the authors provide both business and technical evidence of the suitability of the Oracle ZFS Storage Appliance as primary storage for Oracle Database 11gR2 environments. Click here to download the whitepaper.

    Read the article

  • Persisting settings without using Options dialog in Visual Studio

    - by Utkarsh Shigihalli
Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2013/11/02/persisting-settings-without-using-options-dialog-in-visual-studio.aspx In one of my previous blog posts we saw how to persist settings using Visual Studio's Options dialog. Visual Studio options have many advantages, automatically persisting user options for you. However, during our latest Team Rooms extension development, we decided to let our users change our extension's preferences directly from Team Explorer. The main reason was that we had only one simple option, and we thought it was cumbersome for the user to go to the Tools –> Options dialog to change it. Another reason was that we wanted to highlight this setting to the user as soon as he starts using our extension. So if you are in a scenario where you do not want to use the VS Options window but would still like to persist settings, this post will guide you through it. The Visual Studio SDK provides two ways to persist settings in your extensions. One is using DialogPage, as shown in my previous post. The other is to implement the IProfileManager interface, which I will explain in this post. Please note that the class implementing IProfileManager should be an independent class, because VS instantiates it during Tools –> Import and Export Settings. IProfileManager provides two different sets of methods (four methods in total) to persist the settings: LoadSettingsFromXml and SaveSettingsToXml – implement these methods to persist settings to disk from the VS settings storage; VS will persist your settings along with other options to disk. LoadSettingsFromStorage and SaveSettingsToStorage – implement these methods to persist settings to local storage, usually the registry; VS also calls LoadSettingsFromStorage when it is initializing the package. We are going to use the second set of methods for this example. First, we create a separate class file called UserOptions.cs. Note that we also need to implement IComponent, which can be done by inheriting from Component along with IProfileManager.

[ComVisible(true)]
[Guid("XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX")]
public class UserOptions : Component, IProfileManager
{
    private const string SUBKEY_NAME = "TForVS2013";
    private const string TRAY_NOTIFICATIONS_STRING = "TrayNotifications";
    // Backing field used by LoadSettingsFromXml below.
    private int _isTrayNotificationsEnabled;
    ...
}

Define the property so that it can be set and read from other classes:

public bool TrayNotifications { get; set; }

Implement the members of IProfileManager:

public void LoadSettingsFromStorage()
{
    RegistryKey reg = null;
    try
    {
        using (reg = Package.UserRegistryRoot.OpenSubKey(SUBKEY_NAME))
        {
            if (reg != null)
            {
                // Key already exists, so just read this setting.
                TrayNotifications = Convert.ToBoolean(reg.GetValue(TRAY_NOTIFICATIONS_STRING, true));
            }
        }
    }
    catch (TeamRoomException exception)
    {
        TrayNotifications = true;
        ExceptionReporting.Report(exception);
    }
    finally
    {
        if (reg != null)
        {
            reg.Close();
        }
    }
}

public void LoadSettingsFromXml(IVsSettingsReader reader)
{
    reader.ReadSettingBoolean(TRAY_NOTIFICATIONS_STRING, out _isTrayNotificationsEnabled);
    TrayNotifications = (_isTrayNotificationsEnabled == 1);
}

public void ResetSettings()
{
}

public void SaveSettingsToStorage()
{
    RegistryKey reg = null;
    try
    {
        using (reg = Package.UserRegistryRoot.OpenSubKey(SUBKEY_NAME, true))
        {
            if (reg != null)
            {
                // Key already exists, so just update this setting.
                reg.SetValue(TRAY_NOTIFICATIONS_STRING, TrayNotifications);
            }
            else
            {
                reg = Package.UserRegistryRoot.CreateSubKey(SUBKEY_NAME);
                reg.SetValue(TRAY_NOTIFICATIONS_STRING, TrayNotifications);
            }
        }
    }
    catch (TeamRoomException exception)
    {
        ExceptionReporting.Report(exception);
    }
    finally
    {
        if (reg != null)
        {
            reg.Close();
        }
    }
}

public void SaveSettingsToXml(IVsSettingsWriter writer)
{
    writer.WriteSettingBoolean(TRAY_NOTIFICATIONS_STRING, TrayNotifications ? 1 : 0);
}

Let me elaborate on the method implementations. The Package class provides the UserRegistryRoot property (which is HKCU\Software\Microsoft\VisualStudio\12.0 for VS2013), which can be used to create and read registry keys. So basically, in the methods above, I am checking if the registry key exists already and, if not, I simply create it. Also, in case there is an exception, I return the default values. If the key already exists, I update the value. Note that you need to make sure you close the key before exiting from the method. Very simple, right? Setting a value is simple too; given an instance of UserOptions (for example, one held by the package), we just use the exposed property:

userOptions.TrayNotifications = true;
userOptions.SaveSettingsToStorage();

Reading a setting is as simple as reading the property:

userOptions.LoadSettingsFromStorage();
var trayNotifications = userOptions.TrayNotifications;

Lastly, the most important step: we need to tell the Visual Studio shell that our package exposes options using the UserOptions class. For this we decorate our package class with the ProvideProfile attribute, as below:

[ProvideProfile(typeof(UserOptions), "TForVS2013", "TeamRooms", 110, 110, false, DescriptionResourceID = 401)]
public sealed class TeamRooms : Microsoft.VisualStudio.Shell.Package
{
    ...
}

That's it. If everything is alright, once you run the package you will also see your options appearing in the "Import/Export Settings" window, which allows you to export your options.

    Read the article

  • O’Reilly Deal of the Day 10/June/2014 - AngularJS Directives

    - by TATWORTH
Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/10/orsquoreilly-deal-of-the-day-10june2014---angularjs-directives.aspx Today’s half-price E-Book offer from O’Reilly at http://shop.oreilly.com/product/9781783280339.do is AngularJS Directives. “AngularJS, propelled by Google, is quickly becoming one of the most popular JavaScript MVC frameworks available, working to invert the development paradigm and bring data-driven modularity to the web frontend. Directives serve as the core building blocks in AngularJS and enable you to create reusable models that mold around your data structures and breathe new life into the intersection of HTML and JavaScript.”

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4), the EM development team has added new functionality to help the EM Administrator monitor the health of the EM infrastructure. Taking feedback delivered directly from customers and through customer advisory boards, the team has made some nice enhancements to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor). This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information on display in these redesigned pages and explain how it can help EM Administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we’ll take a look at is the newly designed Repository information page. You can get to this from the main Setup menu, through Manage Cloud Control, then Repository. Once this page loads you’ll see the new layout, which includes three tabs containing more drill-down information. The Repository Tab The first tab, Repository, gives you a series of six panels or regions on screen that display key information the EM Administrator needs to review from time to time to ensure the infrastructure is in good health. Rather than go through every panel, let’s call out a few and let you explore the others later on your own EM site. Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and, more critically, three important elements of information relating to availability and reliability: Is the database in Archivelog mode? Is the database using Flashback? When was the last database backup taken? In this test environment the answers are not too worrying; however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback), and all Production sites should have a backup. In this case the backup information in the control file indicates that no backups have been recorded. The next region of interest on this page shows key information about the Repository configuration, specifically the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle’s minimum recommended values. Should any values in your EM Repository not meet these requirements, they will be flagged in this table with a red X for non-compliance. You can of course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally the run-time values, if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new-look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control, but some important new functionality has been added that customers have requested. First up: restarting Repository jobs.
As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason. Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation; you can now take care of it directly from within the UI. Next, you’ll see that a feature has been added to allow the EM Administrator to customise the run-time for some of the background jobs that run in the Repository. We heard from some customers that ensuring these jobs don’t clash with Production backups, etc. is a key requirement. This new functionality allows you to select the pencil icon to edit the schedule time for these more resource-intensive background jobs and modify the schedule to avoid such clashes. Moving on to the next tab, let’s select the Metrics tab. The Metrics Tab There are some big changes here; this page contains new information regions that help the Administrator understand the direct impact the inbound metric flows are having on the EM Repository. Many customers have provided feedback that they are in the dark about the impact of adding new targets, large numbers of new hosts, or new target types into EM and the load this places on the Repository. This page helps the EM Administrator get to grips with this. Let’s take a quick look at two regions on this page. First up, there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data over the last 30 days, charted as the number of rows loaded against the number of collections for the metric. The size of the bubble indicates the relative volume. A quick glance at this example shows that Host metrics are the largest inbound flow into the repository when measured by number of rows, closely followed by a large number of collections for Oracle WebLogic Server and Application Deployment. Taken together, the Host collections amount to around 0.7 MB of data, while the total information collected for WebLogic Server and Application Deployments is 0.38 MB and 0.37 MB respectively. If you want this breakdown of the volume of data collected, simply hover over the bubble in the chart and you’ll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost - in terms of number of rows, number of collections and storage cost of data - for each metric type. Looking at another panel on this page we can see a different view of this data. This view shows the Top N metrics (the drop-down allows you to select 10, 15 or 20) sorted by volume of data. In the case above we can see that the largest metric collection by volume over the last 30 days is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c. It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation.
Using the information on this page, EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing the most load, if appropriate. The last tab we’ll look at on this page is the Schema tab. The Schema Tab Selecting this tab brings up a window onto the SYSMAN schema with a focus on space usage in the EM Repository. Understanding which tablespaces are growing, and at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users, not least because well-managed storage ensures the continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time. You can see that storage in the EM Repository has been consumed on an upward trend over the last few days. This is normal, as the EM instance used here is brand new, with Agents being added daily to bring targets into monitoring. If your Enterprise Manager configuration has reached a steady state over a period of time, where the number of new inbound targets is relatively small and the metric collection settings are fairly uniform and standardised (using Templates and Template Collections), you’re likely to see a trend of space allocation that plateaus. The table below the trend chart shows the Top 20 Tables/Indexes sorted descending by space consumed. You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker at the top right of this region. The last region to highlight on this page shows information about the purge policies in effect in the EM Repository. This information illustrates to EM Administrators the default purge policies for the different categories of information available in the EM Repository. Of course, the ability to modify these default retention periods has been a long-requested feature, and you can also do this using this screen. As there are interdependencies between some data elements, you can’t modify retention policies on a feature-by-feature basis. Instead, retention policies take categories of information and bundle them together in groups, and policies are modified at the group level. Understanding the impact of this really deserves a blog post all of its own, as modifying these can have a significant impact on both the EM Repository’s storage footprint and its performance. For now, we’re just highlighting the feature’s visibility on these new pages. As a user of EM12c, we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases. We’ll look out for any comments or feedback you have on these pages!

    Read the article

  • Sevensteps and I are joining forces

    - by Dennis Vroegop
As of today, I will be partnering with Sevensteps when it comes to developing great Surface, Windows Phone 7 and Windows 7 Touch applications. Below you’ll find the press release we sent out today. I am looking forward to this partnership and expect great things coming from us both in the future!   Dennis Vroegop, Microsoft MVP, joins Sevensteps partner network 1 March 2011, Seattle / Amersfoort Today Dennis Vroegop and Bart Roozendaal, both Microsoft Most Valuable Professionals for Microsoft Surface, announce that Dennis Vroegop is joining the Sevensteps partner network. Dennis and Bart already worked together very closely through the Microsoft MVP connection, but decided to combine their efforts to make the new Microsoft Surface, and our solutions for it, a success. Dennis will join the other Sevensteps partners in creating state-of-the-art solutions for Microsoft Surface, Windows Phone 7 and Windows 7 Touch. Dennis brings a vast amount of knowledge about these technologies, as well as his network in the Dutch developer community. With Dennis joining the Sevensteps partner network we offer a combination of expertise, capability and insight into these platforms that no other company worldwide can match. This step brings us a whole lot closer to our goal of making Sevensteps the knowledge hub of choice for Microsoft Surface. About Dennis Vroegop Dennis is a Microsoft MVP for Microsoft Surface and chairman of the Dutch dotNed user group. He has a long history of promoting Microsoft Surface in the developer community. Dennis is a regular speaker at local and international conferences and a frequent writer of articles, including but not limited to Microsoft Surface. Dennis has a bachelor’s degree in computer science and has spent all of his professional life writing software for the Microsoft platform. About Sevensteps For more information about Sevensteps and Bart Roozendaal please visit http://www.sevensteps.com Tags: surface,wp7,windows touch

    Read the article

  • Is Microsoft’s Cloud Bet Placed on the Ground?

    - by andrewbrust
Today at the University of Washington, Steve Ballmer gave a speech on Microsoft’s cloud strategy.  Significantly, Azure was only briefly mentioned and was not shown.  Instead, Ballmer spoke about what he called the five “dimensions” of the cloud, and used that as the basis for an almost philosophical discussion.  Ballmer opined on how the cloud should be distinguished from the Internet, as well as what the cloud will and should enable.  Ballmer worked hard to portray the cloud not as a challenger to Windows and PCs (as Google would certainly suggest it is) but really as just the latest peripheral that adds value to PCs and devices. At one point during his speech, Ballmer said “We start with Windows at Microsoft.  It’s the most popular smart device on the planet.  And our design center for the future of Windows is to make it one of those smarter devices that the cloud really wants.”  I’m not sure I agree with Ballmer’s ambition here, but I must admit he’s taken the “software + services” concept and expanded on it in more consumer-friendly fashion. There were demos too.  For example, Blaise Aguera y Arcas reprised his Bing Maps demo from the TED conference held last month.  And Simon Atwell showed how Microsoft has teamed with Sky TV in the UK to turn Xbox into something that looks uncannily like Windows Media Center.  Specifically, an Xbox console app called Sky Player provides full access to Sky’s on-demand programming, but also live TV access to an array of networks carried on its home TV service, complete with an on-screen programming guide.  Windows Phone 7 Series was shown quickly, and Ballmer told us that while Windows Mobile/Phone 6.5 and earlier were designed for voice and legacy functionality, Windows Phone 7 Series is designed for the cloud. Over and over during Ballmer’s talk (and those of his guest demo presenters), the message was clear: Microsoft believes that client (“smart”) devices, and not mere HTML terminals, are the technologies that best deliver on the promise of the cloud.  The message was that PCs running Windows, game consoles and smart phones whose native interfaces are Internet-connected offer the most effective way to utilize cloud capabilities.  Even the Bing Maps demo conveyed this message, because the advanced technology shown in the demo uses Silverlight (and thus the PC’s computing power), and not AJAX (which relies only upon the browser’s native scripting and rendering capabilities) to produce the impressive interface shown to the audience. Microsoft’s new slogan, with respect to the cloud, is “we’re all in.”  Just as a Texas Hold ‘em player bets his entire stash of chips when he goes all in, so too is Microsoft “betting the company” on the cloud.  But it would seem that Microsoft’s bet isn’t on the cloud in a pure sense, and is instead on the power of the cloud to fuel new growth in PCs and other client devices, Microsoft’s traditional comfort zone.  Is that a bet or a hedge?  If the latter, is Microsoft truly all in?  I don’t really know.  I think many people would say this is a sucker’s bet.  But others would say it’s suckers who bet against Microsoft.  No matter what, the burden is on Microsoft to prove this contrarian view of the cloud is a sensible one.  To do that, they’ll need to deliver on cloud-connected device innovation.  And to do that, the whole company will need to feel that victory is crucial.  Time will tell.  And I expect to present progress reports in future posts.

    Read the article

< Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >