Search Results

Search found 21220 results on 849 pages for 'oracle events'.


  • WPF EventRouting to Children

    - by Shaun Bowe
    Is there any way to broadcast a RoutedEvent to all of my children in WPF? For example, let's say I have a Window that has 4 children. Two of them know about the RoutedEvent 'DisplayYourself' and are listening for it. How can I raise this event from the window and have it sent to all children? I looked at RoutingStrategy and Bubble is the wrong direction, Tunnel and Direct don't work because I don't know which children I want to send this to. I just want to broadcast this message and have whoever cares about it handle it. Update: I declared the events in a static class. public static class StaticEventClass { public static readonly RoutedEvent ClickEvent = EventManager.RegisterRoutedEvent("Click", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(StaticEventClass)); public static readonly RoutedEvent DrawEvent = EventManager.RegisterRoutedEvent("Draw", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(StaticEventClass)); } The problem is that when I raise the event from my window, the children never see it. RoutedEventArgs args = new RoutedEventArgs(StaticEventClass.DrawEvent, this); this.RaiseEvent(args); Update again: here is the handler in the child. public ChildClass() { this.AddHandler(StaticEventClass.DrawEvent, new RoutedEventHandler(ChildClass_Draw)); }
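    One workable answer (a sketch, not taken from the thread): since Bubble and Tunnel both route along a single path, you can broadcast downward yourself by walking the visual tree and raising the event on each child; the handlers the children attach with AddHandler then fire. The helper name EventBroadcaster is an assumption for illustration.

    ```csharp
    // Hedged sketch: broadcast a RoutedEvent to every element beneath 'root'.
    using System.Windows;
    using System.Windows.Media;

    public static class EventBroadcaster
    {
        public static void Broadcast(DependencyObject root, RoutedEvent routedEvent)
        {
            int count = VisualTreeHelper.GetChildrenCount(root);
            for (int i = 0; i < count; i++)
            {
                DependencyObject child = VisualTreeHelper.GetChild(root, i);

                // Only UIElements participate in routed events.
                if (child is UIElement element)
                {
                    element.RaiseEvent(new RoutedEventArgs(routedEvent, element));
                }

                // Recurse so grandchildren receive the event as well.
                // Note: with a Bubble event, raising it on a descendant also bubbles
                // back up, so nested elements that all handle it may see it twice.
                Broadcast(child, routedEvent);
            }
        }
    }

    // Usage from the window:
    // EventBroadcaster.Broadcast(this, StaticEventClass.DrawEvent);
    ```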

    Read the article

  • Pass an Event as parameter

    - by dkson
    I have a class which bundles events and registers controls to easily register/unregister them; it's basically used this way: Private Sub MyFocusHandler(ByVal sender As Object, ...) ... End Sub ... Dim b1 = new MyTextBox() Dim b2 = new MyTextBox() .... 'Lot of Controls' Dim cr = new ControlRegistration() cr.RegisterControl(b1) cr.RegisterControl(b2) .... 'Register a lot of controls' cr.RegisterEvent("Focus",New EventHandler(AddressOf MyFocusHandler)) cr.RegisterEvent("Validate",New EventHandler(AddressOf MyValidateHandler)) So I don't have to add the handlers manually for each control. The ControlRegistration class will cycle through the list of registered controls, check if a control has a registered event, and then attach the event handler, something like: ... For Each control in controls Dim ev_info As Reflection.EventInfo = _ control.GetType().GetEvent(event_name) If Not (ev_info Is Nothing) Then ev_info.AddEventHandler ... ... My problem is that I am identifying the event by a string: Public Sub RegisterEvent(ByVal eventName As String, ByVal handler As [Delegate]) ... ... cr.RegisterEvent("Focus",New EventHandler(AddressOf MyFocusHandler)) I don't want to depend on the name of the event, since it is possible that the name changes, and then things will break. Is there a way I can do this like: Public Sub RegisterEvent(ByVal theEvent As ???, ByVal handler As [Delegate]) ... ... cr.RegisterEvent(IMyControl.MyEvent,New EventHandler(AddressOf MyEventHandler)) I hope it is clear what I want to achieve. Anybody any ideas? Thanks

    Read the article

  • Dynamically created controls and the ASP.NET page lifecycle

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine and .NET takes care of managing ViewState for these controls. protected override void OnLoad(EventArgs e) { base.OnLoad(e); RenderDynamicControls(); } private void RenderDynamicControls(){ //1. call service layer to retrieve form definition //2. create and add controls to page container } I have a new requirement in which if a user clicks on a given button (this button is created at design time) the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code: protected void MyButton_Click(object sender, EventArgs e) { RenderDynamicControlsALittleDifferently(); } private void RenderDynamicControlsALittleDifferently(){ //1. clear all controls from the page container added in RenderDynamicControls() //2. call service layer to retrieve form definition //3. create and add controls to page container } My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid. On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that makes that transition easier would be much appreciated. Thanks
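    For what it's worth, a common way to keep this manageable (a sketch, not the poster's code; the flag name, the formContainer field, and the extra parameter on RenderDynamicControls are assumptions for illustration) is to keep the "which layout?" decision in ViewState so the rebuild logic lives in one place:

    ```csharp
    // Hedged sketch: track the rendering mode in ViewState and rebuild from one method.
    protected bool UseAlternateLayout
    {
        get { return (bool?)ViewState["UseAlternateLayout"] ?? false; }
        set { ViewState["UseAlternateLayout"] = value; }
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        // Rebuild on every request so posted values and ViewState can be restored.
        RenderDynamicControls(UseAlternateLayout);
    }

    protected void MyButton_Click(object sender, EventArgs e)
    {
        // Switch modes, discard what OnLoad built, and build once more for this response.
        UseAlternateLayout = true;
        formContainer.Controls.Clear();           // 'formContainer' = the page container
        RenderDynamicControls(UseAlternateLayout);
    }
    ```

    The double build on the postback that changes the layout is hard to avoid with dynamic controls; later postbacks only build once, because OnLoad already knows the mode.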

    Read the article

  • Unit testing that an event is raised in C#, using reflection

    - by Thomas
    I want to test that setting a certain property (or more generally, executing some code) raises a certain event on my object. In that respect my problem is similar to http://stackoverflow.com/questions/248989/unit-testing-that-an-event-is-raised-in-c, but I need a lot of these tests and I hate boilerplate. So I'm looking for a more general solution, using reflection. Ideally, I would like to do something like this: [TestMethod] public void TestWidth() { MyClass myObject = new MyClass(); AssertRaisesEvent(() => { myObject.Width = 42; }, myObject, "WidthChanged"); } For the implementation of the AssertRaisesEvent, I've come this far: private void AssertRaisesEvent(Action action, object obj, string eventName) { EventInfo eventInfo = obj.GetType().GetEvent(eventName); int raisedCount = 0; Action incrementer = () => { ++raisedCount; }; Delegate handler = /* what goes here? */; eventInfo.AddEventHandler(obj, handler); action.Invoke(); eventInfo.RemoveEventHandler(obj, handler); Assert.AreEqual(1, raisedCount); } As you can see, my problem lies in creating a Delegate of the appropriate type for this event. The delegate should do nothing except invoke incrementer. Because of all the syntactic syrup in C#, my notion of how delegates and events really work is a bit hazy. How to do this?
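    One way to fill the "what goes here?" gap (a sketch, assuming the same MSTest class context with System and System.Reflection imported): write the counting handler with the common (object, EventArgs) shape and let Delegate.CreateDelegate re-target it onto the event's actual delegate type. Parameter contravariance makes this bind to EventHandler, EventHandler<T>, and most other void (object, SomeEventArgs) events; delegates with other shapes would still need special handling.

    ```csharp
    // Hedged sketch: counts how often the named event fires while 'action' runs.
    private void AssertRaisesEvent(Action action, object obj, string eventName)
    {
        EventInfo eventInfo = obj.GetType().GetEvent(eventName);
        int raisedCount = 0;

        // The counting logic, expressed with the broadest compatible signature.
        EventHandler counter = (sender, args) => { ++raisedCount; };

        // Re-bind the same target/method to the event's declared delegate type.
        Delegate handler = Delegate.CreateDelegate(
            eventInfo.EventHandlerType, counter.Target, counter.Method);

        eventInfo.AddEventHandler(obj, handler);
        action.Invoke();
        eventInfo.RemoveEventHandler(obj, handler);

        Assert.AreEqual(1, raisedCount);
    }
    ```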

    Read the article

  • Are there pitfalls to using static class/event as an application message bus

    - by Doug Clutter
    I have a static generic class that helps me move events around with very little overhead: public static class MessageBus<T> where T : EventArgs { public static event EventHandler<T> MessageReceived; public static void SendMessage(object sender, T message) { if (MessageReceived != null) MessageReceived(sender, message); } } To create a system-wide message bus, I simply need to define an EventArgs class to pass around any arbitrary bits of information: class MyEventArgs : EventArgs { public string Message { get; set; } } Anywhere I'm interested in this event, I just wire up a handler: MessageBus<MyEventArgs>.MessageReceived += (s,e) => DoSomething(); Likewise, triggering the event is just as easy: MessageBus<MyEventArgs>.SendMessage(this, new MyEventArgs() {Message="hi mom"}); Using MessageBus and a custom EventArgs class lets me have an application wide message sink for a specific type of message. This comes in handy when you have several forms that, for example, display customer information and maybe a couple forms that update that information. None of the forms know about each other and none of them need to be wired to a static "super class". I have a couple questions: fxCop complains about using static methods with generics, but this is exactly what I'm after here. I want there to be exactly one MessageBus for each type of message handled. Using a static with a generic saves me from writing all the code that would maintain the list of MessageBus objects. Are the listening objects being kept "alive" via the MessageReceived event? For instance, perhaps I have this code in a Form.Load event: MessageBus<CustomerChangedEventArgs>.MessageReceived += (s,e) => DoReload(); When the Form is Closed, is the Form being retained in memory because MessageReceived has a reference to its DoReload method? Should I be removing the reference when the form closes: MessageBus<CustomerChangedEventArgs>.MessageReceived -= (s,e) => DoReload();
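    On the second and third questions: yes, the static MessageReceived event does keep each subscribing form reachable for as long as its handler stays attached, and unsubscribing with a brand-new lambda (the -= line above) removes nothing, because it is a different delegate instance. A sketch of the usual fix (illustrative names, not from the post) is to hold the exact delegate in a field and detach it when the form closes:

    ```csharp
    // Hedged sketch: keep the delegate instance so -= actually unhooks it.
    public partial class CustomerForm : Form
    {
        private EventHandler<CustomerChangedEventArgs> _handler;

        private void CustomerForm_Load(object sender, EventArgs e)
        {
            _handler = (s, args) => DoReload();
            MessageBus<CustomerChangedEventArgs>.MessageReceived += _handler;
        }

        private void CustomerForm_FormClosed(object sender, FormClosedEventArgs e)
        {
            // Removing the same instance works; a fresh lambda here would not.
            MessageBus<CustomerChangedEventArgs>.MessageReceived -= _handler;
        }

        private void DoReload() { /* refresh customer data */ }
    }
    ```

    If forgetting to unsubscribe is a real risk, a weak-event variant of MessageBus<T> (storing weak references rather than the delegates themselves) is the heavier but safer alternative.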

    Read the article

  • What's keeping this timer in scope? The anonymous method?

    - by Andy
    OK, so I have a method which fires when someone clicks on our icon in a Silverlight application, seen below: private void Logo_MouseLeftButtonUp(object sender, MouseButtonEventArgs e) { e.Handled = true; ShowInfo(true); DispatcherTimer autoCloseTimer = new DispatcherTimer(); autoCloseTimer.Interval = new TimeSpan(0, 0, 10); autoCloseTimer.Tick += new EventHandler((timerSender,args) => { autoCloseTimer.Stop(); ShowInfo(false); }); autoCloseTimer.Start(); } What's meant to happen is that the method ShowInfo() opens up a box with the company info in it, and the dispatcher timer auto-closes it after said timespan. And this all works... But what I'm not sure about is: because the dispatcher timer is a local var, after the Logo_MouseLeftButtonUp method finishes, what is there to keep the dispatcher timer referenced and not available for GC collection before the anonymous method is fired? Is it the reference to the ShowInfo() method in the anonymous method? It just feels like something I should understand more deeply, as I can imagine that with events etc. it can be very easy to create a leak with something like this. Hope this all makes sense! Andy.
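    Two things keep it reachable: a started DispatcherTimer is referenced by its Dispatcher until it is stopped, and the lambda's closure also captures autoCloseTimer (plus this, via the ShowInfo call). If you would rather make the lifetime explicit than rely on closure capture, a sketch of the field-based variant (an illustrative rewrite, not the original code) looks like this:

    ```csharp
    // Hedged sketch: same behaviour, but with an explicit field and a named handler.
    private DispatcherTimer _autoCloseTimer;

    private void Logo_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
    {
        e.Handled = true;
        ShowInfo(true);

        _autoCloseTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(10) };
        _autoCloseTimer.Tick += AutoCloseTimer_Tick;
        _autoCloseTimer.Start();
    }

    private void AutoCloseTimer_Tick(object sender, EventArgs e)
    {
        _autoCloseTimer.Stop();
        _autoCloseTimer.Tick -= AutoCloseTimer_Tick;   // detach so nothing lingers
        ShowInfo(false);
    }
    ```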

    Read the article

  • Checking whether images loaded after page load

    - by johkar
    Determining whether an image has loaded reliably seems to be one of the great JavaScript mysteries. I have tried various scripts/script libraries which check the onload and onerror events, but I have had mixed and unreliable results. Can I reliably just check the complete property (IE 6-8 and Firefox) as I have done in the script below? I simply have a page which lists out servers, and I link to an on.gif on each server. If it doesn't load I just want to load an off.gif instead. This is just for internal use... I just need it to be reliable in showing the status!!! <script type="text/javascript"> var allimgs = document.getElementsByTagName('img'); function checkImages(){ for (i = 0; i < allimgs.length; i++){ var result = Math.random(); allimgs[i].src = allimgs[i].src + '?' + result; } serverDown(); setInterval('serverDown()',5000); } window.onload=checkImages; function serverDown(){ for (i = 0; i < allimgs.length; i++){ var imgholder=new Image(); imgholder.src=allimgs[i].src; if(!allimgs[i].complete){ allimgs[i].src='off.gif'; } } } </script>

    Read the article

  • Excel automation: Close event missing

    - by chiccodoro
    Another hi all, I am doing Excel automation via Interop in C#, and I want to be informed when a workbook is closed. However, there is no Close event on the workbook nor a Quit event on the application. Has anybody done that before? How can I write a piece of code which reacts to the workbook being closed (which is only executed if the workbook is really closed)? Ideally that should happen after closing the workbook, so I can rely on the file to reflect all changes. Details about what I found so far: There is a BeforeClose() event, but if there are unsaved changes this event is raised before the user being asked whether to save them, so at the moment I can process the event, I don't have the final file and I cannot release the COM objects, both things that I need to have/do. I do not even know whether the workbook will actually be closed, since the user might choose to abort closing. Then there is a BeforeSave() event. So, if the user chooses "Yes" to save unsaved changes, then BeforeSave() is executed after BeforeClose(). However, if the user chooses to "Abort", then hits "file-save", the exact same order of events is executed. Further, if the user chooses "No", the BeforeSave() isn't executed at all. The same holds as long as the user doesn't click any of these options.
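    There is indeed no Close/Quit counterpart on Workbook or Application in the interop event model, so most solutions detect the closure after the fact. The sketch below is one such workaround, not an official API: poll Application.Workbooks and raise your own event once the workbook disappears (by then the file on disk reflects whatever the user chose to save). The class name, the 1-second interval, and the name-based lookup are assumptions for illustration, and cross-thread COM calls into a busy Excel can fail and may need retry handling.

    ```csharp
    // Hedged sketch: raise our own "closed" notification by polling for the workbook.
    using System;
    using System.Linq;
    using System.Timers;
    using Excel = Microsoft.Office.Interop.Excel;

    public class WorkbookWatcher : IDisposable
    {
        private readonly Excel.Application _app;
        private readonly string _workbookName;
        private readonly Timer _timer = new Timer(1000);   // poll once per second

        public event EventHandler WorkbookClosed;

        public WorkbookWatcher(Excel.Application app, string workbookName)
        {
            _app = app;
            _workbookName = workbookName;
            _timer.Elapsed += (s, e) => CheckStillOpen();
            _timer.Start();
        }

        private void CheckStillOpen()
        {
            bool open = _app.Workbooks.Cast<Excel.Workbook>()
                            .Any(wb => wb.Name == _workbookName);
            if (!open)
            {
                _timer.Stop();
                // The workbook really closed; saved changes are on disk by now.
                WorkbookClosed?.Invoke(this, EventArgs.Empty);
            }
        }

        public void Dispose() => _timer.Dispose();
    }
    ```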

    Read the article

  • How do I wait for a C# event to be raised?

    - by Evan Barkley
    I have a Sender class that sends a Message on a IChannel: public class MessageEventArgs : EventArgs { public Message Message { get; private set; } public MessageEventArgs(Message m) { Message = m; } } public interface IChannel { public event EventHandler<MessageEventArgs> MessageReceived; void Send(Message m); } public class Sender { public const int MaxWaitInMs = 5000; private IChannel _c = ...; public Message Send(Message m) { _c.Send(m); // wait for MaxWaitInMs to get an event from _c.MessageReceived // return the message or null if no message was received in response } } When we send messages, the IChannel sometimes gives a response depending on what kind of Message was sent by raising the MessageReceived event. The event arguments contain the message of interest. I want Sender.Send() method to wait for a short time to see if this event is raised. If so, I'll return its MessageEventArgs.Message property. If not, I return a null Message. How can I wait in this way? I'd prefer not to have do the threading legwork with ManualResetEvents and such, so sticking to regular events would be optimal for me.
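    A sketch of one way to get that behaviour without hand-rolled reset events (illustrative only, not the thread's accepted answer; it assumes .NET 4 for TaskCompletionSource): bridge the event to a Task and block on it with the timeout, always detaching the handler again in the finally block.

    ```csharp
    // Hedged sketch: block for up to MaxWaitInMs and hand back the response, or null.
    public Message Send(Message m)
    {
        var tcs = new TaskCompletionSource<Message>();
        EventHandler<MessageEventArgs> handler = (sender, e) => tcs.TrySetResult(e.Message);

        _c.MessageReceived += handler;
        try
        {
            _c.Send(m);
            return tcs.Task.Wait(MaxWaitInMs) ? tcs.Task.Result : null;
        }
        finally
        {
            _c.MessageReceived -= handler;   // always detach, even on timeout
        }
    }
    ```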

    Read the article

  • Disabling repeating keyboard down event in as3

    - by psy-sci
    Now I'm trying to make the keyboard events stop repeating. My idea was to have a true/false condition for when the key is pressed so that it won't repeat if the key is down already. //Mouse Event Over keyCButton.addEventListener(MouseEvent.MOUSE_OVER, function(){gotoAndStop(2)}); //Variable var Qkey:uint = 81; //Key Down Event stage.addEventListener(KeyboardEvent.KEY_DOWN, keydown); var soundplayed = false; function keydown(event:KeyboardEvent){ if (event.keyCode==Qkey) { this.soundplayed=true;} } if (this.soundplayed==false){ gotoAndPlay(3); } else {} //Key Up Event stage.addEventListener(KeyboardEvent.KEY_UP, keyup); function keyup(event:KeyboardEvent){ if (event.keyCode==Qkey) { this.soundplayed=true; gotoAndStop(1); } } Doing this just turns off the key event. I think I need to add a "&& keyDown..." to "if (this.soundplayed==true)", but I don't know how to do it without getting errors. Here is the keyboard player I'm trying to fix: http://soulseekrecords.org/psysci/animation/piano.html

    Read the article

  • how can i check ID of a clicked element js

    - by necker
    How can I check if the ID of a clicked element is, let's say, 'target'? What I am trying to do is show and hide a comment form: show it on clicking in the text field and hide it when the user clicks out of the form. The problem is that if the user clicks the Submit button, the form hides and nothing is sent over. So I'll have to check whether the Submit button's ID matches the clicked element and not hide the form in that case. I am using Ruby on Rails remote_form_for and the onblur and onfocus events now. This is my bigger form that I am showing: <div id="bigArea" style="display:none"> <% remote_form_for @horses do |f|%> <%= f.text_area :description, {:onBlur=>"{$('bigArea').hide();$('smallField').show();}"} %> <%= f.submit "Submit"%> <% end %> </div> and this is the smaller form field that hides every time you click in it: <div id="smallField"> <%= text_field_tag 'sth', "Click to comment", {:onFocus=>"$('bigArea').show();$('smallField').hide();"} %> </div> My question is, how can I stop the form from hiding when a user clicks the Submit button? I suppose I should check which element's ID has been clicked, and if it's the Submit button's ID, not hide the form. Or maybe there is some other way to do all this? I would greatly appreciate any answers!

    Read the article

  • How to prevent the user from leaving the page

    - by JohnathanKong
    Hey everyone, I am currently building a registration site where, if the user leaves, I want to pop up a CSS box asking them whether they are sure. I can accomplish this using confirm boxes, but the client says that they are too ugly. I've tried using unload and beforeunload, but neither can stop the page from being redirected. Using those two events, I return false, so maybe there's a way to cancel other than returning false? Another solution I tried was redirecting them to another page that has my popup, but the problem with that is that if they do want to leave the page, and it wasn't a mistake, they lose the page they were originally trying to go to. If I were a user, that would irritate me. The last solution was a real popup window. The only thing I don't like about that is that the main window will have their destination page while the popup will have my page. In my opinion it looks disjointed. On top of that, I'd be worried about popup blockers.

    Read the article

  • HTML5 Video Element on iPad doesn't fire onclick?

    - by bhups
    I am using the video element in my HTML as follows: <div id="container" style="background:black; overflow:hidden;width:320px;height:240px"> <video style="background:black;display:block" id="vdo" height="240px" width="320px" src="http://mydomain/vid.mp4"></video></div> And in JavaScript I am doing this: var video=document.getElementById('vdo'); var container=document.getElementById('container'); video.addEventListener('click', function(e) { e.preventDefault(); console.log("clicked"); }, false); container.addEventListener('click', function(e) { e.preventDefault(); console.log("clicked"); }, false); On desktop Safari/Chrome everything is working fine. I can see two "clicked" messages in the console. But on the iPad there is nothing. First I tried with iOS version 3.2, then I updated it to the latest one, 4.2.1, without any success. I found a similar question, "HTML5 Video Element on iPad doesn't fire onclick or touchstart events?", where it is suggested not to use the controls attribute on the video tag, and I am not using it.

    Read the article

  • Why Backbone.js isn't binding my event

    - by Saif Bechan
    I have a router like this, as main entry point: window.AppRouter = Backbone.Router.extend({ routes: { '': 'login' }, login: function(){ userLoginView = new UserLoginView(); } }); var appRouter = new AppRouter; Backbone.history.start({pushState: true}); I have a model/collection/view like this: window.User = Backbone.Model.extend({}); window.Users = Backbone.Collection.extend({ model: User }); window.UserLoginView = Backbone.View.extend({ events: { 'click #login-button': 'loginAction' }, initialize: function(){ _.bindAll(this, 'render', 'loginAction'); }, loginAction: function(){ var uid = $("#login-username").val(); var pwd = $("#login-password").val(); var user = new User({uid:uid, pwd:pwd}); } }); And body of my HTML looks like this: <form action="#" method="POST" id="login-form"> <p> <label for="login-username">username</label> <input type="text" id="login-username" autofocus /> </p> <p> <label for="login-password">password</label> <input type="password" id="login-password" /> </p> <a id="login-button" href="#">Inloggen</a> </form> Note: The HTML comes from Node.js using express.js, should I maybe wait for a document ready event somewhere? Edit: I have tried this, create the view when ready, did not solve the problem. $(function(){ userLoginView = new UserLoginView(); });

    Read the article

  • no longer an issue

    - by MrTemp
    I am still new to C# and WPF. This program is a clock with different views, and I would like to use the context menu to change between views, but the error says that there is no definition or extension method for the events. Right now I have the event I'm working on popping up a MessageBox just so I know it has run, but I cannot get it to compile. public partial class MainWindow : NavigationWindow { public MainWindow() { //InitializeComponent(); } public void AnalogMenu_Click(object sender, RoutedEventArgs e) { /*AnalogClock analog = new AnalogClock(); this.NavigationService.Navigate(analog);*/ } public void DigitalMenu_Click(object sender, RoutedEventArgs e) { MessageBox.Show("Digital Clicked"); /*DigitalClock digital = new DigitalClock(); this.NavigationService.Navigate(digital);*/ } public void BinaryMenu_Click(object sender, RoutedEventArgs e) { /*BinaryClock binary = new BinaryClock(); this.NavigationService.Navigate(binary);*/ } } and the XAML, if you want it: <NavigationWindow.ContextMenu> <ContextMenu Name="ClockMenu" > <MenuItem Name="ToAnalog" Header="To Analog" ToolTip="Changes to an analog clock"/> <MenuItem Name="ToDigital" Header="To Digital" ToolTip="Changes to a digital clock" Click="DigitalMenu_Click" /> <MenuItem Name="ToBinary" Header="To Binary" ToolTip="Changes to a binary clock"/> </ContextMenu> </NavigationWindow.ContextMenu>
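    Part of the problem is likely that InitializeComponent() is commented out: without it, the XAML (including the context menu and its Click="DigitalMenu_Click" wiring) is never loaded. A hedged alternative sketch: leave the Click attributes out of the markup and hook the named menu items in code once InitializeComponent has run. It assumes the ToAnalog/ToDigital/ToBinary fields are generated from the Name attributes, which is the normal XAML-compiler behaviour.

    ```csharp
    // Hedged sketch: attach the context-menu handlers in code-behind.
    public MainWindow()
    {
        InitializeComponent();   // must run, or the XAML-defined menu never exists

        ToAnalog.Click  += AnalogMenu_Click;
        ToDigital.Click += DigitalMenu_Click;
        ToBinary.Click  += BinaryMenu_Click;
    }
    ```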

    Read the article

  • Event consumption in WPF

    - by webaloman
    I have a very simple app written in Silverlight for Windows Phone, where I try to use events. In my App.xaml.cs code-behind I have implemented a GeoCoordinateWatcher which registers a gCWatche_PositionChanged method. This works OK; the method is called after the position has changed. What I want to do is fire another event, let's say DBUpdatedEvent, after the DB has been updated in the gCWatche_PositionChanged method. For this I declared in App.xaml.cs public delegate void DBUpdateEventHandler(object sender, EventArgs e); and I have in my App class: public event DBUpdateEventHandler DBUpdated; The event is fired at the end of the gCWatche_PositionChanged method like this: OnDBUpdateEvent(new EventArgs()); and I have also declared: protected virtual void OnDBUpdateEvent(EventArgs e) { if (DBUpdated != null) { DBUpdated(this, e); } } Now I need to consume this event in my other Windows Phone app page, which is a separate PhoneApplicationPage class. So I declared this method in this other Phone Page: public void DBHasBeenUpdated(object sender, EventArgs e) { Debug.WriteLine("DB UPDATE EVENT CATCHED"); } And in the constructor of this page I declared: DBUpdateEventHandler dbEH = new DBUpdateEventHandler(DBHasBeenUpdated); But when I test the application, the event is fired (OnDBUpdateEvent is called, but DBUpdated is null, therefore DBUpdated is never invoked - strange) and the other Phone Page is not catching the event at all... Any suggestions on how to catch that event? Thanks.
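    The missing piece is most likely the subscription itself: constructing a DBUpdateEventHandler does not attach it to anything, which is why DBUpdated stays null in the App class. A sketch of the page side (the MainPage class name and the OnNavigatedFrom cleanup are illustrative choices, not from the question):

    ```csharp
    // Hedged sketch: subscribe to the event on the one App instance that raises it.
    public MainPage()
    {
        InitializeComponent();

        var app = (App)Application.Current;
        app.DBUpdated += DBHasBeenUpdated;   // now App.DBUpdated is no longer null
    }

    protected override void OnNavigatedFrom(System.Windows.Navigation.NavigationEventArgs e)
    {
        base.OnNavigatedFrom(e);
        // Detach again to avoid leaks or double handlers on re-navigation.
        ((App)Application.Current).DBUpdated -= DBHasBeenUpdated;
    }

    public void DBHasBeenUpdated(object sender, EventArgs e)
    {
        Debug.WriteLine("DB UPDATE EVENT CATCHED");
    }
    ```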

    Read the article

  • WPF Event Handler in Another Class

    - by Nathan Tornquist
    I have built a series of event handlers for some custom WPF controls. The event handlers format the text displayed when the user enters or leaves a textbox, based on the type of data contained (phone number, zip code, monetary value, etc.). Right now I have all of the events locally in the C# code directly attached to the XAML. Because I have developed a couple of controls, this means that the logic is repeated a lot, and if I want to change the program-wide functionality I would have to make changes everywhere the event code is located. I am sure there is a way to put all of my event handlers in a single class. Can anyone help point me in the correct direction? I saw this article: Event Handler located in different class than MainWindow But I'm not sure if it directly relates to what I'm doing. I would rather make small changes to the existing logic that I have, as it works, than rewrite everything into commands. I would essentially like to do something like this if possible: LostFocus="ExpandedTextBoxEvents.TextBox_LostFocus" It is easy enough to do something like this: private void TextBoxCurrencyGotFocus(object sender, RoutedEventArgs e) { ExpandedTextBoxEvents.TextBoxCurrencyGotFocus(sender, e); } private void TextBoxCurrencyLostFocus(object sender, RoutedEventArgs e) { ExpandedTextBoxEvents.TextBoxCurrencyLostFocus(sender, e); } But that is less elegant.
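    One approach that keeps the handlers in a single class without rewriting anything as commands (a sketch, not the only option): XAML cannot point LostFocus="..." at a method in another class, but registering WPF class handlers once at startup gives the same "one place" effect. The method bodies below are placeholders; registering against your custom control types instead of TextBox keeps the behaviour opt-in.

    ```csharp
    // Hedged sketch: application-wide focus handlers registered from one static class.
    public static class ExpandedTextBoxEvents
    {
        public static void Register()
        {
            EventManager.RegisterClassHandler(typeof(TextBox),
                UIElement.GotFocusEvent, new RoutedEventHandler(TextBoxGotFocus));
            EventManager.RegisterClassHandler(typeof(TextBox),
                UIElement.LostFocusEvent, new RoutedEventHandler(TextBoxLostFocus));
        }

        private static void TextBoxGotFocus(object sender, RoutedEventArgs e)
        {
            // strip display formatting so the raw value can be edited
        }

        private static void TextBoxLostFocus(object sender, RoutedEventArgs e)
        {
            // re-apply phone/zip/currency formatting based on the control
        }
    }

    // Call once, e.g. in App.OnStartup:
    // ExpandedTextBoxEvents.Register();
    ```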

    Read the article

  • How come the Actionscript 3 ENTER_FRAME event is crazy nuts?

    - by nstory
    So, I've been toying around with Flash, browsing through the documentation, and all that, and noticed that the ENTER_FRAME event seems to defy my expectation of a deterministic universe. Take the following example: (new MovieClip()).addEventListener(Event.ENTER_FRAME, function(ev) {trace("Test");}); Notice this anonymous MovieClip is not added to the display hierarchy, and any reference to it is immediately lost. It will actually print "Test" once a frame until it is garbage collected. How insane is that? The behavior of this is actually determined by when the garbage collector feels like coming around in all its unpredictable insanity! Is there a better way to create intermittent failures? Seriously. My two theories are that either the DisplayObject class stores weak references to all its instances for the purpose of dispatching ENTER_FRAME events, or, and much wilder, the Flash player actually scans the heap each frame looking for ENTER_FRAME listeners to pull on. Can any hardened Actionscript developer clue me in on how this works? (And maybe a why - the - f**k they thought this was a good idea?)

    Read the article

  • How to avoid raising an event to a closed form?

    - by Steve Dignan
    I'm having trouble handling the scenario whereby an event is being raised to a closed form and was hoping to get some help. Scenario (see below code for reference): Form1 opens Form2 Form1 subscribes to an event on Form2 (let's call the event FormAction) Form1 is closed and Form2 remains open Form2 raises the FormAction event In Form1.form2_FormAction, why does this return a reference to Form1 but button1.Parent returns null? Shouldn't they both return the same reference? If we were to omit step 3, both this and button1.Parent return the same reference. Here's the code I'm using... Form1: public partial class Form1 : Form { public Form1 () { InitializeComponent(); } private void button1_Click ( object sender , EventArgs e ) { // Create instance of Form2 and subscribe to the FormAction event var form2 = new Form2(); form2.FormAction += form2_FormAction; form2.Show(); } private void form2_FormAction ( object o ) { // Always returns reference to Form1 var form = this; // If Form1 is open, button1.Parent is equal to form/this // If Form1 is closed, button1.Parent is null var parent = button1.Parent; } } Form2: public partial class Form2 : Form { public Form2 () { InitializeComponent(); } public delegate void FormActionHandler ( object o ); public event FormActionHandler FormAction = delegate { }; private void button1_Click ( object sender , EventArgs e ) { FormAction( "Button clicked." ); } } Ideally, I would like to avoid raising events to closed/disposed forms (which I'm not sure is possible) or find a clean way of handling this in the caller (in this case, Form1). Any help is appreciated.
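    Events have no way of knowing that a subscriber form has been closed, so the usual guard (sketched below with illustrative names, not the poster's final code) is for Form1 to keep the Form2 reference and unsubscribe in OnFormClosed; a still-open Form2 then has no path back into the disposed form. The this vs. button1.Parent difference is a symptom of the same thing: the closed form has been disposed, yet the event can still reach it.

    ```csharp
    // Hedged sketch: detach the handler when Form1 closes so Form2 can't call back.
    public partial class Form1 : Form
    {
        private Form2 _form2;

        private void button1_Click(object sender, EventArgs e)
        {
            _form2 = new Form2();
            _form2.FormAction += form2_FormAction;
            _form2.Show();
        }

        protected override void OnFormClosed(FormClosedEventArgs e)
        {
            base.OnFormClosed(e);
            if (_form2 != null)
                _form2.FormAction -= form2_FormAction;   // no more events after close
        }

        private void form2_FormAction(object o)
        {
            // runs only while Form1 is alive and still subscribed
        }
    }
    ```

    Unsubscribing with the same method group works here; it would not if the handler had been attached as an anonymous lambda.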

    Read the article

  • Generic Event Generator and Handler from User Supplied Types?

    - by JaredBroad
    I'm trying to allow the user to supply custom data and manage the data with custom types. The user's algorithm will get time synchronized events pushed into the event handlers they define. I'm not sure if this is possible but here's the "proof of concept" code I'd like to build. It doesn't detect T in the for loop: "The type or namespace name 'T' could not be found" class Program { static void Main(string[] args) { Algorithm algo = new Algorithm(); Dictionary<Type, string[]> userDataSources = new Dictionary<Type, string[]>(); // "User" adding custom type and data source for algorithm to consume userDataSources.Add(typeof(Weather), new string[] { "temperature data1", "temperature data2" }); for (int i = 0; i < 2; i++) { foreach (Type T in userDataSources.Keys) { string line = userDataSources[typeof(T)][i]; //Iterate over CSV data.. var userObj = new T(line); algo.OnData < typeof(T) > (userObj); } } } //User's algorithm pattern. interface IAlgorithm<TData> where TData : class { void OnData<TData>(TData data); } //User's algorithm. class Algorithm : IAlgorithm<Weather> { //Handle Custom User Data public void OnData<Weather>(Weather data) { Console.WriteLine(data.date.ToString()); Console.ReadKey(); } } //Example "user" custom type. public class Weather { public DateTime date = new DateTime(); public double temperature = 0; public Weather(string line) { Console.WriteLine("Initializing weather object with: " + line); date = DateTime.Now; temperature = -1; } } }
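    The compile error comes from treating the loop variable as a generic parameter: T in that loop is a runtime System.Type, so typeof(T), new T(line) and OnData<typeof(T)> cannot work. A sketch of one way around it (illustrative, requires .NET 4's dynamic, and assumes the algorithm exposes an OnData the runtime binder can resolve for each registered type): build the instances with Activator.CreateInstance and late-bind the call.

    ```csharp
    // Hedged sketch: replace the inner loop with runtime construction + late binding.
    for (int i = 0; i < 2; i++)
    {
        foreach (Type dataType in userDataSources.Keys)
        {
            string line = userDataSources[dataType][i];

            // Invoke the user type's (string) constructor at runtime.
            object userObj = Activator.CreateInstance(dataType, line);

            // Late-bound call: the DLR resolves OnData against the runtime type of userObj.
            ((dynamic)algo).OnData((dynamic)userObj);
        }
    }
    ```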

    Read the article

  • Upgrading Fusion Middleware 11.1.1.x to 11.1.1.4

    - by James Taylor
    This is a follow on from my previous post where we upgraded 11.1.1.2 to 11.1.1.3. The instructions I provide here will work for Fusion Middleware 11.1.1.2 and 11.1.1.3 wanting to upgrade to 11.1.1.4. In this example I’m just upgrading SOA Suite on OEL 64bit but the steps will be the same, some of the downloads may be different based on your environment. To upgrade to 11.1.1.4 you need to have access to http://support.oracle.com as this is where the downloads reside. Oracle provides 11.1.1.4 as a standalone download so you can do a fresh install if required using OTN downloads (http://www.oracle.com/technetwork/indexes/downloads/index.html). The high level steps to upgrade are as follows: Download software Shutdown you SOA Environment Upgrade WLS to 11.1.1.4 Upgrade SOA Suite to 11.1.1.4 Upgrade OSB to 11.1.1.4 Upgrade MSD Schemas Identify the downloads you require for your install. You will need the WebLogic Server Upgrade and the additional product downloads. If you are using 64bit then use the generic version. The downloads are found from the following location - http://download.oracle.com/docs/html/E18749_01/download_readme.htm#BABDDIIC For the purpose of this post I downloaded the following patches 11060985 – WLS Server Generic 11060960 – SOA Suite 11061005 – OSB Suite You must also download the 11.1.1.4 RCU tool to upgrade the DB schemas. It is available via OTN, or, Oracle Support, I have provided the link from Oracle Support.  11060956 – RCU Make sure you have set the Java executable in your PATH e.g. export PATH=$JAVA_HOME/bin:$PATH  Make sure all your WebLogic environment has been shut down before performing the upgrade. Extract the WLS patch 11060985 to a temporary directory and start the installer java –jar wls1034_upgrade_generic.jar Please note if you are not running 64BIT then the upgrade executable will be just a bin file which you can execute directly. Chose the right Oracle home for your WebLogic Server install. In the Register for Security Updates you can enter your details or just click Next. If you do not enter details confirm that you don’t want to receive these updates Select the products you want to upgrade and select next. It is recommended that you accept the defaults. Confirm the directories that will be upgraded Upgrade of WLS ahs been completed   Extract your both SOA downloads to a temporary directory and run the installer found in Disk1 ./runInstaller -jreLoc /java/jdk1.6.0_20/jre Please note that the java location and version may be different for your environment Skip the Software Updates Ensure your system meets the prerequisites Set the Oracle home for your SOA install. You will be asked to confirm that you want to upgrade, click Yes Choose your application server. Since you are upgrading from 11.1.1.x you will be on WebLogic Start the Install Installation Upgrade of SOA Suite completed accept the default to finish.   In my environment I have OSB installed so I need to upgrade this next. If you don’t have SOA Suite you can go straight to completing the DB Schema updates at Step 24.  Extract the OSB upgrade files to a temporary directory and execute the installer found in the Disk1 folder. 
./runInstaller -jreLoc /java/jdk1.6.0_20/jre Skip the software updates Select the Oracle home for your environment Accept the warning to continue the upgrade Point to the location of your WebLogic Server installation Install the OSB upgrade Upgrade has been completed accept the defaults Change directory to $MW_HOME/oracle_common/bin where the Patch Set Assistant is installed Execute the following command to update the MDS schema. Please not for my examples I have the context set to DEV. your may be different. This means that all my schemas are prefixed by DEV. ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_MDS You will be asked you passwords for sys and the schema Enter the database administrator password for "sys": Enter the schema password for schema user "DEV_MDS": Change directory to $MW_HOME/Oracle_SOA1/bin to where the Patch Set Assistant is installed for SOA Suite. Execute the following command to update the SOA and BAM schemas ./psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA   To check that you have the installed correctly run the following SQL as sysdba. SELECT owner, version, status FROM schema_version_registry; OWNER                          VERSION                        STATUS ------------------------------ ------------------------------ ----------- DEV_MDS                        11.1.1.4.0                     VALID DEV_SOAINFRA                   11.1.1.4.0                     VALID Don’t stress if the versions are not all sitting at version 11.1.1.4 as not all schemas need to be updated. The key ones are MDS and SOAINFRA

    Read the article

  • Windows in StreamInsight: Hopping vs. Snapshot

    - by Roman Schindlauer
    Three weeks ago, we explained the basic concept of windows in StreamInsight: defining sets of events that serve as arguments for set-based operations, like aggregations. Today, we want to discuss the so-called Hopping Windows and compare them with Snapshot Windows. We will compare these two, because they can serve similar purposes with different behaviors; we will discuss the remaining window type, Count Windows, another time. Hopping (and its syntactic-sugar-sister Tumbling) windows are probably the most straightforward windowing concept in StreamInsight. A hopping window is defined by its length, and the offset from one window to the next. They are aligned with some absolute point on the timeline (which can also be given as a parameter to the window) and create sets of events. The diagram below shows an example of a hopping window with length of 1h and hop size (the offset) of 15 minutes, hence creating overlapping windows:   Two aspects in this diagram are important: Since this window is overlapping, an event can fall into more than one windows. If an (interval) event spans a window boundary, its lifetime will be clipped to the window, before it is passed to the set-based operation. That’s the default and currently only available window input policy. (This should only concern you if you are using a time-sensitive user-defined aggregate or operator.) The set-based operation will be applied to each of these sets, yielding a result. This result is: A single scalar value in case of built-in or user-defined aggregates. A subset of the input payloads, in case of the TopK operator. Arbitrary events, when using a user-defined operator. The timestamps of the result are almost always the ones of the windows. Only the user-defined  operator can create new events with timestamps. (However, even these event lifetimes are subject to the window’s output policy, which is currently always to clip to the window end.) Let’s assume we were calculating the sum over some payload field: var result = from window in source.HoppingWindow( TimeSpan.FromHours(1), TimeSpan.FromMinutes(15), HoppingWindowOutputPolicy.ClipToWindowEnd) select new { avg = window.Avg(e => e.Value) }; Now each window is reflected by one result event:   As you can see, the window definition defines the output frequency. No matter how many or few events we got from the input, this hopping window will produce one result every 15 minutes – except for those windows that do not contain any events at all, because StreamInsight window operations are empty-preserving (more about that another time). The “forced” output for every window can become a performance issue if you have a real-time query with many events in a wide group & apply – let me explain: imagine you have a lot of events that you group by and then aggregate within each group – classical streaming pattern. The hopping window produces a result in each group at exactly the same point in time for all groups, since the window boundaries are aligned with the timeline, not with the event timestamps. This means that the query output will become very bursty, delivering the results of all the groups at the same point in time. This becomes especially obvious if the events are long-lasting, spanning multiple windows each, so that the produced result events do not change their value very often. In such a case, a snapshot window can remedy. Snapshot windows are more difficult to explain than hopping windows: they represent those periods in time, when no event changes occur. 
In other words, if you mark all event start and and times on your timeline, then you are looking at all snapshot window boundaries:   If your events are never overlapping, the snapshot window will not make much sense. It is commonly used together with timestamp modification, which make it a very powerful tool. Or as Allan Mitchell expressed in in a recent tweet: “I used to look at SnapshotWindow() with disdain. Now she is my mistress, the one I turn to in times of trouble and need”. Let’s look at a simple example: I want to compute the average of some value in my events over the last minute. I don’t want this output be produced at fixed intervals, but at soon as it changes (that’s the true event-driven spirit!). The snapshot window will include all currently active event at each point in time, hence we need to extend our original events’ lifetimes into the future: Applying the Snapshot window on these events, it will appear to be “looking back into the past”: If you look at the result produced in this diagram, you can easily prove that, at each point in time, the current event value represents the average of all original input event within the last minute. Here is the LINQ representation of that query, applying the lifetime extension before the snapshot window: var result = from window in source .AlterEventDuration(e => TimeSpan.FromMinutes(1)) .SnapshotWindow(SnapshotWindowOutputPolicy.Clip) select new { avg = window.Avg(e => e.Value) }; With more complex modifications of the event lifetimes you can achieve many more query patterns. For instance “running totals” by keeping the event start times, but snapping their end times to some fixed time, like the end of the day. Each snapshot then “sees” all events that have happened in the respective time period so far. Regards, The StreamInsight Team

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications, however the use cases above are the reverse i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise with with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are are security i.e. shielding enterprise resources; and scalability i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safe guard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns. Pattern: Pull from Cloud The on-premise system polls from the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in, can be centralized and batched e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/Fedex store to pick up your parcel. Every time. Pros: On-premise assets not exposed to the Internet, firewall issues avoided by only initiating outbound connections Cons: Polling mechanisms may affect performance, may not satisfy near real-time requirements Pattern: Open Firewall Ports The on-premise system exposes the web services that needs to be invoked by the cloud application. This requires opening up firewall ports, routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A door mail slot cut out allows the postman can drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs, simpler administration once the service is provisioned Cons: Needs firewall ports to be opened up for new services, may not suffice for batch integration requiring direct database access Pattern: Virtual Private Networking The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys with a neighbor or property manager who receives the packages, and then drops it inside your home. 
Pros: Individual firewall ports don't need to be opened, more suited for high scalability needs, can support large volume data integration, easier management of one connection vs a multitude of open ports Cons: VPN setup, specific hardware support, requires cloud provider to support virtual private computing Pattern: Reverse Proxy / API Gateway The on-premise system uses a reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom built implementations are also possible if specific functionality (such as message store-n-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox in your home premises outside your door. The post office delivers the parcels in your mailbox, from where you can securely retrieve it. Pros: Very secure, very flexible Cons: Introduces a new software component, needs DMZ deployment and management Pattern: On-Premise Agent (Tunneling) A light weight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection either with pull or push based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH Tunneling etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, however you still take precautions with chain locks and package inspections. Pros: Light weight software, IT doesn't need to setup anything Cons: May bypass critical firewall checks e.g. virus scans, separate software download, proliferation of non-IT managed software Conclusion The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases, the basic "Pull from Cloud" may be acceptable, whereas in others, an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.

    Read the article

  • Is software support an option for your career?

    - by Maria Sandu
    If you have a technical background, why should you choose a career in support? We have invited Serban to answer these questions and to give us an overview of one of the biggest technical teams in Oracle Romania. He’s been with Oracle for 7 years leading the local PeopleSoft Financials & Supply Chain Support team. Back in 2013 Serban started building a new support team in Romania – Fusion HCM. His current focus is building a strong support team for Fusion HCM, the latest solution for Business HR Professionals from Oracle. The solution is offered on premise (customer-site installation) but, more importantly, as a Cloud offering – SaaS. So, why should a technical person choose Software Support over other technical areas? “I think it is mainly because of the high level of technical skills required to provide the best technical solutions to our customers. Oracle Software Support covers complex solutions going from Database or Middleware to a vast area of business applications (basically covering any needs that a large enterprise may have). Working with such software requires very strong skills, both technical and functional, for the different areas, going from Finance, Supply Chain Management, Manufacturing and Sales to other very specific business processes. Our customers are large enterprises that already have a support layer inside their organization, and therefore the Oracle Technical Support Engineers are working with highly specialized staff (DBAs, System/Application Admins, Implementation Consultants). This is a very important aspect for our engineers because they need to be highly skilled to match our customers’ specialists’ expectations”. What’s the career path in your team? “Technical Analysts joining our teams have a clear growth path. The main focus is to become a master of the product they will support. I think one needs 1 or 2 years to reach a good level of understanding the product and delivering optimal solutions, because of the complexity of our products. At a later stage, engineers can choose their professional development areas based on the business needs and preferences and then further grow towards a technical expert or a management role. We have analysts that have more than 15 years of technical expertise and they still learn and grow in the technical area. An important fact is that, due to the expansion of the Romanian Software Support center, there are various management opportunities. So, if you want to leverage your experience and if you want to have people management responsibilities, Oracle Software Support is the place to be!” Our last question to Serban was about the benefits of being part of Oracle Software Support. Here is what he said: “We believe that Oracle delivers a “State of the art” support level to our customers. This is not possible without high investment in our staff. We commit from the start to support any technical analyst that joins us (be they junior or very senior) with any training needs they have for their job. We have various technical trainings as well as soft-skills trainings required for a customer-facing professional to be successful in their role. Last but not least, we’re aiming to make Oracle Romania SW Support a global center of excellence, which means we’re investing a lot in our employees.” If you’re looking for a job where you can combine your strong technical skills with customer interaction, Oracle Software Support is the place to be! Send us your CV at [email protected].

    Read the article

  • Remote Scripted Installation of Sun/Oracle JRE

    - by chrisbunney
    I'm attempting to automate the installation of a Debian server (debian 6.0 squeeze 64bit). Part of the installation requires the Sun JRE package to be installed. This package has a licence agreement, which has to be accepted. I have a script which uses the following lines to accept and install the JRE: echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections apt-get install -y sun-java6-jre This works fine when executing the script locally. However, I need to execute the script remotely using the ssh command, e.g.: ssh -i keyFile root@hostname './myScript' This doesn't work. In particular, it fails on apt-get install -y sun-java6-jre. It would seem that in spite of me setting the licence agreement to accepted, when run remotely in this manner it is ignored. Despite setting the value to true, I still get prompted to manually accept the agreement when I run this command: ssh -i keyFile root@hostname 'apt-get install -y sun-java6-jre' I suspect it is something to do with environment that is taken care of when running a proper terminal session, but have no idea what to try next to fix it. So, what do I have to do to get this command (and hence my deployment script) to run correctly when executing it remotely? Or is there an alternative way that allows me to install the JRE remotely by another means? Edit 0: I have compared the output of env when executed remotely via ssh and when executed via a local terminal session. The only difference between the outputs is that the local terminal session has the additional value TERM=xterm.

    Read the article
