Search Results

Search found 22879 results on 916 pages for 'case studies'.

  • Can't call function from within onOptionsItemSelected

    - by Kristy Welsh
        public boolean onOptionsItemSelected(MenuItem item) {
            // check selected menu item
            switch (item.getItemId()) {
                case R.id.exit:
                    this.finish();
                    return true;
                case R.id.basic:
                    Difficulty = DIFFICULTY_BASIC;
                    Toast.makeText(YogaPosesActivity.this, "Difficulty is Basic", Toast.LENGTH_SHORT).show();
                    SetImageView(myDbHelper);
                    return true;
                case R.id.advanced:
                    Toast.makeText(YogaPosesActivity.this, "Difficulty is Advanced", Toast.LENGTH_SHORT).show();
                    Difficulty = DIFFICULTY_ADVANCED;
                    SetImageView(myDbHelper);
                    return true;
                case R.id.allPoses:
                    Toast.makeText(YogaPosesActivity.this, "All Poses Will Be Displayed", Toast.LENGTH_SHORT).show();
                    Difficulty = DIFFICULTY_ADVANCED_AND_BASIC;
                    SetImageView(myDbHelper);
                    return true;
                default:
                    return super.onOptionsItemSelected(item);
            }
        }

    I get a NullPointerException when I call the SetImageView function, which is defined outside of onCreate. Can you not call a function unless it was defined inside onCreate?
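    For what it's worth, an Android activity can call any of its member methods from onOptionsItemSelected; the NullPointerException far more likely means that something the method touches (typically the myDbHelper field) was never initialized before the menu handler ran. A minimal sketch of the usual wiring, assuming a hypothetical DbHelper type standing in for whatever the post's myDbHelper actually is:

        import android.app.Activity;
        import android.os.Bundle;

        public class YogaPosesActivity extends Activity {

            private DbHelper myDbHelper; // hypothetical helper type from the post

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main); // layout name is a placeholder
                // If this initialization is missing, any later call such as
                // SetImageView(myDbHelper) hands a null reference around.
                myDbHelper = new DbHelper(this);
            }

            // A plain member method; being defined outside onCreate is not a problem.
            private void SetImageView(DbHelper helper) {
                // ... load the pose image for the current Difficulty ...
            }
        }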

    Read the article

  • Best practices for (over)using Azure queues

    - by John
    Hi, I'm in the early phases of designing an Azure-based application. One of the things that attracts me to Azure is the scalability, given the variability of the demand I'm likely to expect. As such I'm trying to keep things loosely coupled so I can add instances when I need to.

    The recommendations I've seen for architecting an application for Azure include keeping web role logic to a minimum and having processing done in worker roles, using queues to communicate and some sort of back-end store like SQL Azure or Azure Tables. This seems like a good idea to me, as I can scale up either or both parts of the application without any issue.

    However, I'm curious if there are any best practices (or if anyone has any experiences) for when it's best to just have the web role talk directly to the data store vs. sending data by the queue? I'm thinking of the case where I have a simple insert to do from the web role - while I could set this up as a message, send it on the queue, and have a worker role pick it up and do the insert, it seems like a lot of double-handling. However, I also appreciate that it may be the case that this is better in the long run, in case the web role gets overwhelmed or more complex logic ends up being required for the insert.

    I realise this might be a case where the answer is "it depends entirely on the situation, check your perf metrics" - but if anyone has any thoughts I'd be very appreciative! Thanks, John
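    The trade-off being asked about (direct write versus queue-plus-worker) is easy to see in miniature. Below is a hedged, language-agnostic sketch in Java, with an in-process BlockingQueue standing in for an Azure queue and a stub standing in for the data store; it illustrates the pattern, and is not Azure SDK code:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class QueueDecouplingDemo {
            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<String> queue = new LinkedBlockingQueue<>();

                // "Worker role": drains the queue and performs the insert,
                // independently of how fast the front end produces work.
                Thread worker = new Thread(() -> {
                    try {
                        while (true) {
                            String record = queue.take();
                            insert(record); // slow or complex logic lives here
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                worker.setDaemon(true);
                worker.start();

                // "Web role": for a trivial insert it could call insert() directly;
                // enqueueing instead buys buffering and independent scaling at the
                // cost of double-handling every message.
                queue.put("new order #1");
                Thread.sleep(100); // let the worker drain before the demo exits
            }

            static void insert(String record) {
                System.out.println("inserted: " + record);
            }
        }

    The queue buys buffering and independent scaling of the consumer at the cost of handling every message twice, which is exactly the "it depends" the poster anticipates.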

    Read the article

  • row number over text column sort

    - by Marty Trenouth
    I'm having problems with dynamic sorting using ROW_NUMBER in SQL Server. I have it working, but it throws errors on non-numeric fields. What do I need to change to get sorts on alpha columns working?

        ID  Description
        5   Test
        6   Desert
        3   A evil

    I've got a SQL procedure:

        CREATE PROCEDURE [CRUDS].[MyTable_Search]
            -- Add the parameters for the stored procedure here
            -- Full Parameter List
            @ID int = NULL,
            @Description nvarchar(256) = NULL,
            @StartIndex int = 0,
            @Count int = null,
            @Order varchar(128) = 'ID asc'
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            Select * from
            (
                Select ROW_NUMBER() OVER
                (
                    Order By
                        case when @Order = 'ID asc' then [TableName].ID
                             when @Order = 'Description asc' then [TableName].Description
                        end asc,
                        case when @Order = 'ID desc' then [TableName].ID
                             when @Order = 'Description desc' then [TableName].Description
                        end desc
                ) as row, [TableName].*
                from [TableName]
                where (@ID IS NULL OR [TableName].ID = @ID)
                  AND (@Description IS NULL OR [TableName].Description = @Description)
            ) as a
            where row > @StartIndex
              and (@Count is null or row <= @StartIndex + @Count)
            order by
                case when @Order = 'ID asc' then a.ID
                     when @Order = 'Description asc' then a.Description
                end asc,
                case when @Order = 'ID desc' then a.ID
                     when @Order = 'Description desc' then a.Description
                end desc
        END

    Read the article

  • Could this be considered a well-written PHP5 class?

    - by Ben Dauphinee
    I have been learning OOP principles on my own for a while, and have taken a few cracks at writing classes. What I really need to know now is if I am actually using what I have learned correctly, or if I could improve as far as OOP is concerned. I have chopped a massive portion of code out of a class that I have been working on for a while now, and pasted it here. To all you skilled and knowledgeable programmers here I ask: Am I doing it wrong?

        class acl extends genericAPI{

            // -- Copied from genericAPI class
            protected final function sanityCheck($what, $check, $vars){
                switch($check){
                    case 'set':
                        if(isset($vars[$what])){ return(1); }else{ return(0); }
                        break;
                }
            }
            // ---------------------------------

            protected $db = null;
            protected $dataQuery = null;

            public function __construct(Zend_Db_Adapter_Abstract $db, $config = array()){
                $this->db = $db;
                if(!empty($config)){ $this->config = $config; }
            }

            protected function _buildQuery($selectType = null, $vars = array()){
                // Removed switches for simplicity's sake
                $this->dataQuery = $this->db->select(
                )->from(
                    $this->config['table_users'],
                    array('tf' => '(CASE WHEN count(*) > 0 THEN 1 ELSE 0 END)')
                )->where(
                    $this->config['uidcol'] . ' = ?', $vars['uid']
                );
            }

            protected function _sanityRun_acl($sanitycheck, &$vars){
                switch($sanitycheck){
                    case 'uid_set':
                        if(!$this->sanityCheck('uid', 'set', $vars)){
                            throw new Exception(ERR_ACL_NOUID);
                        }
                        $vars['uid'] = settype($vars['uid'], 'integer');
                        break;
                }
            }

            private function user($action = null, $vars = array()){
                switch($action){
                    case 'exists':
                        $this->_sanityRun_acl('uid_set', $vars);
                        $this->_buildQuery('user_exists_idcheck', $vars);
                        return($this->db->fetchOne($this->dataQuery->__toString()));
                        break;
                }
            }

            public function user_exists($uid){
                return($this->user('exists', array('uid' => $uid)));
            }
        }

        $return = $acl_test->user_exists(1);

    Read the article

  • Counting a cell up per Objects

    - by Auro
    Hey, I've got a problem once again :D A little info first: I'm trying to copy data from one table to another table (the structure is the same). One cell needs to be incremented, beginning at 1 per group (just like a history). I have this table:

        create table My_Test/My_Test2 (
            my_Id   Number(8,0),
            my_Num  Number(6,0),
            my_Data Varchar2(100));

    (my_Id, my_Num is a composite PK.) If I want to insert a new row, I need to check if the value in my_Id already exists; if this is true, then I need to use the next my_Num for this Id. I have this in my table:

        My_Id  My_Num  My_Data
        1      1       'test1'
        1      2       'test2'
        2      1       'test3'

    If I now add a row for my_Id 1, the new row would look like this:

        My_Id  My_Num  My_Data
        1      3       'test4'

    This sounds pretty easy. Now I need to write it in SQL; on SQL Server I had the same problem and I used this:

        Insert Into My_Test (My_Id, My_Num, My_Data)
        SELECT my_Id,
               (SELECT CASE (CASE MAX(a.my_Num) WHEN NULL THEN 0 Else Max(A.My_Num) END) + b.My_Num
                            WHEN NULL THEN 1
                            ELSE (CASE MAX(a.My_Num) WHEN NULL THEN 0 Else Max(A.My_Num) END) + b.My_Num
                       END
                From My_Test A
                where my_id = 1),
               My_Data
        From My_Test2 B
        where my_id = 1;

    This Select gives back null if no rows are found in the subselect. Is there a way I could use MAX in the CASE, and if it gives back null, have it use 0 or 1? Greets, Auro

    Read the article

  • JQuery quiz app - use <id> tag to toggle variable on/off

    - by hairyllama
    Hi, I am writing a jQuery PhoneGap quiz app with a number of categories a user can select via checkbox. Relevant questions belonging to those categories are then returned. However, I have two huge switch statements to change the relevant variables from 0 to 1 if the checkbox for that category is selected, and vice versa (this info is used to build a compound db query). The value of the variable behind the checkbox is only ever 0 or 1, so is there a better way to do this? My HTML is:

        <h2>Categories</h2>
        <ul class="rounded">
            <li>Cardiology<span class="toggle"><input type="checkbox" id="cardiology" /></span></li>
            <li>Respiratory<span class="toggle"><input type="checkbox" id="respiratory" /></span></li>
            <li>Gastrointestinal<span class="toggle"><input type="checkbox" id="gastrointestinal" /></span></li>
            <li>Neurology<span class="toggle"><input type="checkbox" id="neurology" /></span></li>
        </ul>

    My Javascript is along the lines of:

        var toggle_cardiology = 0;
        var toggle_respiratory = 0;
        var toggle_gastrointestinal = 0;
        var toggle_neurology = 0;

        $(function() {
            $('input[type="checkbox"]').bind('click', function() {
                if ($(this).is(':checked')) {
                    switch (this.id) {
                        case "cardiology": toggle_cardiology = 1; break;
                        case "respiratory": toggle_respiratory = 1; break;
                        case "gastrointestinal": toggle_gastrointestinal = 1; break;
                        case "neurology": toggle_neurology = 1; break;

    etc. (which is cumbersome with 10+ categories, plus an else statement with a switch to change them back). I'm thinking of something along the lines of concatenating the HTML id onto the "toggle_" prefix - in pseudocode:

        if (toggle_ + this.id == 1) { toggle_ + this.id == 0 }
        if (toggle_ + this.id == 0) { toggle_ + this.id == 1 }

    Thanks, Nick.

    Read the article

  • C# specifying generic delegate type param at runtime

    - by smerlin
    Following setup: I have several generic functions, and I need to choose the type and the function, each identified by a string, at runtime. My first try looked like this:

        public static class FOOBAR
        {
            public delegate void MyDelegateType(int param);

            public static void foo<T>(int param){...}
            public static void bar<T>(int param){...}

            public static void someMethod(string methodstr, string typestr)
            {
                MyDelegateType mydel;
                Type mytype;
                switch(typestr)
                {
                    case "int": mytype = typeof(int); break;
                    case "double": mytype = typeof(double); break;
                    default: throw new InvalidTypeException(typestr);
                }
                switch(methodstr)
                {
                    case "foo": mydel = foo<mytype>; //error
                        break;
                    case "bar": mydel = bar<mytype>; //error
                        break;
                    default: throw new InvalidTypeException(methodstr);
                }
                for(int i=0; i<1000; ++i)
                    mydel(i);
            }
        }

    Since this didn't work, I nested those switches (a methodstr switch inside the typestr switch, or vice versa), but that solution is really ugly and unmaintainable. The number of types is pretty much fixed, but the number of functions like foo or bar will increase by high numbers, so I don't want nested switches. So how can I make this work without using nested switches?

    Read the article

  • Strange behaviour when collapsing lines in XML bound WPF Datagrid

    - by Flossn
    I am using a WPF DataGrid with an XML file as DataContext. All is working well except for iterating through the table and collapsing individual rows. There are several checkboxes where the user can decide which kinds of rows he wants to see, depending on their error level string. If a checkbox is checked, some of the rows are collapsed, others not. You need to uncheck the checkbox and check it again to collapse the ones missed on the first try, plus some of the others; each time you re-check it, more rows are collapsed. I guess it has something to do with how much of the list is actually visible and how much is not, because of the window size. Thanks in advance.

        foreach (DataGridRow r in rows)
        {
            bool showRow = true;
            var tb = Datagrid.GetCell(dataGridEvents, r, 2).Content;
            string level = ((TextBlock)tb).Text;
            switch (level)
            {
                case "Warning":
                    showRow = checkBoxWarnings.IsChecked.HasValue ? checkBoxWarnings.IsChecked.Value : false;
                    break;
                case "Critical":
                    showRow = checkBoxCritical.IsChecked.HasValue ? checkBoxCritical.IsChecked.Value : false;
                    break;
                case "OK":
                    showRow = checkBoxOK.IsChecked.HasValue ? checkBoxOK.IsChecked.Value : false;
                    break;
                case "Unknown":
                    showRow = checkBoxUnknown.IsChecked.HasValue ? checkBoxUnknown.IsChecked.Value : false;
                    break;
            }
            r.Visibility = showRow ? Visibility.Visible : Visibility.Collapsed;
        }

    Read the article

  • Declaring an object of a conditional type with a System.Type

    - by Chapso
    I am attempting to launch a specific form depending on the selected node of a treeview on the double-click event. The code I need to use to launch the form is a little bulky because I have to ensure that the form is not disposed, and that the form is not already open, before launching a new instance. I'd like to have all of this checking happen in one place at the end of the function, which means that I have to be able to pass the right form type to the code at the end. I'm trying to do this with a System.Type, but that doesn't seem to be working. Could someone point me in the right direction, please?

        With TreeView.SelectedNode
            Dim formType As Type
            Select Case .Text
                Case "Email to VPs"
                    formType = EmailForm.GetType()
                Case "Revise Replacers"
                    formType = DedicatedReplacerForm.GetType()
                Case "Start Email"
                    formType = EmailForm.GetType()
            End Select

            Dim form As formType
            Try
                form = CType(.Tag, formType)
                If Not form.IsDisposed Then
                    form.Activate()
                    Exit Sub
                End If
            Catch ex As NullReferenceException
                'This will error out the first time it is run as the form has not yet
                ' been defined.
            End Try

            form = New formType
            form.MdiParent = Me
            .Tag = form
            CType(TreeView.SelectedNode.Tag, Form).Show()
        End With

    Read the article

  • Trouble managing events in Flex/actionscript

    - by Zaka
    Hello all, I'm doing some newbie tests, so I decided to capture keyboard events to move a rectangle. But I don't get the desired result: unless I click on the TextArea box first, I'm not able to capture the event key code. After that, all goes pretty well. I'm using Eclipse 3.3 + Flex 3.0 on Linux. Here's my code:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"
                        enterFrame="enterFrame(event)" keyDown="onKeyDown(event)">
            <mx:TextArea id="myText" x="200" y="200" width="100" height="100" />
            <mx:Canvas id="myCanvas" x="0" y="0" width="100" height="100" />
            <mx:Script>
                <![CDATA[
                    public var clearColor : uint = 0xFF456798;
                    public var myPoint : Point = new Point(0,0);

                    public function enterFrame(event:Event):void
                    {
                        myCanvas.graphics.clear();
                        myCanvas.graphics.beginFill(0xFF344ff0);
                        myCanvas.graphics.drawRect(myPoint.x, myPoint.y, 40, 40);
                        myCanvas.graphics.endFill();
                    }

                    public function onKeyDown(event:KeyboardEvent):void
                    {
                        myText.text = "Keycode is: " + event.keyCode + "\n";
                        switch(event.keyCode)
                        {
                            case 37: //Left
                                myPoint.x -= 1;
                                break;
                            case 38: //Up
                                myPoint.y -= 1;
                                break;
                            case 39: //Right
                                myPoint.x += 1;
                                break;
                            case 40: //Down
                                myPoint.y += 1;
                                break;
                        }
                    }
                ]]>
            </mx:Script>
        </mx:Application>

    Read the article

  • Dynamic jQuery dialog after data append w/o reloading page. Possible?

    - by Arun
    Howdy, so I have a page with an enormous table in a CRUD interface of sorts. Each link within a span calls a jQuery UI dialog form which fetches its content from another page. When the action taking place (in this case, a creation) has completed, it appends the resulting new data to the table and forces a re-sort of the table. This all happens within the JS and the DOM. The problem with this is that the new table row's CRUD links don't actually trigger the dialog form creation: all the original links in spans are only scanned on document.ready, and since I'm not reloading the page, the new links cannot be seen. Code is as follows:

        $(document).ready(function() {
            var $loading = $('<img src="/images/loading.gif" alt="Loading">');
            $('span a').each(function() {
                var $dialog = $('<div></div>')
                    .append($loading.clone());
                var $link = $(this).one('click', function() {
                    // Dialog Stuff
                    success: function(data) {
                        $('#studies tbody').append(
                            '<tr>' +
                            '<td><span><a href="./?action=update&study=' + data.study_id +
                            '" title="Update Study">Update</a></span></td>' +
                            '</tr>'
                        );
                        fdTableSort.init(#studies); // This re-sorts the table.
                        $(this).dialog('close');
                    }
                    $link.click(function() {
                        $dialog.dialog('open');
                        return false;
                    });
                    return false;
                });
            });
        });

    Basically, my question is whether there is any way to trigger a jQuery re-evaluation of the page's links without forcing me to do a browser page refresh?

    Read the article

  • In Castle Windsor, can I register a Interface component and get a proxy of the implementation?

    - by Thiado de Arruda
    Let's consider some cases:

        _windsor.Register(Component.For<IProductServices>().ImplementedBy<ProductServices>().Interceptors(typeof(SomeInterceptorType)));

    In this case, when I ask for an IProductServices, Windsor will proxy the interface to intercept the interface method calls. If instead I do this:

        _windsor.Register(Component.For<ProductServices>().Interceptors(typeof(SomeInterceptorType)));

    then I can't ask Windsor to resolve IProductServices; instead I ask for ProductServices and it will return a dynamic subclass that will intercept virtual method calls. Of course the dynamic subclass still implements IProductServices.

    My question is: can I register the interface component like the first case, and get the subclass proxy like in the second case? There are two reasons for me wanting this:

    1 - Because the code that is going to resolve cannot know about the ProductServices class, only about the IProductServices interface.

    2 - Because some event invocations that pass the sender as a parameter will pass the ProductServices object, and in the first case this object is a field on the dynamic proxy, not the real object returned by Windsor.

    Let me give an example of how this can complicate things. Let's say I have a custom collection that does something when its items notify a property change:

        private void ItemChanged(object sender, PropertyChangedEventArgs e)
        {
            int senderIndex = IndexOf(sender);
            SomeActionOnItemIndex(senderIndex);
        }

    This code will fail if I added an interface proxy, because the sender will be the field in the interface proxy, and IndexOf(sender) will return -1.

    Read the article

  • NHibernate with string primary key and relationships

    - by John_
    I have just been stumped by this problem for an hour, and I eventually, annoyingly, found the cause.

    THE CIRCUMSTANCES

    I have a table which uses a string as a primary key; this table has various many-to-one and many-to-many relationships, all off this primary key. When searching for multiple items from the table, all relationships were brought back. However, whenever I tried to get the object by the primary key (string), it was not bringing back any relationships; they were always set to 0.

    THE PARTIAL SOLUTION

    So I looked into my logs to see what the SQL was doing, and it was returning the correct results. So I tried various things in all sorts of random ways, and eventually worked out what it was. The case of the string being passed into the get method was not EXACTLY the same case as it was in the database, so when it tried to match up the relationship items with the main entity it was finding nothing (or at least NHibernate wasn't, because as I stated above the SQL was actually returning the correct results).

    THE REAL SOLUTION

    Has anyone else come across this? If so, how do you tell NHibernate to ignore case when matching SQL results to the entity? It is silly, because it worked perfectly well before; now, all of a sudden, it has started to pay attention to the case of the string.

    Read the article

  • Android - How to circular zoom/magnify part of image?

    - by IZI_Shadow_IZI
    I am trying to allow the user to touch the image, and then a circular magnifier will show that allows the user to better select a certain area on the image. When the user releases the touch, the magnified portion will disappear. This is used in several photo editing apps, and I am trying to implement my own version of it. The code I have below does magnify a circular portion of the imageview, but does not delete or clear the zoom once I release my finger. I currently set a bitmap to a canvas using canvas = new Canvas(bitMap); and then set the imageview using takenPhoto.setImageBitmap(bitMap); I am not sure if I am going about it the right way. The onTouch code is below:

        zoomPos = new PointF(0, 0);
        takenPhoto.setOnTouchListener(new OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                int action = event.getAction();
                switch (action) {
                case MotionEvent.ACTION_DOWN:
                    zoomPos.x = event.getX();
                    zoomPos.y = event.getY();
                    matrix.reset();
                    matrix.postScale(2f, 2f, zoomPos.x, zoomPos.y);
                    shader.setLocalMatrix(matrix);
                    canvas.drawCircle(zoomPos.x, zoomPos.y, 20, shaderPaint);
                    takenPhoto.invalidate();
                    break;
                case MotionEvent.ACTION_MOVE:
                    zoomPos.x = event.getX();
                    zoomPos.y = event.getY();
                    matrix.reset();
                    matrix.postScale(2f, 2f, zoomPos.x, zoomPos.y);
                    canvas.drawCircle(zoomPos.x, zoomPos.y, 20, shaderPaint);
                    takenPhoto.invalidate();
                    break;
                case MotionEvent.ACTION_UP:
                    //clear zoom here?
                    break;
                case MotionEvent.ACTION_CANCEL:
                    break;
                default:
                    break;
                }
                return true;
            }
        });
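    One common way to clear the magnifier on release is to repaint an untouched copy of the photo over the working bitmap. A sketch of the ACTION_UP branch, assuming a hypothetical originalBitmap field holding a pristine copy of the image made before any drawing:

        case MotionEvent.ACTION_UP:
            // Paint the pristine copy over the working bitmap to erase the
            // magnified circle, then ask the view to redraw itself.
            canvas.drawBitmap(originalBitmap, 0, 0, null);
            takenPhoto.invalidate();
            break;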

    Read the article

  • How to save enum settings in Visual Studio project properties?

    - by zaidwaqi
    Hi, in the Settings tab in Visual Studio, I can see a Name, Type, Scope, Value table. Defining settings is intuitive if the data type is already within the Type drop-down list, i.e. integer, string, long, etc. But I can't find enum anywhere. How do I save enum settings then? For now, I have the following, which clutters my code too much:

        public enum Action
        {
            LOCK = 9,
            FORCED_LOGOFF = 12,
            SHUTDOWN = 14,
            REBOOT,
            LOGOFF = FORCED_LOGOFF
        };

    I define Action as int in the settings, and then I have to do:

        switch (Properties.Settings.Default.Action)
        {
            case 9:
                SetAction(Action.LOCK);
                break;
            case 12:
                SetAction(Action.FORCED_LOGOFF);
                break;
            case 14:
                SetAction(Action.SHUTDOWN);
                break;
            case 15:
                SetAction(Action.REBOOT);
                break;
            default:
                SetAction(Action.LOCK);
                break;
        }

    It would be nice if I could simply do something like SetAction(Properties.Settings.Default.Action); to replace all of the above, but I don't know how to save an enum in the settings. Hope my question is clear. Thanks.

    Read the article

  • showDialog in Activity not displaying dialog

    - by Mohit Deshpande
    Here is my code:

        public class TasksList extends ListActivity {
            ...
            private static final int COLUMNS_DIALOG = 7;
            private static final int ORDER_DIALOG = 8;
            ...

            /**
             * @see android.app.Activity#onCreateDialog(int)
             */
            @Override
            protected Dialog onCreateDialog(int id) {
                Dialog dialog;
                final String[] columns;
                Cursor c = managedQuery(Tasks.CONTENT_URI, null, null, null, null);
                columns = c.getColumnNames();
                final String[] order = { "Ascending", "Descending" };
                switch (id) {
                case COLUMNS_DIALOG:
                    AlertDialog.Builder columnDialog = new AlertDialog.Builder(this);
                    columnDialog.setSingleChoiceItems(columns, -1,
                            new DialogInterface.OnClickListener() {
                                @Override
                                public void onClick(DialogInterface dialog, int which) {
                                    bundle.putString("column", columns[which]);
                                }
                            });
                    dialog = columnDialog.create();
                case ORDER_DIALOG:
                    AlertDialog.Builder orderDialog = new AlertDialog.Builder(this);
                    orderDialog.setSingleChoiceItems(order, -1,
                            new DialogInterface.OnClickListener() {
                                @Override
                                public void onClick(DialogInterface dialog, int which) {
                                    String orderS;
                                    if (order[which].equalsIgnoreCase("Ascending"))
                                        orderS = "ASC";
                                    else
                                        orderS = "DESC";
                                    bundle.putString("order", orderS);
                                }
                            });
                    dialog = orderDialog.create();
                default:
                    dialog = null;
                }
                return dialog;
            }

            /**
             * @see android.app.Activity#onOptionsItemSelected(android.view.MenuItem)
             */
            @Override
            public boolean onOptionsItemSelected(MenuItem item) {
                switch (item.getItemId()) {
                case SORT_MENU:
                    showDialog(COLUMNS_DIALOG);
                    showDialog(ORDER_DIALOG);
                    String orderBy = bundle.getString("column") + bundle.getString("order");
                    Cursor tasks = managedQuery(Tasks.CONTENT_URI, projection, null, null, orderBy);
                    adapter = new TasksAdapter(this, tasks);
                    getListView().setAdapter(adapter);
                    break;
                case FILTER_MENU:
                    break;
                }
                return false;
            }
        }

    The showDialog doesn't display the dialog. I used the debugger and it does execute these statements, but the dialog doesn't show.
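    One likely culprit in the code as posted: the switch in onCreateDialog never breaks or returns, so every id falls through to the default branch and dialog ends up null, which makes showDialog show nothing. A minimal sketch of the fix, assuming the builder code stays as posted:

        switch (id) {
        case COLUMNS_DIALOG:
            // ... build columnDialog as above ...
            dialog = columnDialog.create();
            break; // without this, execution falls through to the next case
        case ORDER_DIALOG:
            // ... build orderDialog as above ...
            dialog = orderDialog.create();
            break;
        default:
            dialog = null;
        }
        return dialog;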

    Read the article

  • [Scala] Using overloaded, typed methods on a collection

    - by stephanos
    I'm quite new to Scala and struggling with the following: I have database objects (type of BaseDoc) and value objects (type of BaseVO). Now there are multiple convert methods (all called 'convert') that take an instance of an object and convert it to the other type accordingly. For example:

        def convert(doc: ClickDoc): ClickVO = doc match {
          case null => null
          case _ =>
            val result = new ClickVO
            result.x = doc.x
            result.y = doc.y
            result
        }

    Now I sometimes need to convert a list of objects. How would I do this - I tried:

        def convert[D <: MyBaseDoc, V <: BaseVO](docs: List[D]): List[V] = docs match {
          case List() => List()
          case xs => xs.map(doc => convert(doc))
        }

    Which results in 'overloaded method value convert with alternatives ...'. I tried to add manifest information to it, but couldn't make it work. I couldn't even create one method for each, because it'd say that they have the same parameter type after type erasure (List). Ideas welcome!

    Read the article

  • Variable from block is put into a calculation but throws off wrong reading

    - by user2926620
    I am having trouble retrieving a double variable that is established outside a block and assigned inside it, so that I can apply it to a calculation. The variable that I want returned is double quarter = 0; but when I plug quarter into my first else/if statement, it plugs in 0 and not the value from my switch block. What can I do to retrieve the value?

        double quarter = 0;

        // Date entry will be calculated by how much KW user enters
        switch (input) {
            case "2/15/13":
                quarter = kwUsed * 0.10;
                break;
            case "4/15/13":
                quarter = kwUsed * 0.12;
                break;
            case "8/15/13":
                quarter = kwUsed * 0.15;
                break;
            case "11/15/13":
                quarter = kwUsed * 0.15;
                break;
            default:
                System.out.println("Invalid date");
        }

        // Declaring variables for calculations
        double base = 0;
        double over = 0;
        double excess = 0;
        double math1 = 0;
        double math2 = 0;

        // KW Calculations
        if (kwUsed <= 350) {
            base = quarter;
        } else if (kwUsed <= 500) {
            math1 = ((kwUsed - 350) * quarter);
            base = ((kwUsed * quarter) - math1);
            over = ((math1 * 0.1) + math1);
        } else if (kwUsed > 500) {
            math2 = ((kwUsed - 350) * 0.1);
            base = ((kwUsed * 0.1) - math2);
            math2 = ((kwUsed - 350) - 50);
            over = ((math2 * 0.1) + (15 * 0.1));
            double math3 = ((kwUsed - 500) * 0.1);
            excess = ((math3 * 0.25) + math3);
        }

    Edited to clarify question.
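    The switch itself is not the problem: a local variable assigned inside a switch keeps its value afterwards. A self-contained check of that mechanism, with made-up inputs; if quarter is still 0 in the real program, the likely causes are the default branch being hit (the date string not matching any case exactly) or kwUsed being 0:

        public class QuarterDemo {
            public static void main(String[] args) {
                double kwUsed = 400;        // made-up usage
                String input = "2/15/13";   // stray whitespace here would miss every case
                double quarter = 0;
                switch (input) {
                    case "2/15/13": quarter = kwUsed * 0.10; break;
                    case "4/15/13": quarter = kwUsed * 0.12; break;
                    default: System.out.println("Invalid date");
                }
                System.out.println(quarter); // prints 40.0: the assignment survives the switch
            }
        }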

    Read the article

  • How to move image in Applet?

    - by user1609804
    I want to move the character left, right, up, and down in an applet, but it is not moving at all. Here is my code; help please.

        import javax.swing.JPanel;
        import java.awt.image.BufferedImage;
        import java.io.*;
        import javax.imageio.ImageIO;
        import java.applet.*;
        import java.awt.event.*;
        import java.awt.*;

        public class drawCenter extends Applet {

            private int x, y; // the x and y of the position of the player
            private BufferedImage image, pos;

            public void init() {
                try {
                    image = ImageIO.read(new File("pokemonCenter.png"));
                    pos = ImageIO.read(new File("player/maleInGame.png"));
                } catch (IOException ex) {
                }
                x = 150;
                y = 171;
            }

            public void keyPressed(KeyEvent e) {
                int keyCode = e.getKeyCode();
                switch (keyCode) {
                    case KeyEvent.VK_UP:
                        if (y > 0) {
                            y = y - 19;
                            repaint();
                        }
                        break;
                    case KeyEvent.VK_DOWN:
                        if (y < 171) {
                            y = y + 19;
                            repaint();
                        }
                        break;
                    case KeyEvent.VK_LEFT:
                        if (x > 0) {
                            x = x - 15;
                            repaint();
                        }
                        break;
                    case KeyEvent.VK_RIGHT:
                        if (x < 285) {
                            x = x + 15;
                            repaint();
                        }
                        break;
                }
                e.consume();
            }

            public void keyReleased() {
            }

            public void paint(Graphics g) {
                g.drawImage(image, 0, 0, null);
                g.drawImage(pos, x, y, null);
            }
        }
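    The posted class defines keyPressed but never implements KeyListener or registers itself, so AWT never delivers key events to it. A minimal sketch of the missing wiring, assuming the rest of the class stays as posted:

        import java.applet.Applet;
        import java.awt.event.KeyEvent;
        import java.awt.event.KeyListener;

        public class drawCenter extends Applet implements KeyListener {

            public void init() {
                // ... load images and set x/y as before ...
                addKeyListener(this);   // without this, keyPressed is never called
                setFocusable(true);
                requestFocus();         // key events only reach the focused component
            }

            public void keyPressed(KeyEvent e) { /* movement switch as posted */ }

            // KeyListener requires all three methods, each taking a KeyEvent.
            public void keyReleased(KeyEvent e) { }
            public void keyTyped(KeyEvent e) { }
        }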

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Ajay Khanna
    As 2012 fades and we usher in a New Year, let's look back at some of the hottest BPM trends and those we'll be seeing more of in the coming months.

    BPM is as much about people as it is about technology. As people adopt new ways of engagement, new channels of communication and new devices to interact, the changes are reflected in BPM practices. As Social and Mobile have become an integral part of our personal and professional lives, we'll see tighter integration of social and mobile with BPM, and more use cases emerging for smarter process management in 2013. And with products and services becoming less differentiated, organizations will strive to differentiate on Customer Experience. Concepts like Pace Layered Architecture and Dynamic Case Management will provide more flexibility and agility to IT groups and knowledge workers. Take a look at some of these capabilities we showcased (see video) at Oracle OpenWorld 2012. Some of the trends that will continue to gain momentum in 2013:

    Social networks and social media have provided a new way for businesses to engage with customers. A prospect is likely to reach out to their social network before making any purchase. Companies are increasingly engaging with customers in social networks to influence their purchasing decisions, as well as listening to customers via tools like sentiment analysis to see what customers think about a particular product or process. These insights are valuable as companies look to improve their processes. Inside organizations, workers are using social tools to engage with each other to design new products and processes. Social collaboration tools are being used to resolve issues where an employee needs consultation to reach a decision. Oracle BPM Suite includes social interaction as an integral part of its process design and work management to empower today's business users.

    Ubiquitous smart mobile devices are trending as a tool of choice for many workers. Many companies are adopting the policy of "Bring Your Own Device," and the device of choice is a tablet. Devices like smart phones and tablets not only provide mobility to workers and customers, but they also provide additional important information – the context. By integrating the mobile context (location, photos, and preferences) into your processes, organizations can make much more informed decisions, as well as offer more personalized service to customers. Using Oracle ADF Mobile, you can easily create user interfaces for mobile devices and also capture location data for process execution.

    Customer experience was at the forefront of trending topics in 2012. Organizations are trying to understand their customers better and offer them more personalized and differentiated services. Customer experience is paramount when companies design sales and support processes. Companies are looking to BPM to consistently and efficiently orchestrate customer-facing processes across disparate systems, departments and channels of communication. Oracle BPM Suite provides just the right capabilities for organizations to design and deliver an excellent customer experience.

    Pace Layered Architecture strategy is gaining traction as a way to maximize agility and minimize disruption in organizations. It provides a framework to manage the evolution of your information system when different pieces of it are changing at different rates and need to be updated independent of one another. Oracle Fusion Middleware and Oracle BPM Suite are designed with this in mind. The database layer, integration layer, application layer, and process layer should not be required to change at the same time. Most of the business changes to policy or process can be done at the process layer without disrupting the whole infrastructure. By understanding the type of change needed at a particular level, organizations can become much more agile and efficient.

    Adaptive Case Management proposes more flexibility to manage processes or cases that do not follow a structured process flow. In such situations, the knowledge worker managing the case needs to evaluate what step should occur next, because the sequence of steps can't be predetermined. Another characteristic is that it requires much more collaboration than a straight-through process. As simple processes become automated, and customers adopt more and more self-service, the cases that reach the case workers are much more complex and need more investigation. Oracle BPM Suite includes comprehensive adaptive case management capability to manage such unstructured and complex processes.

    Smart BPM, or making your BPM intelligent, has been the holy grail for BPM practitioners who imagined that one day BPM would become one with Business Intelligence, Business Activity Monitoring and Complex Event Processing, making it much more responsive and helpful in organizational decision making. In 2013, organizations will begin to deploy these intelligent BPM solutions. Oracle offers an integrated solution that brings together the powerful functionality of BI, BAM, event processing, and Real Time Decisions to help organizations create smart process based solutions.

    In order to help customers reach their BPM goals faster and remove risks associated with BPM initiatives, Oracle has introduced Oracle Process Accelerators, pre-built best practices applications built on Oracle BPM Suite that are fully production grade and ready to deploy.

    These are exciting times for BPM practitioners and there is so much to look forward to in 2013. We wish you a very happy and prosperous New Year 2013. Happy BPMing!

    Read the article

  • Recover Deleted Files on an NTFS Hard Drive from a Ubuntu Live CD

    - by Trevor Bekolay
    Accidentally deleting a file is a terrible feeling. Not being able to boot into Windows and undelete that file makes it even worse. Fortunately, you can recover deleted files on NTFS hard drives from an Ubuntu Live CD.

    To show this process, we created four files on the desktop of a Windows XP machine, and then deleted them. We then booted up the same machine with the bootable Ubuntu 9.10 USB flash drive that we created last week. Once Ubuntu 9.10 boots up, open a terminal by clicking Applications in the top left of the screen, and then selecting Accessories > Terminal.

    To undelete our files, we first need to identify the hard drive that we want to undelete from. In the terminal window, type in:

        sudo fdisk -l

    and press enter. What you're looking for is a line that ends with HPFS/NTFS (under the heading System). In our case, the device is "/dev/sda1". This may be slightly different for you, but it will still begin with /dev/. Note this device name.

    If you have more than one hard drive partition formatted as NTFS, then you may be able to identify the correct partition by the size. If you look at the second line of text in the screenshot above, it reads "Disk /dev/sda: 136.4 GB, …" This means that the hard drive that Ubuntu has named /dev/sda is 136.4 GB large. If your hard drives are of different sizes, then this information can help you track down the right device name to use. Alternatively, you can just try them all, though this can be time consuming for large hard drives.

    Now that you know the name Ubuntu has assigned to your hard drive, we'll scan it to see what files we can uncover. In the terminal window, type:

        sudo ntfsundelete <HD name>

    and hit enter. In our case, the command is:

        sudo ntfsundelete /dev/sda1

    The names of files that can be recovered show up in the far right column. The percentage in the third column tells us how much of that file can be recovered. Three of the four files that we originally deleted are showing up in this list, even though we shut down the computer right after deleting the four files – so even in ideal cases, your files may not be recoverable. Nevertheless, we have three files that we can recover – two JPGs and an MPG.

    Note: ntfsundelete is immediately available in the Ubuntu 9.10 Live CD. If you are in a different version of Ubuntu, or for some other reason get an error when trying to use ntfsundelete, you can install it by entering "sudo apt-get install ntfsprogs" in a terminal window.

    To quickly recover the two JPGs, we will use the * wildcard to recover all of the files that end with .jpg. In the terminal window, enter:

        sudo ntfsundelete <HD name> -u -m *.jpg

    which is, in our case:

        sudo ntfsundelete /dev/sda1 -u -m *.jpg

    The two files are recovered from the NTFS hard drive and saved in the current working directory of the terminal. By default, this is the home directory of the current user, though we are working in the Desktop folder.

    Note that the ntfsundelete program does not make any changes to the original NTFS hard drive. If you want to take those files and put them back on the NTFS hard drive, you will have to move them there after they are undeleted with ntfsundelete. Of course, you can also put them on your flash drive or open Firefox and email them to yourself – the sky's the limit!

    We have one more file to undelete – our MPG. Note the first column on the far left. It contains a number, the file's inode. Think of this as the file's unique identifier. Note this number.

    To undelete a file by its inode, enter the following in the terminal:

        sudo ntfsundelete <HD name> -u -i <Inode>

    In our case, this is:

        sudo ntfsundelete /dev/sda1 -u -i 14159

    This recovers the file, along with an identifier that we don't really care about. All three of our recoverable files are now recovered. However, Ubuntu lets us know visually that we can't use these files yet. That's because the ntfsundelete program saves the files as the "root" user, not the "ubuntu" user. We can verify this by typing the following in our terminal window:

        ls -l

    We want these three files to be owned by ubuntu, not root. To do this, enter the following in the terminal window:

        sudo chown ubuntu <Files>

    If the current folder has other files in it, you may not want to change their owner to ubuntu. However, in our case, we only have these three files in this folder, so we will use the * wildcard to change the owner of all three files:

        sudo chown ubuntu *

    The files now look normal, and we can do whatever we want with them. Hopefully you won't need to use this tip, but if you do, ntfsundelete is a nice command-line utility. It doesn't have a fancy GUI like many of the similar Windows programs, but it is a powerful tool that can recover your files quickly. See ntfsundelete's manual page for more detailed usage information.

    Read the article

  • WebSocket and Java EE 7 - Getting Ready for JSR 356 (TOTD #181)

    - by arungupta
    WebSocket is developed as part of the HTML5 specification and provides a bi-directional, full-duplex communication channel over a single TCP socket. It provides a dramatic improvement over the traditional approaches of Polling, Long-Polling, and Streaming for two-way communication. There is no latency from establishing new TCP connections for each HTTP message.

    There is a WebSocket API and the WebSocket Protocol. The Protocol defines "handshake" and "framing". The handshake defines how a normal HTTP connection can be upgraded to a WebSocket connection. The framing defines the wire format of the message. The design philosophy is to keep the framing minimal to avoid overhead. Both text and binary data can be sent using the API.

    WebSocket may look like a competing technology to Server-Sent Events (SSE), but it is not. Here are the key differences:

    - WebSocket can send and receive data from a client. A typical example of WebSocket is a two-player game or a chat application. Server-Sent Events can only push data to the client. A typical example of SSE is a stock ticker or news feed. With SSE, XMLHttpRequest can be used to send data to the server.
    - For server-only updates, WebSocket has extra overhead and programming can be unnecessarily complex. SSE provides a simple and easy-to-use model that is much better suited.
    - SSEs are sent over traditional HTTP, so no modification is required on the server side. WebSocket requires servers that understand the protocol.
    - SSE has several features that are missing from WebSocket, such as automatic reconnection, event IDs, and the ability to send arbitrary events. The client automatically tries to reconnect if the connection is closed. The default wait before trying to reconnect is 3 seconds and can be configured by including a "retry: XXXX\n" header, where XXXX is the milliseconds to wait before trying to reconnect.
    - An event stream can include a unique event identifier. This allows the server to determine which events need to be fired to each client in case the connection is dropped in between.
    - The data can span multiple lines and can be of any text format as long as the EventSource message handler can process it.
    - WebSocket provides true real-time updates; SSE can be configured to provide close to real-time by setting appropriate timeouts.

    OK, so all excited about WebSocket? Want to convert your POJOs into WebSocket endpoints? websocket-sdk and GlassFish 4.0 are here to help! The complete source code shown in this project can be downloaded here.

    On the server side, the WebSocket SDK converts a POJO into a WebSocket endpoint using simple annotations. Here is how a WebSocket endpoint looks:

        @WebSocket(path="/echo")
        public class EchoBean {

            @WebSocketMessage
            public String echo(String message) {
                return message + " (from your server)";
            }
        }

    In this code, "@WebSocket" is a class-level annotation that declares a POJO to accept WebSocket messages. The path at which the messages are accepted is specified in this annotation. "@WebSocketMessage" indicates the Java method that is invoked when the endpoint receives a message. This method implementation echoes the received message concatenated with an additional string.

    The client-side HTML page looks like:

        <div style="text-align: center;">
            <form action="">
                <input onclick="send_echo()" value="Press me" type="button">
                <input id="textID" name="message" value="Hello WebSocket!" type="text"><br>
            </form>
        </div>
        <div id="output"></div>

    WebSocket allows full-duplex communication. So the client, a browser in this case, can send a message to a server, a WebSocket endpoint in this case. And the server can send a message to the client at the same time. This is unlike HTTP, which follows a "request" followed by a "response". In this code, the "send_echo" method in the JavaScript is invoked on the button click. There is also a <div> placeholder to display the response from the WebSocket endpoint. The JavaScript looks like:

        <script language="javascript" type="text/javascript">
            var wsUri = "ws://localhost:8080/websockets/echo";
            var websocket = new WebSocket(wsUri);
            websocket.onopen = function(evt) { onOpen(evt) };
            websocket.onmessage = function(evt) { onMessage(evt) };
            websocket.onerror = function(evt) { onError(evt) };

            function init() {
                output = document.getElementById("output");
            }

            function send_echo() {
                websocket.send(textID.value);
                writeToScreen("SENT: " + textID.value);
            }

            function onOpen(evt) {
                writeToScreen("CONNECTED");
            }

            function onMessage(evt) {
                writeToScreen("RECEIVED: " + evt.data);
            }

            function onError(evt) {
                writeToScreen('<span style="color: red;">ERROR:</span> ' + evt.data);
            }

            function writeToScreen(message) {
                var pre = document.createElement("p");
                pre.style.wordWrap = "break-word";
                pre.innerHTML = message;
                output.appendChild(pre);
            }

            window.addEventListener("load", init, false);
        </script>

    In this code:

    - The URI to connect to on the server side is of the format ws://<HOST>:<PORT>/websockets/<PATH>. "ws" is a new URI scheme introduced by the WebSocket protocol. <PATH> is the path on the endpoint where the WebSocket messages are accepted. In our case, it is ws://localhost:8080/websockets/echo. WEBSOCKET_SDK-1 will ensure that the context root is included in the URI as well.
    - The WebSocket is created as a global object so that the connection is created only once. This object establishes a connection with the given host, port and the path at which the endpoint is listening.
    - The WebSocket API defines several callbacks that can be registered on specific events. The "onopen", "onmessage", and "onerror" callbacks are registered in this case. The callbacks print a message on the browser indicating which one is called, and additionally also print the data sent/received.
    - On the button click, the WebSocket object is used to transmit text data to the endpoint. Binary data can be sent as one blob or using buffering.

    The HTTP request headers sent for the WebSocket call are:

        GET ws://localhost:8080/websockets/echo HTTP/1.1
        Origin: http://localhost:8080
        Connection: Upgrade
        Sec-WebSocket-Extensions: x-webkit-deflate-frame
        Host: localhost:8080
        Sec-WebSocket-Key: mDbnYkAUi0b5Rnal9/cMvQ==
        Upgrade: websocket
        Sec-WebSocket-Version: 13

    And the response headers received are:

        Connection: Upgrade
        Sec-WebSocket-Accept: q4nmgFl/lEtU2ocyKZ64dtQvx10=
        Upgrade: websocket
        (Challenge Response): 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

    The headers are shown in Chrome as shown below. The complete source code shown in this project can be downloaded here. The builds from websocket-sdk are integrated into GlassFish 4.0 builds.

    Would you like to live on the bleeding edge? Then follow the instructions below to check out the workspace and install the latest SDK:

    1. Check out the source code:

        svn checkout https://svn.java.net/svn/websocket-sdk~source-code-repository

    2. Build and install the trunk in your local repository:

        mvn install

    3. Copy "./bundles/websocket-osgi/target/websocket-osgi-0.3-SNAPSHOT.jar" to "glassfish3/glassfish/modules/websocket-osgi.jar" in your GlassFish 4 latest promoted build. Notice, you need to overwrite the JAR file.

    Anybody interested in building a cool application using WebSocket and getting it running on GlassFish? :-) This work will also feed into JSR 356 - Java API for WebSocket.

    On a lighter side, there seems to be less agreement on the name. Here are some of the options that are prevalent:

    - WebSocket (W3C API, though the URL is www.w3.org/TR/websockets)
    - Web Socket (HTML5 Demos - html5demos.com/web-socket)
    - Websocket (Jenkins Plugin - wiki.jenkins-ci.org/display/JENKINS/Websocket%2BPlugin)
    - WebSockets (used by Mozilla - developer.mozilla.org/en/WebSockets, but WebSocket is used as well)
    - Web sockets (HTML5 Working Group - www.whatwg.org/specs/web-apps/current-work/multipage/network.html)
    - Web Sockets (Chrome Blog - blog.chromium.org/2009/12/web-sockets-now-available-in-google.html)

    I prefer "WebSocket" as that seems to be the most common usage and is used by the W3C API as well. What do you use?

    Read the article

  • How to Achieve OC4J RMI Load Balancing

    - by fip
    This is an old, Oracle SOA and OC4J 10G topic. In fact this is not even a SOA topic per se. Questions of RMI load balancing arise when you developed custom web applications accessing human tasks running off a remote SOA 10G cluster. Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusions in the field how OC4J RMI load balancing work. Hence I decide to dust off an old tech note that I wrote a few years back and share it with the general public. Here is the tech note: Overview A typical use case in Oracle SOA is that you are building web based, custom human tasks UI that will interact with the task services housed in a remote BPEL 10G cluster. Or, in a more generic way, you are just building a web based application in Java that needs to interact with the EJBs in a remote OC4J cluster. In either case, you are talking to an OC4J cluster as RMI client. Then immediately you must ask yourself the following questions: 1. How do I make sure that the web application, as an RMI client, even distribute its load against all the nodes in the remote OC4J cluster? 2. How do I make sure that the web application, as an RMI client, is resilient to the node failures in the remote OC4J cluster, so that in the unlikely case when one of the remote OC4J nodes fail, my web application will continue to function? That is the topic of how to achieve load balancing with OC4J RMI client. Solutions You need to configure and code RMI load balancing in two places: 1. Provider URL can be specified with a comma separated list of URLs, so that the initial lookup will land to one of the available URLs. 2. Choose a proper value for the oracle.j2ee.rmi.loadBalance property, which, along side with the PROVIDER_URL property, is one of the JNDI properties passed to the JNDI lookup.(http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI) More details below: About the PROVIDER_URL The JNDI property java.name.provider.url's job is, when the client looks up for a new context at the very first time in the client session, to provide a list of RMI context The value of the JNDI property java.name.provider.url goes by the format of a single URL, or a comma separate list of URLs. A single URL. For example: opmn:ormi://host1:6003:oc4j_instance1/appName1 A comma separated list of multiple URLs. For examples:  opmn:ormi://host1:6003:oc4j_instanc1/appName, opmn:ormi://host2:6003:oc4j_instance1/appName, opmn:ormi://host3:6003:oc4j_instance1/appName When the client looks up for a new Context the very first time in the client session, it sends a query against the OPMN referenced by the provider URL. The OPMN host and port specifies the destination of such query, and the OC4J instance name and appName are actually the “where clause” of the query. When the PROVIDER URL reference a single OPMN server Let's consider the case when the provider url only reference a single OPMN server of the destination cluster. In this case, that single OPMN server receives the query and returns a list of the qualified Contexts from all OC4Js within the cluster, even though there is a single OPMN server in the provider URL. A context represent a particular starting point at a particular server for subsequent object lookup. 
For example, if the URL is opmn:ormi://host1:6003:oc4j_instance1/appName, then, OPMN will return the following contexts: appName on oc4j_instance1 on host1 appName on oc4j_instance1 on host2, appName on oc4j_instance1 on host3,  (provided that host1, host2, host3 are all in the same cluster) Please note that One OPMN will be sufficient to find the list of all contexts from the entire cluster that satisfy the JNDI lookup query. You can do an experiment by shutting down appName on host1, and observe that OPMN on host1 will still be able to return you appname on host2 and appName on host3. When the PROVIDER URL reference a comma separated list of multiple OPMN servers When the JNDI propery java.naming.provider.url references a comma separated list of multiple URLs, the lookup will return the exact same things as with the single OPMN server: a list of qualified Contexts from the cluster. The purpose of having multiple OPMN servers is to provide high availability in the initial context creation, such that if OPMN at host1 is unavailable, client will try the lookup via OPMN on host2, and so on. After the initial lookup returns and cache a list of contexts, the JNDI URL(s) are no longer used in the same client session. That explains why removing the 3rd URL from the list of JNDI URLs will not stop the client from getting the EJB on the 3rd server. About the oracle.j2ee.rmi.loadBalance Property After the client acquires the list of contexts, it will cache it at the client side as “list of available RMI contexts”.  This list includes all the servers in the destination cluster. This list will stay in the cache until the client session (JVM) ends. The RMI load balancing against the destination cluster is happening at the client side, as the client is switching between the members of the list. Whether and how often the client will fresh the Context from the list of Context is based on the value of the  oracle.j2ee.rmi.loadBalance. The documentation at http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI list all the available values for the oracle.j2ee.rmi.loadBalance. Value Description client If specified, the client interacts with the OC4J process that was initially chosen at the first lookup for the entire conversation. context Used for a Web client (servlet or JSP) that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be returned each time InitialContext() is invoked. lookup Used for a standalone client that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be created each time the client calls Context.lookup(). Please note the regardless of the setting of oracle.j2ee.rmi.loadBalance property, the “refresh” only occurs at the client. The client can only choose from the "list of available context" that was returned and cached from the very first lookup. That is, the client will merely get a new Context object from the “list of available RMI contexts” from the cache at the client side. The client will NOT go to the OPMN server again to get the list. That also implies that if you are adding a node to the server cluster AFTER the client’s initial lookup, the client would not know it because neither the server nor the client will initiate a refresh of the “list of available servers” to reflect the new node. About High Availability (i.e. 
    About High Availability (Resilience Against Node Failure in the Remote OC4J Cluster)

    What we have discussed above is load balancing. Let's also discuss high availability. This is how high availability works in RMI: when the client uses a context but gets an exception, such as a closed socket, it knows that the server referenced by that context is problematic and will try to get another, unused context from the "list of available contexts". Again, this list is the one that was returned and cached at the very first lookup in the entire client session.
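    As a rough illustration of that failover behavior, a client can simply obtain a fresh Context and retry when a call fails. The sketch below is a simplified, hypothetical retry pattern, not the exact internal mechanism OC4J uses; it assumes oracle.j2ee.rmi.loadBalance is set to context, so that each new InitialContext() can hand back a different member of the cached context list.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class ResilientLookup {

        // Try the lookup up to maxAttempts times (maxAttempts >= 1). On a
        // failure, such as a closed socket surfacing as a NamingException,
        // a new InitialContext() can land on a different, healthy node.
        public static Object lookupWithRetry(Hashtable<String, String> env,
                                             String jndiName,
                                             int maxAttempts) throws NamingException {
            NamingException lastFailure = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    Context ctx = new InitialContext(env);
                    return ctx.lookup(jndiName);
                } catch (NamingException e) {
                    lastFailure = e; // this node is likely down; retry elsewhere
                }
            }
            throw lastFailure;
        }
    }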

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. Well, the understanding is absolutely correct, but the implementation of the same is not as easy as it seems. Similarly, most people think the lookup component just looks up additional information, so they do not pay much attention to it and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting discussion of how to configure the lookup component's cache mode well.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast.

    Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection determines whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results.

    Full Cache

    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode causes the lookup transformation to retrieve the entire set of data from the specified lookup location and store it in SSIS cache. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

    The full cache setting is the most commonly used cache mode, and for good reason. It has the most practical applications and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues, memory pressure in particular, when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there is going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison, even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may or may not be case-sensitive, depending on collation).
    There is a relatively easy workaround: use the UPPER() or LOWER() function on both the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation. Again, none of these issues is a reason to avoid full cache mode, but they should be weighed when deciding whether full cache mode is appropriate in a given situation.

    Full cache mode is ideally useful when one or all of the following conditions exist:

    The reference data set is small to moderately sized.
    The pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable.
    Each distinct key value in the pipeline data set is expected to be found multiple times in that set of data.

    Partial Cache

    When using the partial cache setting, lookup values are still cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value is retrieved individually from the specified source and then cached. To be clear, this is a row-by-row lookup for each distinct key value.

    This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key columns). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

    Partial cache mode is ideally suited for the conditions below:

    The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small.
    The lookup data is too large to effectively store in cache.
    The lookup source is well indexed to allow fast row-by-row retrieval of values.

    No Cache

    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set requires a query against the lookup source. Since no data is cached, it is possible to save a small amount of SSIS memory overhead in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it is critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline.

    I would recommend considering the no cache mode only when all of the conditions below are true:

    The reference data set is too large to reasonably be loaded into SSIS memory.
    The pipeline data set is small and is not expected to grow.
    There are expected to be very few or no duplicates of the key values in the pipeline data set (i.e., there would be no benefit from caching these values).

    Conclusion

    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.
    Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issues, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: SSIS

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >