Search Results

Search found 5349 results on 214 pages for 'override'.


  • RemoteViewsService never gets called whenever we update app widget

    - by user3689160
    I am making one task widget application. This is a simple widget application which can be used to add new tasks by clicking on "+New Task". So ideally what I have done is, I've added a widget first, then when we click on "+New Task" a new activity opens up from where we can type in a new task and it should get updated in the ListView inside the home screen widget. Problem: Now whenever I add a new task, the ListView does not get updated. The data gets inside the database a new is created as well. My onUpdate(...) method gets called as well. There is a WidgetService.java which is a RemoteViewService that is being used to call the RemoteViewFactory but WidgetService.java gets called only when we create the widget, never after that. WidgetService.java never gets called again. I have the following setup MyWidgetProvider.java AppWidgetProvider class for updating the widget and filling its remote views. public class MyWidgetProvider extends AppWidgetProvider { public static int randomNumber=132; static RemoteViews remoteViews = null; @Override public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) { updateList(context,appWidgetManager,appWidgetIds); } public static void updateList(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds){ final int N = appWidgetIds.length; Log.d("MyWidgetProvider", "length="+appWidgetIds.length+""); for (int i = 0; i<N; ++i) { Log.d("MyWidgetProvider", "value"+ i + "="+appWidgetIds[i]); // Calling updateWidgetListView(context, appWidgetIds[i]); to load the listview with data remoteViews = updateWidgetListView(context, appWidgetIds[i]); // Updating the Widget with the new data filled remoteviews appWidgetManager.updateAppWidget(appWidgetIds[i], remoteViews); appWidgetManager.notifyAppWidgetViewDataChanged(appWidgetIds[i], R.id.lv_tasks); } } private static RemoteViews updateWidgetListView(Context context, int appWidgetId) { Intent svcIntent = null; remoteViews = new RemoteViews(context.getPackageName(),R.layout.widget_demo); //RemoteViews Service needed to provide adapter for ListView svcIntent = new Intent(context, WidgetService.class); //passing app widget id to that RemoteViews Service svcIntent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, appWidgetId); //setting a unique Uri to the intent //binding the data from the WidgetService to the svcIntent svcIntent.setData(Uri.parse(svcIntent.toUri(Intent.URI_INTENT_SCHEME))); //setting adapter to listview of the widget remoteViews.setRemoteAdapter(R.id.lv_tasks, svcIntent); //setting click listner on the "New Task" (TextView) of the widget remoteViews.setOnClickPendingIntent(R.id.tv_add_task, buildButtonPendingIntent(context)); return remoteViews; } // Just to create PendingIntent, this will be broadcasted everytime textview containing "New Task" is clicked and OnReceive method of MyWidgetIntentReceiver.java will run public static PendingIntent buildButtonPendingIntent(Context context) { Intent intent = new Intent(); intent.putExtra("TASK_DONE_VALUE_VALID", false); intent.setAction("com.ommzi.intent.action.CLEARTASK"); return PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT); } // This method is defined here so that we can update the remoteviews from other classes as well, like we'll do it in MyWidgetIntentReceiver.java public static void pushWidgetUpdate(Context context, RemoteViews remoteViews) { ComponentName myWidget = new ComponentName(context, MyWidgetProvider.class); AppWidgetManager manager = 
AppWidgetManager.getInstance(context); manager.updateAppWidget(myWidget, remoteViews); } } WidgetService.java A RemoteViewsService class for creating a RemoteViewsFactory. public class WidgetService extends RemoteViewsService { @Override public RemoteViewsFactory onGetViewFactory(Intent intent) { int appWidgetId = intent.getIntExtra(AppWidgetManager.EXTRA_APPWIDGET_ID,AppWidgetManager.INVALID_APPWIDGET_ID); return (new ListProvider(this.getApplicationContext(), intent)); } } ListProvider.java A RemoteViewsFactory class for filling the ListView inside the widget. public class ListProvider implements RemoteViewsFactory { private List<TemplateTaskData> tasks; private Context context = null; private int appWidgetId; int increment=0; public ListProvider(Context context, Intent intent) { this.context = context; appWidgetId = intent.getIntExtra(AppWidgetManager.EXTRA_APPWIDGET_ID,AppWidgetManager.INVALID_APPWIDGET_ID); //appWidgetId = Integer.valueOf(intent.getData().getSchemeSpecificPart())- MyWidgetProvider.randomNumber; populateListItem(); } private void populateListItem() { tasks = new ArrayList<TemplateTaskData>(); com.ommzi.database.sqliteDataBase letStartDB = new com.ommzi.database.sqliteDataBase(context); letStartDB.open(); tasks=letStartDB.getData(); letStartDB.close(); } public int getCount() { return tasks.size(); } public long getItemId(int position) { return position; } public RemoteViews getViewAt(int position) { // We are only showing the text view field here as there is limitation on using Checkbox within the widget // We prefer to on an activity for the user to make comprehensive changes as changes inside the widget is always limited. // To view the supported list of views for the widget you can visit http://developer.android.com/guide/topics/appwidgets/index.html final RemoteViews remoteView = new RemoteViews(context.getPackageName(), R.layout.screen_wcitem); TemplateTaskData taskDTO = tasks.get(position); Log.d("My ListProvider", "1"); remoteView.setTextViewText(R.id.wc_add_task, taskDTO.getTaskName()); if(taskDTO.getTaskDone()){ remoteView.setImageViewResource(R.id.imageView1, R.drawable.checked); }else{ remoteView.setImageViewResource(R.id.imageView1, R.drawable.unchecked); } Intent intent = new Intent(); intent.setAction("com.ommzi.intent.action.GETTASK"); intent.putExtra("TASK_DONE_VALUE_VALID", true); intent.putExtra("ROWID", taskDTO.getRowId()); intent.putExtra("TASK_NAME", taskDTO.getTaskName()); intent.putExtra("TASK_DONE_VALUE", taskDTO.getTaskDone()); remoteView.setOnClickPendingIntent(R.id.imageView1, PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT)); Log.d("My ListProvider, Inside getView", "Value of Incrementor=" + increment + "\n ROWID=" + taskDTO.getRowId() + "\n TASK_NAME=" + taskDTO.getTaskName() + "\n TASK_DONE_VALUE=" +taskDTO.getTaskDone() + "\n Total Database Size=" + tasks.size()); return remoteView; } public RemoteViews getLoadingView() { // TODO Auto-generated method stub return null; } public int getViewTypeCount() { // TODO Auto-generated method stub return 1; } public boolean hasStableIds() { // TODO Auto-generated method stub return false; } public void onCreate() { // TODO Auto-generated method stub } public void onDataSetChanged() { // TODO Auto-generated method stub } public void onDestroy() { // TODO Auto-generated method stub } } GetTaskActivity.java This is another activity that is fired from a BroadcastReceiver(MyWidgetIntentReceiver.java). Purpose of this is to get a new task added to the list of task. 
MyWidgetIntentReceiver.java A BroadcastReceiver fired through a pending intent; it starts the GetTaskActivity.java activity. Please help me out with this problem. I have looked at all the posts here and on other websites and nobody seems to have a solution, but I am sure one exists. Thanks for your help!
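
    A hedged sketch of the usual fix for this symptom, not a verified drop-in for the poster's project: the RemoteViewsFactory is created only once per widget, and notifyAppWidgetViewDataChanged() does not rebind WidgetService; it calls the existing factory's onDataSetChanged(). Re-querying the database there (class names below are taken from the question) refreshes the ListView:

    // Inside ListProvider (the RemoteViewsFactory):
    public void onDataSetChanged() {
        // Invoked on a binder thread each time notifyAppWidgetViewDataChanged()
        // fires for R.id.lv_tasks, so the database can be queried here safely.
        com.ommzi.database.sqliteDataBase db = new com.ommzi.database.sqliteDataBase(context);
        db.open();
        tasks = db.getData();
        db.close();
    }

    // After inserting a new task (for example in MyWidgetIntentReceiver):
    AppWidgetManager mgr = AppWidgetManager.getInstance(context);
    int[] ids = mgr.getAppWidgetIds(new ComponentName(context, MyWidgetProvider.class));
    mgr.notifyAppWidgetViewDataChanged(ids, R.id.lv_tasks);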


  • Not able to get data from JSON completely

    - by Abhinav Raja
    i am getting JSON data from http://abinet.org/?json=1 and displaying the titles in a ListView. the code is working fine but the problem is, it is skipping few titles in my ListView and one title is being repeated. You can see the json data from url given above by copy paste it in JSON editor online http://www.jsoneditoronline.org/ i want titles in the "posts" array to be displayed in ListView, however it is being displayed like this: if you see the JSON data from the link above, its missing like 3 titles (they should come between the first and second title) and 5th title is being repeated. Dont know why this is happening. What minor adjustments i need to do? Please help me. this is my code : public class MainActivity extends Activity { // URL to get contacts JSON private static String url = "http://abinet.org/?json=1"; // JSON Node names private static final String TAG_POSTS = "posts"; static final String TAG_TITLE = "title"; private ProgressDialog pDialog; JSONArray contacts = null; TextView img_url; ArrayList<HashMap<String, Object>> contactList; ListView lv; LazyAdapter adapter; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); lv = (ListView) findViewById(R.id.newslist); contactList = new ArrayList<HashMap<String, Object>>(); new GetContacts().execute(); } private class GetContacts extends AsyncTask<Void, Void, Void> { protected void onPreExecute() { super.onPreExecute(); // Showing progress dialog pDialog = new ProgressDialog(MainActivity.this); pDialog.setMessage("Please wait..."); pDialog.setCancelable(false); pDialog.show(); } protected Void doInBackground(Void... arg0) { // Making a request to url and getting response JSONParser jParser = new JSONParser(); // Getting JSON from URL JSONObject jsonObj = jParser.getJSONFromUrl(url); // if (jsonStr != null) { try { // Getting JSON Array node contacts = jsonObj.getJSONArray(TAG_POSTS); // looping through All Contacts for (int i = 0; i < contacts.length(); i++) { // JSONObject c = contacts.getJSONObject(i); JSONObject posts = contacts.getJSONObject(i); String title = posts.getString(TAG_TITLE).replace("&#8217;", "'"); JSONArray attachment = posts.getJSONArray("attachments"); for (int j = 0; j< attachment.length(); j++){ JSONObject obj = attachment.getJSONObject(j); JSONObject image = obj.getJSONObject("images"); JSONObject image_small = image.getJSONObject("thumbnail"); String imgurl = image_small.getString("url"); HashMap<String, Object> contact = new HashMap<String, Object>(); contact.put("image_url", imgurl); contact.put(TAG_TITLE, title); contactList.add(contact); } } } catch (JSONException e) { e.printStackTrace(); } return null; } @Override protected void onPostExecute(Void result) { super.onPostExecute(result); // Dismiss the progress dialog if (pDialog.isShowing()) pDialog.dismiss(); adapter=new LazyAdapter(MainActivity.this, contactList); lv.setAdapter(adapter); } } } this is my JsonParser class (although its not required): public JSONParser() { } public JSONObject getJSONFromUrl(String url) { // Making HTTP request try { // defaultHttpClient DefaultHttpClient httpClient = new DefaultHttpClient(); HttpPost httpPost = new HttpPost(url); HttpResponse httpResponse = httpClient.execute(httpPost); HttpEntity httpEntity = httpResponse.getEntity(); is = httpEntity.getContent(); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } 
try { BufferedReader reader = new BufferedReader(new InputStreamReader( is, "iso-8859-1"), 8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "n"); } is.close(); json = sb.toString(); } catch (Exception e) { Log.e("Buffer Error", "Error converting result " + e.toString()); } // try parse the string to a JSON object try { jObj = new JSONObject(json); } catch (JSONException e) { Log.e("JSON Parser", "Error parsing data " + e.toString()); } // return JSON String return jObj; } } and this is adapter class: public class LazyAdapter extends BaseAdapter { private Activity activity; private ArrayList<HashMap<String, Object>> data; private static LayoutInflater inflater=null; public LazyAdapter(Activity a,ArrayList<HashMap<String, Object>> d) { activity = a; data=d; inflater = (LayoutInflater)activity.getSystemService(Context.LAYOUT_INFLATER_SERVICE); } public int getCount() { return data.size(); } public Object getItem(int position) { return position; } public long getItemId(int position) { return position; } public View getView(int position, View convertView, ViewGroup parent) { View vi=convertView; if(convertView==null) vi = inflater.inflate(R.layout.third_row, null); TextView title = (TextView)vi.findViewById(R.id.headline3); // title SmartImageView iv = (SmartImageView) vi.findViewById(R.id.imageicon); HashMap<String, Object> song = new HashMap<String, Object>(); song = data.get(position); // Setting all values in listview title.setText((CharSequence) song.get(MainActivity.TAG_TITLE)); iv.setImageUrl((String) song.get("image_url")); thumb_image); return vi; } } Please help me. I am stuck at this for more than a week now. I think there is just something to be changed in my MainActivity class.
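
    A hedged sketch of one likely cause and fix, assuming the feed keeps the structure shown in the question: the row HashMap is created and added inside the "attachments" loop, so a post with no attachments never reaches contactList and a post with several attachments is added more than once. Building exactly one row per post avoids both the missing and the repeated titles:

    for (int i = 0; i < contacts.length(); i++) {
        JSONObject post = contacts.getJSONObject(i);
        String title = post.getString(TAG_TITLE).replace("&#8217;", "'");

        // Take the thumbnail of the first attachment if there is one;
        // posts without attachments simply get an empty image URL.
        String imgUrl = "";
        JSONArray attachments = post.optJSONArray("attachments");
        if (attachments != null && attachments.length() > 0) {
            JSONObject images = attachments.getJSONObject(0).getJSONObject("images");
            imgUrl = images.getJSONObject("thumbnail").getString("url");
        }

        HashMap<String, Object> contact = new HashMap<String, Object>();
        contact.put(TAG_TITLE, title);
        contact.put("image_url", imgUrl);
        contactList.add(contact);   // exactly one row per post
    }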


  • Get the current location from the GPS? It is showing the default one

    - by Gagandeep
    Need help Urgent!!!!! Did changes with help but still unsuccessful... I have to request location updates, but I am unsuccessful in implementing that... i modified the code but need help so that i can see the current location. PLEASE look through my code and help please.. I am learning this and new to this concept and android.. any help would be appreciated here is my code: package com.GoogleMaps; import java.util.List; import com.google.android.maps.GeoPoint; import com.google.android.maps.MapActivity; import com.google.android.maps.MapController; import com.google.android.maps.MapView; import com.google.android.maps.Overlay; import android.content.Context; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.graphics.Canvas; import android.graphics.Paint; import android.graphics.Point; import android.graphics.drawable.Drawable; import android.location.Location; import android.location.LocationListener; import android.location.LocationManager; import android.os.Bundle; import android.widget.Toast; public class MapsActivity extends MapActivity { /** Called when the activity is first created. */ private MapView mapView; private LocationManager lm; private LocationListener ll; private MapController mc; GeoPoint p = null; Drawable defaultMarker = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); mapView = (MapView)findViewById(R.id.mapview); //show zoom in/out buttons mapView.setBuiltInZoomControls(true); //Standard view of the map(map/sat) mapView.setSatellite(false); // get zoom tool mapView.setBuiltInZoomControls(true); //get controller of the map for zooming in/out mc = mapView.getController(); // Zoom Level mc.setZoom(18); lm = (LocationManager)getSystemService(Context.LOCATION_SERVICE); ll = new MyLocationListener(); lm.requestLocationUpdates( LocationManager.GPS_PROVIDER, 0, 0, ll); //Get the current location in start-up lm = (LocationManager)getSystemService(Context.LOCATION_SERVICE); ll = new MyLocationListener(); lm.requestLocationUpdates( LocationManager.GPS_PROVIDER, 0, 0, ll); //Get the current location in start-up if (lm.getLastKnownLocation(LocationManager.GPS_PROVIDER) != null){ GeoPoint p = new GeoPoint( (int)(lm.getLastKnownLocation(LocationManager.GPS_PROVIDER).getLatitude()*1000000), (int)(lm.getLastKnownLocation(LocationManager.GPS_PROVIDER).getLongitude()*1000000)); mc.animateTo(p); } MyLocationOverlay myLocationOverlay = new MyLocationOverlay(); List<Overlay> list = mapView.getOverlays(); list.add(myLocationOverlay); } protected class MyLocationOverlay extends com.google.android.maps.Overlay { @Override public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) { Paint paint = new Paint(); super.draw(canvas, mapView, shadow); GeoPoint p = null; // Converts lat/lng-Point to OUR coordinates on the screen. 
Point myScreenCoords = new Point(); mapView.getProjection().toPixels(p, myScreenCoords); paint.setStrokeWidth(1); paint.setARGB(255, 255, 255, 255); paint.setStyle(Paint.Style.STROKE); Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher); canvas.drawBitmap(bmp, myScreenCoords.x, myScreenCoords.y, paint); canvas.drawText("I am here...", myScreenCoords.x, myScreenCoords.y, paint); return true; } } private class MyLocationListener implements LocationListener{ public void onLocationChanged(Location argLocation) { // TODO Auto-generated method stub p = new GeoPoint((int)(argLocation.getLatitude()*1000000), (int)(argLocation.getLongitude()*1000000)); Toast.makeText(getBaseContext(), "New location latitude [" +argLocation.getLatitude() + "] longitude [" + argLocation.getLongitude()+"]", Toast.LENGTH_SHORT).show(); mc.animateTo(p); mapView.invalidate(); // call this so UI of map was updated } public void onProviderDisabled(String provider) { // TODO Auto-generated method stub } public void onProviderEnabled(String provider) { // TODO Auto-generated method stub } public void onStatusChanged(String provider, int status, Bundle extras) { // TODO Auto-generated method stub } } protected boolean isRouteDisplayed() { return false; } } catlog: 11-29 17:40:42.699: D/dalvikvm(371): GC_FOR_MALLOC freed 6074 objects / 369952 bytes in 74ms 11-29 17:40:42.970: I/MapActivity(371): Handling network change notification:CONNECTED 11-29 17:40:42.980: E/MapActivity(371): Couldn't get connection factory client 11-29 17:40:43.190: D/AndroidRuntime(371): Shutting down VM 11-29 17:40:43.190: W/dalvikvm(371): threadid=1: thread exiting with uncaught exception (group=0x4001d800) 11-29 17:40:43.280: E/AndroidRuntime(371): FATAL EXCEPTION: main 11-29 17:40:43.280: E/AndroidRuntime(371): java.lang.NullPointerException 11-29 17:40:43.280: E/AndroidRuntime(371): at com.google.android.maps.PixelConverter.toPixels(PixelConverter.java:71) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.google.android.maps.PixelConverter.toPixels(PixelConverter.java:61) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.GoogleMaps.MapsActivity$MyLocationOverlay.draw(MapsActivity.java:106) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.google.android.maps.OverlayBundle.draw(OverlayBundle.java:42) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.google.android.maps.MapView.onDraw(MapView.java:494) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.View.draw(View.java:6740) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.drawChild(ViewGroup.java:1640) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1367) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.drawChild(ViewGroup.java:1638) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1367) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.View.draw(View.java:6743) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.widget.FrameLayout.draw(FrameLayout.java:352) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.drawChild(ViewGroup.java:1640) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1367) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.drawChild(ViewGroup.java:1638) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1367) 11-29 17:40:43.280: E/AndroidRuntime(371): at 
android.view.View.draw(View.java:6743) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.widget.FrameLayout.draw(FrameLayout.java:352) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.android.internal.policy.impl.PhoneWindow$DecorView.draw(PhoneWindow.java:1842) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewRoot.draw(ViewRoot.java:1407) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewRoot.performTraversals(ViewRoot.java:1163) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.view.ViewRoot.handleMessage(ViewRoot.java:1727) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.os.Handler.dispatchMessage(Handler.java:99) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.os.Looper.loop(Looper.java:123) 11-29 17:40:43.280: E/AndroidRuntime(371): at android.app.ActivityThread.main(ActivityThread.java:4627) 11-29 17:40:43.280: E/AndroidRuntime(371): at java.lang.reflect.Method.invokeNative(Native Method) 11-29 17:40:43.280: E/AndroidRuntime(371): at java.lang.reflect.Method.invoke(Method.java:521) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 11-29 17:40:43.280: E/AndroidRuntime(371): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 11-29 17:40:43.280: E/AndroidRuntime(371): at dalvik.system.NativeStart.main(Native Method) 11-29 17:40:45.779: D/dalvikvm(371): GC_FOR_MALLOC freed 5970 objects / 506624 bytes in 1179ms 11-29 17:40:45.779: I/dalvikvm-heap(371): Grow heap (frag case) to 3.147MB for 17858-byte allocation 11-29 17:40:45.870: D/dalvikvm(371): GC_FOR_MALLOC freed 56 objects / 2304 bytes in 92ms 11-29 17:40:45.960: D/dalvikvm(371): GC_EXPLICIT freed 3459 objects / 196432 bytes in 74ms 11-29 17:40:48.310: D/dalvikvm(371): GC_EXPLICIT freed 116 objects / 41448 bytes in 68ms 11-29 17:40:49.540: I/Process(371): Sending signal. PID: 371 SIG: 9
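
    A minimal sketch of the likely crash fix; names follow the question's code. The local variable GeoPoint p = null declared inside draw() shadows the activity field p, so mapView.getProjection().toPixels(null, ...) throws the NullPointerException reported at MapsActivity.java:106. Drawing only once a fix has arrived avoids it:

    @Override
    public boolean draw(Canvas canvas, MapView mapView, boolean shadow, long when) {
        super.draw(canvas, mapView, shadow);
        if (p == null) {
            // No location received yet, so there is nothing to project or draw.
            return true;
        }
        Point myScreenCoords = new Point();
        mapView.getProjection().toPixels(p, myScreenCoords);   // uses the field, not a local null
        Paint paint = new Paint();
        paint.setARGB(255, 255, 255, 255);
        Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher);
        canvas.drawBitmap(bmp, myScreenCoords.x, myScreenCoords.y, paint);
        canvas.drawText("I am here...", myScreenCoords.x, myScreenCoords.y, paint);
        return true;
    }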


  • Collision problems with a drag-and-drop puzzle game

    - by Amplify91
    I am working on an Android game similar to the Rush Hour/Traffic Jam/Blocked puzzle games. The board is a square containing rectangular pieces. Long pieces may only move horizontally, and tall pieces may only move vertically. The object is to free the red piece and move it out of the board. This game is only my second ever programming project in any language, so any tips or best practices would be appreciated along with your answer. I have a class for the game pieces called Pieces that describes how they are sized and drawn to the screen, gives them drag-and-drop functionality, and detects and handles collisions. I then have an activity class called GameView which creates my layout and creates Pieces objects to add to a RelativeLayout called Board. I have considered making Board its own class, but haven't needed to yet. Here's what my work in progress looks like: My Question: Most of this works perfectly fine except for my collision handling. It seems to be detecting collisions well but instead of pushing the pieces outside of each other when there is a collision, it frantically snaps back and forth between (what seems to be) where the piece is being dragged to and where it should be. It looks something like this: Another oddity: when the dragged piece collides with a piece to its left, the collision handling seems to work perfectly. Only piece above, below, and to the right cause problems. Here's the collision code: @Override public boolean onTouchEvent(MotionEvent event){ float eventX = event.getX(); float eventY = event.getY(); switch (event.getAction()) { case MotionEvent.ACTION_DOWN: //check if touch is on piece if (eventX > x && eventX < (x+width) && eventY > y && eventY < (y+height)){ initialX=x; initialY=y; break; }else{ return false; } case MotionEvent.ACTION_MOVE: //determine if piece should move horizontally or vertically if(width>height){ for (Pieces piece : aPieces) { //if object equals itself in array, skip to next object if(piece==this){ continue; } //if next to another piece, //do not allow to move any further towards said piece if(eventX<x&&(x==piece.right+1)){ return false; }else if(eventX>x&&(x==piece.x-width-1)){ return false; } //move normally if no collision //if collision, do not allow to move through other piece if(collides(this,piece)==false){ x = (eventX-(width/2)); }else if(collidesLeft(this,piece)){ x = piece.right+1; break; }else if(collidesRight(this,piece)){ x = piece.x-width-1; break; } } break; }else if(height>width){ for (Pieces piece : aPieces) { if(piece==this){ continue; }else if(collides(this,piece)==false){ y = (eventY-(height/2)); }else if(collidesUp(this,piece)){ y = piece.bottom+1; break; }else if(collidesDown(this,piece)){ y = piece.y-height-1; break; } } } invalidate(); break; case MotionEvent.ACTION_UP: // end move if(this.moves()){ GameView.counter++; } initialX=x; initialY=y; break; } // parse puzzle invalidate(); return true; } This takes place during onDraw: width = sizedBitmap.getWidth(); height = sizedBitmap.getHeight(); right = x+width; bottom = y+height; My collision-test methods look like this with different math for each: private boolean collidesDown(Pieces piece1, Pieces piece2){ float x1 = piece1.x; float y1 = piece1.y; float r1 = piece1.right; float b1 = piece1.bottom; float x2 = piece2.x; float y2 = piece2.y; float r2 = piece2.right; float b2 = piece2.bottom; if((y1<y2)&&(y1<b2)&&(b1>=y2)&&(b1<b2)&&((x1>=x2&&x1<=r2)||(r1>=x2&&x1<=r2))){ return true; }else{ return false; } } private boolean collides(Pieces piece1, Pieces 
piece2){ if(collidesLeft(piece1,piece2)){ return true; }else if(collidesRight(piece1,piece2)){ return true; }else if(collidesUp(piece1,piece2)){ return true; }else if(collidesDown(piece1,piece2)){ return true; }else{ return false; } } As a second question, should my x,y,right,bottom,width,height variables be ints instead of floats like they are now? Also, any suggestions on how to implement things better would be greatly appreciated, even if not relevant to the question! Thanks in advance for the help and for sitting through such a long question! Update: I have gotten it working almost perfectly with the following code (this doesn't include the code for vertical pieces): @Override public boolean onTouchEvent(MotionEvent event){ float eventX = event.getX(); float eventY = event.getY(); switch (event.getAction()) { case MotionEvent.ACTION_DOWN: //check if touch is on piece if (eventX > x && eventX < (x+width) && eventY > y && eventY < (y+height)){ initialX=x; initialY=y; break; }else{ return false; } case MotionEvent.ACTION_MOVE: //determine if piece should move horizontally or vertically if(width>height){ for (Pieces piece : aPieces) { //if object equals itself in array, skip to next object if(piece==this){ continue; } //check if there the possibility for a horizontal collision if(this.isAllignedHorizontallyWith(piece)){ //check for and handle collisions while moving left if(this.isRightOf(piece)){ if(eventX>piece.right+(width/2)){ x = (int)(eventX-(width/2)); //move normally }else{ x = piece.right+1; } } //check for and handle collisions while moving right if(this.isLeftOf(piece)){ if(eventX<piece.x-(width/2)){ x = (int)(eventX-(width/2)); }else{ x = piece.x-width-1; } } break; }else{ x = (int)(eventX-(width/2)); } The only problem with this code is that it only detects collisions between the moving piece and one other (with preference to one on the left). If there is a piece to collide with on the left and another on the right, it will only detect collisions with the one on the left. I think this is because once it finds a possible collision, it handles it without finishing looping through the array holding all the pieces. How do I get it to check for multiple possible collisions at the same time?
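
    A hedged sketch of one way to respect every neighbour in a single pass; method and field names follow the updated code above, and it is a starting point rather than a tested fix. Compute the desired position first, then clamp it against every horizontally aligned piece before assigning it, instead of breaking out of the loop on the first hit:

    if (width > height) {                            // a horizontally moving piece
        int targetX = (int) (eventX - (width / 2));  // where the drag wants to go
        for (Pieces piece : aPieces) {
            if (piece == this || !this.isAllignedHorizontallyWith(piece)) {
                continue;
            }
            if (this.isRightOf(piece)) {
                // piece is a left-hand neighbour: never move past its right edge
                targetX = Math.max(targetX, (int) piece.right + 1);
            } else if (this.isLeftOf(piece)) {
                // piece is a right-hand neighbour: never move past its left edge
                targetX = Math.min(targetX, (int) (piece.x - width - 1));
            }
        }
        x = targetX;   // clamped against every neighbour, not just the first one found
    }

    The vertical case is the mirror image using y, height, and the vertical alignment check.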


  • While uploading an image I get errors in LogCat

    - by al0ne evenings
    I am uploading image and also sending some parameters with it. I am getting errors in my logcat while doing so. Here is my code public class Camera extends Activity { ImageView ivUserImage; Button bUpload; Intent i; int CameraResult = 0; Bitmap bmp; public String globalUID; UserFunctions userFunctions; String photoName; InputStream is; int serverResponseCode = 0; ProgressDialog dialog = null; @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.camera); userFunctions = new UserFunctions(); globalUID = getIntent().getExtras().getString("globalUID"); Toast.makeText(getApplicationContext(), globalUID, Toast.LENGTH_LONG).show(); ivUserImage = (ImageView)findViewById(R.id.ivUserImage); bUpload = (Button)findViewById(R.id.bUpload); openCamera(); } private void openCamera() { i = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE); startActivityForResult(i, CameraResult); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if(resultCode == RESULT_OK) { //set image taken from camera on ImageView Bundle extras = data.getExtras(); bmp = (Bitmap) extras.get("data"); ivUserImage.setImageBitmap(bmp); //Create new Cursor to obtain the file Path for the large image String[] largeFileProjection = { MediaStore.Images.ImageColumns._ID, MediaStore.Images.ImageColumns.DATA }; String largeFileSort = MediaStore.Images.ImageColumns._ID + " DESC"; //final String photoName = MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME; Cursor myCursor = this.managedQuery(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, largeFileProjection, null, null, largeFileSort); try { myCursor.moveToFirst(); //This will actually give you the file path location of the image. 
final String largeImagePath = myCursor.getString(myCursor.getColumnIndexOrThrow(MediaStore.Images.ImageColumns.DATA)); //final String photoName = myCursor.getString(myCursor.getColumnIndexOrThrow(MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME)); //Log.e("URI", largeImagePath); File f = new File("" + largeImagePath); photoName = f.getName(); bUpload.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { dialog = ProgressDialog.show(Camera.this, "", "Uploading file...", true); new Thread(new Runnable() { public void run() { runOnUiThread(new Runnable() { public void run() { //tv.setText("uploading started....."); Toast.makeText(getBaseContext(), "File is uploading...", Toast.LENGTH_LONG).show(); } }); if(imageUpload(globalUID, largeImagePath)) { Toast.makeText(getApplicationContext(), "Image uploaded", Toast.LENGTH_LONG).show(); } else { Toast.makeText(getApplicationContext(), "Error uploading image", Toast.LENGTH_LONG).show(); } //Log.e("Response: ", response); //System.out.println("RES : " + response); } }).start(); } }); } finally { myCursor.close(); } //Uri uriLargeImage = Uri.withAppendedPath(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, String.valueOf(imageId)); //String imageUrl = getRealPathFromURI(uriLargeImage); } } public boolean imageUpload(String uid, String imagepath) { boolean success = false; //Bitmap bitmapOrg = BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher); Bitmap bitmapOrg = BitmapFactory.decodeFile(imagepath); ByteArrayOutputStream bao = new ByteArrayOutputStream(); bitmapOrg.compress(Bitmap.CompressFormat.JPEG, 90, bao); byte [] ba = bao.toByteArray(); String ba1=Base64.encodeToString(ba, 0); ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(); nameValuePairs.add(new BasicNameValuePair("image",ba1)); nameValuePairs.add(new BasicNameValuePair("uid", uid)); try { HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost("http://www.example.info/androidfileupload/index.php"); httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs)); HttpResponse response = httpclient.execute(httppost); HttpEntity entity = response.getEntity(); if(response != null) { success = true; } is = entity.getContent(); } catch(Exception e) { Log.e("log_tag", "Error in http connection "+e.toString()); } dialog.dismiss(); return success; } Here is my logcat 06-23 11:54:48.555: E/AndroidRuntime(30601): FATAL EXCEPTION: Thread-11 06-23 11:54:48.555: E/AndroidRuntime(30601): java.lang.OutOfMemoryError 06-23 11:54:48.555: E/AndroidRuntime(30601): at java.lang.String.<init>(String.java:513) 06-23 11:54:48.555: E/AndroidRuntime(30601): at java.lang.AbstractStringBuilder.toString(AbstractStringBuilder.java:650) 06-23 11:54:48.555: E/AndroidRuntime(30601): at java.lang.StringBuilder.toString(StringBuilder.java:664) 06-23 11:54:48.555: E/AndroidRuntime(30601): at org.apache.http.client.utils.URLEncodedUtils.format(URLEncodedUtils.java:170) 06-23 11:54:48.555: E/AndroidRuntime(30601): at org.apache.http.client.entity.UrlEncodedFormEntity.<init>(UrlEncodedFormEntity.java:71) 06-23 11:54:48.555: E/AndroidRuntime(30601): at com.zafar.login.Camera.imageUpload(Camera.java:155) 06-23 11:54:48.555: E/AndroidRuntime(30601): at com.zafar.login.Camera$1$1.run(Camera.java:119) 06-23 11:54:48.555: E/AndroidRuntime(30601): at java.lang.Thread.run(Thread.java:1019) 06-23 11:54:53.960: E/WindowManager(30601): Activity com.zafar.login.DashboardActivity has leaked window 
com.android.internal.policy.impl.PhoneWindow$DecorView@405db540 that was originally added here 06-23 11:54:53.960: E/WindowManager(30601): android.view.WindowLeaked: Activity com.zafar.login.DashboardActivity has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView@405db540 that was originally added here 06-23 11:54:53.960: E/WindowManager(30601): at android.view.ViewRoot.<init>(ViewRoot.java:266) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:174) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:117) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.Window$LocalWindowManager.addView(Window.java:424) 06-23 11:54:53.960: E/WindowManager(30601): at android.app.Dialog.show(Dialog.java:241) 06-23 11:54:53.960: E/WindowManager(30601): at android.app.ProgressDialog.show(ProgressDialog.java:107) 06-23 11:54:53.960: E/WindowManager(30601): at android.app.ProgressDialog.show(ProgressDialog.java:90) 06-23 11:54:53.960: E/WindowManager(30601): at com.zafar.login.Camera$1.onClick(Camera.java:110) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.View.performClick(View.java:2538) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.View$PerformClick.run(View.java:9152) 06-23 11:54:53.960: E/WindowManager(30601): at android.os.Handler.handleCallback(Handler.java:587) 06-23 11:54:53.960: E/WindowManager(30601): at android.os.Handler.dispatchMessage(Handler.java:92) 06-23 11:54:53.960: E/WindowManager(30601): at android.os.Looper.loop(Looper.java:130) 06-23 11:54:53.960: E/WindowManager(30601): at android.app.ActivityThread.main(ActivityThread.java:3691) 06-23 11:54:53.960: E/WindowManager(30601): at java.lang.reflect.Method.invokeNative(Native Method) 06-23 11:54:53.960: E/WindowManager(30601): at java.lang.reflect.Method.invoke(Method.java:507) 06-23 11:54:53.960: E/WindowManager(30601): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907) 06-23 11:54:53.960: E/WindowManager(30601): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) 06-23 11:54:53.960: E/WindowManager(30601): at dalvik.system.NativeStart.main(Native Method) 06-23 11:54:55.190: I/Process(30601): Sending signal. PID: 30601 SIG: 9
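
    A minimal sketch of one common way to avoid the OutOfMemoryError; the 1024-pixel target width is an arbitrary assumption, not something from the question. imageUpload() decodes the full-resolution camera file and then Base64-encodes it into a single String, which is what exhausts the heap. Downsampling before compressing keeps the request body small:

    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(imagepath, opts);        // read the dimensions only
    int sample = 1;
    while ((opts.outWidth / sample) > 1024) {         // assumed target width
        sample *= 2;
    }
    opts.inJustDecodeBounds = false;
    opts.inSampleSize = sample;                       // decode a scaled-down bitmap
    Bitmap scaled = BitmapFactory.decodeFile(imagepath, opts);

    ByteArrayOutputStream bao = new ByteArrayOutputStream();
    scaled.compress(Bitmap.CompressFormat.JPEG, 75, bao);
    String ba1 = Base64.encodeToString(bao.toByteArray(), Base64.DEFAULT);

    The success and failure Toasts are also shown from the background thread; posting them back with runOnUiThread() avoids further trouble there.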


  • How to display bitmaps in a ListView?

    - by mary
    hi i want to show images downloaded in listview.images downloaded with function DownloadImage and are as bitmap.how to show in listview . name photoes with book_id in tabel book are aqual.i want each book has its own image. i can show in listview book_name and book_price just the problem with image book please help me class: package bookstore.category; import java.io.IOException; import java.io.InputStream; import java.net.HttpURLConnection; import java.net.URL; import java.net.URLConnection; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import org.apache.http.NameValuePair; import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.graphics.Typeface; import android.os.AsyncTask; import android.os.Bundle; import android.util.Log; import android.widget.ImageView; import android.widget.ListAdapter; import android.widget.ListView; import android.widget.SimpleAdapter; import bookstore.pack.JSONParser; import bookstore.pack.R; import android.app.Activity; import android.app.ProgressDialog; public class Computer extends Activity { Bitmap bm = null; // progress dialog private ProgressDialog pDialog; // Creating JSON Parser object JSONParser jParser = new JSONParser(); ArrayList<HashMap<String, String>> computerBookList; private static String url_books = "http://10.0.2.2/project/computer.php"; // JSON Node names private static final String TAG_SUCCESS = "success"; private static final String TAG_BOOK = "book"; private static final String TAG_BOOK_NAME = "book_name"; private static final String TAG_BOOK_PRICE = "book_price"; private static final String TAG_BOOK_ID = "book_id"; private static final String TAG_MESSAGE = "massage"; // category JSONArray JSONArray book = null; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.category); Typeface font1 = Typeface.createFromAsset(getAssets(), "font/bnazanin.TTF"); // Hashmap for ListView computerBookList = new ArrayList<HashMap<String, String>>(); new LoadBook().execute(); } class LoadBook extends AsyncTask<String, String, String> { /** * Before starting background thread Show Progress Dialog * */ @Override protected void onPreExecute() { super.onPreExecute(); pDialog = new ProgressDialog(Computer.this); pDialog.setMessage("Please wait..."); pDialog.setIndeterminate(false); pDialog.setCancelable(false); pDialog.show(); } protected String doInBackground(String... 
args) { // Building Parameters List<NameValuePair> params = new ArrayList<NameValuePair>(); // getting JSON string from URL JSONObject json = jParser.makeHttpRequest(url_books, "GET", params); // Check your log cat for JSON reponse Log.d("book:", json.toString()); try { // Checking for SUCCESS TAG int success = json.getInt(TAG_SUCCESS); if (success == 1) { DownloadImage("10.0.2.2/project/images/100.png"); DownloadImage("10.0.2.2/project/images/101.png"); DownloadImage("10.0.2.2/project/images/102.png"); DownloadImage("10.0.2.2/project/images/103.png"); DownloadImage("10.0.2.2/project/images/104.png"); DownloadImage("10.0.2.2/project/images/105.png"); DownloadImage("10.0.2.2/project/images/106.png"); DownloadImage("10.0.2.2/project/images/107.png"); DownloadImage("10.0.2.2/project/images/108.png"); DownloadImage("10.0.2.2/project/images/109.png"); DownloadImage("10.0.2.2/project/images/110.png"); // books found book = json.getJSONArray(TAG_BOOK); for (int i = 0; i < book.length(); i++) { JSONObject c = book.getJSONObject(i); // Storing each json item in variable String book_name = c.getString(TAG_BOOK_NAME); String book_price = c.getString(TAG_BOOK_PRICE); String book_id = c.getString(TAG_BOOK_ID); // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); // adding each child node to HashMap key => value map.put(TAG_BOOK_NAME, book_name); map.put(TAG_BOOK_PRICE, book_price); // map.put(TAG_AUTHOR_NAME, author_name); // adding HashList to ArrayList computerBookList.add(map); } return json.getString(TAG_MESSAGE); } else { System.out.println("no book found"); } } catch (JSONException e) { e.printStackTrace(); } return null; } /** * After completing background task Dismiss the progress dialog * **/ protected void onPostExecute(String file_url) { pDialog.dismiss(); // updating UI from Background Thread runOnUiThread(new Runnable() { ListView view1 = (ListView) findViewById(R.id.list_view); public void run() { ImageView iv = (ImageView) findViewById(R.id.list_image); // bm=BitmapFactory.decodeResource(getResources(), resId); //bm=BitmapFactory.decodeResource(null,R.id.list_image); // iv.setImageBitmap(bm); /* * */ /** * Updating parsed JSON data into ListView * */ ListAdapter adapter = new SimpleAdapter(Computer.this, computerBookList, R.layout.search_item, new String[] { TAG_BOOK_NAME, TAG_BOOK_PRICE }, new int[] { R.id.book_name, R.id.book_price }); view1.setAdapter(adapter); } }); } } private Bitmap DownloadImage(String URL) { Bitmap bitmap = null; InputStream in = null; try { in = OpenHttpConnection(URL); bitmap = BitmapFactory.decodeStream(in); in.close(); } catch (IOException e1) { e1.printStackTrace(); } return bitmap; } private InputStream OpenHttpConnection(String urlString) throws IOException { InputStream in = null; int response = -1; URL url = new URL(urlString); URLConnection conn = url.openConnection(); if (!(conn instanceof HttpURLConnection)) throw new IOException("Not an HTTP connection"); try { HttpURLConnection httpConn = (HttpURLConnection) conn; httpConn.setAllowUserInteraction(false); httpConn.setInstanceFollowRedirects(true); httpConn.setRequestMethod("GET"); httpConn.connect(); response = httpConn.getResponseCode(); if (response == HttpURLConnection.HTTP_OK) { in = httpConn.getInputStream(); } } catch (Exception ex) { throw new IOException("Error connecting"); } return in; } }
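
    A hedged sketch of one way to get the bitmaps into the list rows. Assumptions not stated in the question: each book's thumbnail is reachable at "http://10.0.2.2/project/images/" + book_id + ".png" (the numbered file names suggest they match book ids), the row map becomes HashMap<String, Object>, and R.id.list_image is the ImageView in search_item. Store the Bitmap in the same map as the title and price, and let a SimpleAdapter.ViewBinder bind it, since SimpleAdapter by itself only handles text and resource ids:

    // In doInBackground(), while building each row:
    map.put("image", DownloadImage("http://10.0.2.2/project/images/" + book_id + ".png"));

    // In onPostExecute(), on the UI thread:
    SimpleAdapter adapter = new SimpleAdapter(Computer.this, computerBookList,
            R.layout.search_item,
            new String[] { TAG_BOOK_NAME, TAG_BOOK_PRICE, "image" },
            new int[] { R.id.book_name, R.id.book_price, R.id.list_image });
    adapter.setViewBinder(new SimpleAdapter.ViewBinder() {
        public boolean setViewValue(View view, Object data, String textRepresentation) {
            if (view instanceof ImageView && data instanceof Bitmap) {
                ((ImageView) view).setImageBitmap((Bitmap) data);
                return true;    // handled here; SimpleAdapter leaves this view alone
            }
            return false;       // let SimpleAdapter bind the text fields as usual
        }
    });
    view1.setAdapter(adapter);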


  • Uploading multiple files using Spring MVC 3.0.2 after HiddenHttpMethodFilter has been enabled

    - by Tiny
    I'm using Spring version 3.0.2. I need to upload multiple files using the multiple="multiple" attribute of a file browser such as, <input type="file" id="myFile" name="myFile" multiple="multiple"/> (and not using multiple file browsers something like the one stated by this answer, it indeed works I tried). Although no versions of Internet Explorer supports this approach unless an appropriate jQuery plugin/widget is used, I don't care about it right now (since most other browsers support this). This works fine with commons fileupload but in addition to using RequestMethod.POST and RequestMethod.GET methods, I also want to use other request methods supported and suggested by Spring like RequestMethod.PUT and RequestMethod.DELETE in their own appropriate places. For this to be so, I have configured Spring with HiddenHttpMethodFilter which goes fine as this question indicates. but it can upload only one file at a time even though multiple files in the file browser are chosen. In the Spring controller class, a method is mapped as follows. @RequestMapping(method={RequestMethod.POST}, value={"admin_side/Temp"}) public String onSubmit(@RequestParam("myFile") List<MultipartFile> files, @ModelAttribute("tempBean") TempBean tempBean, BindingResult error, Map model, HttpServletRequest request, HttpServletResponse response) throws IOException, FileUploadException { for(MultipartFile file:files) { System.out.println(file.getOriginalFilename()); } } Even with the request parameter @RequestParam("myFile") List<MultipartFile> files which is a List of type MultipartFile (it can always have only one file at a time). I could find a strategy which is likely to work with multiple files on this blog. I have gone through it carefully. The solution below the section SOLUTION 2 – USE THE RAW REQUEST says, If however the client insists on using the same form input name such as ‘files[]‘ or ‘files’ and then populating that name with multiple files then a small hack is necessary as follows. As noted above Spring 2.5 throws an exception if it detects the same form input name of type file more than once. CommonsFileUploadSupport – the class which throws that exception is not final and the method which throws that exception is protected so using the wonders of inheritance and subclassing one can simply fix/modify the logic a little bit as follows. The change I’ve made is literally one word representing one method invocation which enables us to have multiple files incoming under the same form input name. 
It attempts to override the method protected MultipartParsingResult parseFileItems(List fileItems, String encoding) {} of the abstract class CommonsFileUploadSupport by extending the class CommonsMultipartResolver such as, package multipartResolver; import java.io.UnsupportedEncodingException; import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; import javax.servlet.ServletContext; import org.apache.commons.fileupload.FileItem; import org.springframework.util.StringUtils; import org.springframework.web.multipart.MultipartException; import org.springframework.web.multipart.MultipartFile; import org.springframework.web.multipart.commons.CommonsMultipartFile; import org.springframework.web.multipart.commons.CommonsMultipartResolver; final public class MultiCommonsMultipartResolver extends CommonsMultipartResolver { public MultiCommonsMultipartResolver() { } public MultiCommonsMultipartResolver(ServletContext servletContext) { super(servletContext); } @Override @SuppressWarnings("unchecked") protected MultipartParsingResult parseFileItems(List fileItems, String encoding) { Map<String, MultipartFile> multipartFiles = new HashMap<String, MultipartFile>(); Map multipartParameters = new HashMap(); // Extract multipart files and multipart parameters. for (Iterator it = fileItems.iterator(); it.hasNext();) { FileItem fileItem = (FileItem) it.next(); if (fileItem.isFormField()) { String value = null; if (encoding != null) { try { value = fileItem.getString(encoding); } catch (UnsupportedEncodingException ex) { if (logger.isWarnEnabled()) { logger.warn("Could not decode multipart item '" + fileItem.getFieldName() + "' with encoding '" + encoding + "': using platform default"); } value = fileItem.getString(); } } else { value = fileItem.getString(); } String[] curParam = (String[]) multipartParameters.get(fileItem.getFieldName()); if (curParam == null) { // simple form field multipartParameters.put(fileItem.getFieldName(), new String[] { value }); } else { // array of simple form fields String[] newParam = StringUtils.addStringToArray(curParam, value); multipartParameters.put(fileItem.getFieldName(), newParam); } } else { // multipart file field CommonsMultipartFile file = new CommonsMultipartFile(fileItem); if (multipartFiles.put(fileItem.getName(), file) != null) { throw new MultipartException("Multiple files for field name [" + file.getName() + "] found - not supported by MultipartResolver"); } if (logger.isDebugEnabled()) { logger.debug("Found multipart file [" + file.getName() + "] of size " + file.getSize() + " bytes with original filename [" + file.getOriginalFilename() + "], stored " + file.getStorageDescription()); } } } return new MultipartParsingResult(multipartFiles, multipartParameters); } } What happens is that the last line in the method parseFileItems() (the return statement) i.e. 
return new MultipartParsingResult(multipartFiles, multipartParameters); causes a compile-time error because the first parameter multipartFiles is a type of Map implemented by HashMap but in reality, it requires a parameter of type MultiValueMap<String, MultipartFile> It is a constructor of a static class inside the abstract class CommonsFileUploadSupport, public abstract class CommonsFileUploadSupport { protected static class MultipartParsingResult { public MultipartParsingResult(MultiValueMap<String, MultipartFile> mpFiles, Map<String, String[]> mpParams) { } } } The reason might be - this solution is about the Spring version 2.5 and I'm using the Spring version 3.0.2 which might be inappropriate for this version. I however tried to replace the Map with MultiValueMap in various ways such as the one shown in the following segment of code, MultiValueMap<String, MultipartFile>mul=new LinkedMultiValueMap<String, MultipartFile>(); for(Entry<String, MultipartFile>entry:multipartFiles.entrySet()) { mul.add(entry.getKey(), entry.getValue()); } return new MultipartParsingResult(mul, multipartParameters); but no success. I'm not sure how to replace Map with MultiValueMap and even doing so could work either. After doing this, the browser shows the Http response, HTTP Status 400 - type Status report message description The request sent by the client was syntactically incorrect (). Apache Tomcat/6.0.26 I have tried to shorten the question as possible as I could and I haven't included unnecessary code. How could be made it possible to upload multiple files after Spring has been configured with HiddenHttpMethodFilter? That blog indicates that It is a long standing, high priority bug. If there is no solution regarding the version 3.0.2 (3 or higher) then I have to disable Spring support forever and continue to use commons-fileupolad as suggested by the third solution on that blog omitting the PUT, DELETE and other request methods forever. Just curiously waiting for a solution and/or suggestion. Very little changes to the code in the parseFileItems() method inside the class MultiCommonsMultipartResolver might make it to upload multiple files but I couldn't succeed in my attempts (again with the Spring version 3.0.2 (3 or higher)).
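
    A hedged, untested sketch of how the overridden parseFileItems() might line up with the Spring 3.x signature (the simple-form-field branch is left out here and stays exactly as in the question): collect the files in a MultiValueMap keyed by field name and use add() rather than put(), so several files arriving under one input name no longer trip the "not supported" exception. It only addresses the constructor mismatch described above, so the HTTP 400 may well have a different cause:

    MultiValueMap<String, MultipartFile> multipartFiles =
            new LinkedMultiValueMap<String, MultipartFile>();
    Map<String, String[]> multipartParameters = new HashMap<String, String[]>();

    for (Object item : fileItems) {
        FileItem fileItem = (FileItem) item;
        if (fileItem.isFormField()) {
            // ... unchanged simple-form-field handling from the question ...
        } else {
            CommonsMultipartFile file = new CommonsMultipartFile(fileItem);
            // add() appends to a per-key List, so repeated field names are all kept.
            multipartFiles.add(file.getName(), file);
        }
    }
    return new MultipartParsingResult(multipartFiles, multipartParameters);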


  • How to save/retrieve words to/from SQlite database?

    - by user998032
    Sorry if I repeat my question but I have still had no clues of what to do and how to deal with the question. My app is a dictionary. I assume that users will need to add words that they want to memorise to a Favourite list. Thus, I created a Favorite button that works on two phases: short-click to save the currently-view word into the Favourite list; and long-click to view the Favourite list so that users can click on any words to look them up again. I go for using a SQlite database to store the favourite words but I wonder how I can do this task. Specifically, my questions are: Should I use the current dictionary SQLite database or create a new SQLite database to favorite words? In each case, what codes do I have to write to cope with the mentioned task? Could anyone there kindly help? Here is the dictionary code: package mydict.app; import java.util.ArrayList; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteException; import android.util.Log; public class DictionaryEngine { static final private String SQL_TAG = "[MyAppName - DictionaryEngine]"; private SQLiteDatabase mDB = null; private String mDBName; private String mDBPath; //private String mDBExtension; public ArrayList<String> lstCurrentWord = null; public ArrayList<String> lstCurrentContent = null; //public ArrayAdapter<String> adapter = null; public DictionaryEngine() { lstCurrentContent = new ArrayList<String>(); lstCurrentWord = new ArrayList<String>(); } public DictionaryEngine(String basePath, String dbName, String dbExtension) { //mDBExtension = getResources().getString(R.string.dbExtension); //mDBExtension = dbExtension; lstCurrentContent = new ArrayList<String>(); lstCurrentWord = new ArrayList<String>(); this.setDatabaseFile(basePath, dbName, dbExtension); } public boolean setDatabaseFile(String basePath, String dbName, String dbExtension) { if (mDB != null) { if (mDB.isOpen() == true) // Database is already opened { if (basePath.equals(mDBPath) && dbName.equals(mDBName)) // the opened database has the same name and path -> do nothing { Log.i(SQL_TAG, "Database is already opened!"); return true; } else { mDB.close(); } } } String fullDbPath=""; try { fullDbPath = basePath + dbName + "/" + dbName + dbExtension; mDB = SQLiteDatabase.openDatabase(fullDbPath, null, SQLiteDatabase.OPEN_READWRITE|SQLiteDatabase.NO_LOCALIZED_COLLATORS); } catch (SQLiteException ex) { ex.printStackTrace(); Log.i(SQL_TAG, "There is no valid dictionary database " + dbName +" at path " + basePath); return false; } if (mDB == null) { return false; } this.mDBName = dbName; this.mDBPath = basePath; Log.i(SQL_TAG,"Database " + dbName + " is opened!"); return true; } public void getWordList(String word) { String query; // encode input String wordEncode = Utility.encodeContent(word); if (word.equals("") || word == null) { query = "SELECT id,word FROM " + mDBName + " LIMIT 0,15" ; } else { query = "SELECT id,word FROM " + mDBName + " WHERE word >= '"+wordEncode+"' LIMIT 0,15"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); int indexWordColumn = result.getColumnIndex("Word"); int indexContentColumn = result.getColumnIndex("Content"); if (result != null) { int countRow=result.getCount(); Log.i(SQL_TAG, "countRow = " + countRow); lstCurrentWord.clear(); lstCurrentContent.clear(); if (countRow >= 1) { result.moveToFirst(); String strWord = Utility.decodeContent(result.getString(indexWordColumn)); String strContent = 
Utility.decodeContent(result.getString(indexContentColumn)); lstCurrentWord.add(0,strWord); lstCurrentContent.add(0,strContent); int i = 0; while (result.moveToNext()) { strWord = Utility.decodeContent(result.getString(indexWordColumn)); strContent = Utility.decodeContent(result.getString(indexContentColumn)); lstCurrentWord.add(i,strWord); lstCurrentContent.add(i,strContent); i++; } } result.close(); } } public Cursor getCursorWordList(String word) { String query; // encode input String wordEncode = Utility.encodeContent(word); if (word.equals("") || word == null) { query = "SELECT id,word FROM " + mDBName + " LIMIT 0,15" ; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE word >= '"+wordEncode+"' LIMIT 0,15"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public Cursor getCursorContentFromId(int wordId) { String query; // encode input if (wordId <= 0) { return null; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE Id = " + wordId ; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public Cursor getCursorContentFromWord(String word) { String query; // encode input if (word == null || word.equals("")) { return null; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE word = '" + word + "' LIMIT 0,1"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public void closeDatabase() { mDB.close(); } public boolean isOpen() { return mDB.isOpen(); } public boolean isReadOnly() { return mDB.isReadOnly(); } } And here is the code below the Favourite button to save to and load the Favourite list: btnAddFavourite = (ImageButton) findViewById(R.id.btnAddFavourite); btnAddFavourite.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Add code here to save the favourite, e.g. in the db. Toast toast = Toast.makeText(ContentView.this, R.string.messageWordAddedToFarvourite, Toast.LENGTH_SHORT); toast.show(); } }); btnAddFavourite.setOnLongClickListener(new View.OnLongClickListener() { @Override public boolean onLongClick(View v) { // Open the favourite Activity, which in turn will fetch the saved favourites, to show them. Intent intent = new Intent(getApplicationContext(), FavViewFavourite.class); intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); getApplicationContext().startActivity(intent); return false; } }); }
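
    A minimal sketch of one way to do it inside the existing DictionaryEngine, keeping the favourites in the same database file the dictionary already opens; the favourite table and its column names are invented for illustration. One method saves the currently viewed word on the short click, the other returns the saved list for the long-click screen:

    // requires: import android.content.ContentValues;
    public void createFavouriteTable() {
        mDB.execSQL("CREATE TABLE IF NOT EXISTS favourite ("
                + "id INTEGER PRIMARY KEY AUTOINCREMENT, word TEXT UNIQUE)");
    }

    public void addFavourite(String word) {
        ContentValues values = new ContentValues();
        values.put("word", word);
        // Ignore the insert if the word has already been marked as a favourite.
        mDB.insertWithOnConflict("favourite", null, values, SQLiteDatabase.CONFLICT_IGNORE);
    }

    public ArrayList<String> getFavourites() {
        ArrayList<String> favourites = new ArrayList<String>();
        Cursor c = mDB.rawQuery("SELECT word FROM favourite ORDER BY word", null);
        while (c.moveToNext()) {
            favourites.add(c.getString(0));
        }
        c.close();
        return favourites;
    }

    A separate table in the same file keeps the favourites apart from the dictionary data without managing a second database connection; addFavourite() is what the short-click handler would call before showing its Toast, and the FavViewFavourite screen can list whatever getFavourites() returns.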


  • Executing legacy MSBuild scripts in TFS 2010 Build

    - by Jakob Ehn
    When upgrading from TFS 2008 to TFS 2010, all builds are “upgraded” in the sense that a build definition with the same name is created, and it uses the UpgradeTemplate  build process template to execute the build. This template basically just runs MSBuild on the existing TFSBuild.proj file. The build definition contains a property called ConfigurationFolderPath that points to the TFSBuild.proj file. So, existing builds will run just fine after upgrade. But what if you want to use the new workflow functionality in TFS 2010 Build, but still have a lot of MSBuild scripts that maybe call custom MSBuild tasks that you don’t have the time to rewrite? Then one option is to keep these MSBuild scrips and call them from a TFS 2010 Build workflow. This can be done using the MSBuild workflow activity that is avaiable in the toolbox in the Team Foundation Build Activities section: This activity wraps the call to MSBuild.exe and has the following parameters: Most of these properties are only relevant when actually compiling projects, for example C# project files. When calling custom MSBuild project files, you should focus on these properties: Property Meaning Example CommandLineArguments Use this to send in/override MSBuild properties in your project “/p:MyProperty=SomeValue” or MSBuildArguments (this will let you define the arguments in the build definition or when queuing the build) LogFile Name of the log file where MSbuild will log the output “MyBuild.log” LogFileDropLocation Location of the log file BuildDetail.DropLocation + “\log” Project The project to execute SourcesDirectory + “\BuildExtensions.targets” ResponseFile The name of the MSBuild response file SourcesDirectory + “\BuildExtensions.rsp” Targets The target(s) to execute New String() {“Target1”, “Target2”} Verbosity Logging verbosity Microsoft.TeamFoundation.Build.Workflow.BuildVerbosity.Normal Integrating with Team Build   If your MSBuild scripts tries to use Team Build tasks, they will most likely fail with the above approach. For example, the following MSBuild project file tries to add a build step using the BuildStep task:   <?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" /> <Target Name="MyTarget"> <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Name="MyBuildStep" Message="My build step executed" Status="Succeeded"></BuildStep> </Target> </Project> When executing this file using the MSBuild activity, calling the MyTarget, it will fail with the following message: The "Microsoft.TeamFoundation.Build.Tasks.BuildStep" task could not be loaded from the assembly \PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll. Could not load file or assembly 'file:///D:\PrivateAssemblies\Microsoft.TeamFoundation.Build.ProcessComponents.dll' or one of its dependencies. The system cannot find the file specified. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. You can see that the path to the ProcessComponents.dll is incomplete. This is because in the Microsoft.TeamFoundation.Build.targets file the task is referenced using the $(TeamBuildRegPath) property. Also note that the task needs the TeamFounationServerUrl and BuildUri properties. 
One solution here is to pass these properties in using the Command Line Arguments parameter: Here we pass in the parameters with the corresponding values from the current build. The build log shows that the build step has in fact been inserted: The problem, as you probably spotted, is that the build step is inserted at the top of the build log, instead of next to the MSBuild activity call. This is because we are using a legacy team build task (BuildStep), and that is how these are handled in TFS 2010. You can see the same behaviour when running builds that use the UpgradeTemplate: custom build steps show up at the top of the build log.
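For reference, here is a minimal sketch (not from the original post) of what goes into that parameter: the value is simply a string of MSBuild /p: switches. Composed as a small C# helper, it might look like the following, where the inputs are placeholders for the values taken from the current build (for example the BuildDetail workflow variable):

    using System;

    // Sketch only: builds the /p: switches that the MSBuild activity passes on to MSBuild.exe.
    // The property names match the ones the BuildStep task expects; the inputs are assumptions.
    static string BuildCommandLineArguments(Uri teamFoundationServerUrl, Uri buildUri)
    {
        return "/p:TeamFoundationServerUrl=" + teamFoundationServerUrl +
               " /p:BuildUri=" + buildUri;
    }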

    Read the article

  • An Introduction to ASP.NET Web API

    - by Rick Strahl
    Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5 and along with it, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft’s new jargon for a service, in case you’re wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC or just a Web API by itself. And you can also self-host Web API in your own applications from Console, Desktop or Service applications. If you're interested in a high level overview on what ASP.NET Web API is and how it fits into the ASP.NET stack you can check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by example introduction to ASP.NET Web API. All the code discussed in this article is available in GitHub: https://github.com/RickStrahl/AspNetWebApiArticle [republished from my Code Magazine Article and updated for RTM release of ASP.NET Web API] Getting Started To start I’ll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I’ll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers. Add a Web API Controller Class Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1. Figure 1: This is how you create a new Controller Class in Visual Studio   Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I’ll use a Music Album model to demonstrate basic behavior of Web API. The model consists of albums and related songs where an album has properties like Name, Artist and YearReleased and a list of songs with a SongName and SongLength as well as an AlbumId that links it to the album. You can find the code for the model (and the rest of these samples) on Github. To add the file manually, create a new folder called Model, and add a new class Album.cs and copy the code into it. There’s a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current that I’ll use for the examples. Before we look at what goes into the controller class though, let’s hook up routing so we can access this new controller. Hooking up Routing in Global.asax To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods. 
Using an extension method to ASP.NET’s static RouteTable class, you can use the MapHttpRoute() (in the System.Web.Http namespace) method to hook up the routing during Application_Start in global.asax.cs shown in Listing 1. using System; using System.Web.Routing; using System.Web.Http; namespace AspNetWebApi { public class Global : System.Web.HttpApplication { protected void Application_Start(object sender, EventArgs e) { RouteTable.Routes.MapHttpRoute( name: "AlbumVerbs", routeTemplate: "albums/{title}", defaults: new { title = RouteParameter.Optional, controller="AlbumApi" } ); } } } This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate respectively. HTTP Verb Routing Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I’ve defined does not include an {action} route value or action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the method to call on the controller: a GET request maps to any method that starts with Get. So methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and by a parameter signature that matches the route, query string or POST values. In lieu of the method name, the [HttpGet,HttpPost,HttpPut,HttpDelete, etc] attributes can also be used to designate the accepted verbs explicitly if you don’t want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST style resource APIs, it’s not required and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I’ll talk more about alternate routes later. When you’re finished with initial creation of files, your project should look like Figure 2.   Figure 2: The initial project has the new API Controller and Album model   Creating a small Album Model Now it’s time to create some controller methods to serve data. For these examples, I’ll use a very simple Album and Songs model to play with, as shown in Listing 2. 
public class Song { public string AlbumId { get; set; } [Required, StringLength(80)] public string SongName { get; set; } [StringLength(5)] public string SongLength { get; set; } } public class Album { public string Id { get; set; } [Required, StringLength(80)] public string AlbumName { get; set; } [StringLength(80)] public string Artist { get; set; } public int YearReleased { get; set; } public DateTime Entered { get; set; } [StringLength(150)] public string AlbumImageUrl { get; set; } [StringLength(200)] public string AmazonUrl { get; set; } public virtual List<Song> Songs { get; set; } public Album() { Songs = new List<Song>(); Entered = DateTime.Now; // Poor man's unique Id off GUID hash Id = Guid.NewGuid().GetHashCode().ToString("x"); } public void AddSong(string songName, string songLength = null) { this.Songs.Add(new Song() { AlbumId = this.Id, SongName = songName, SongLength = songLength }); } } Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this and that's what I'll access to retrieve the base data:public static class AlbumData { // sample data - static list public static List<Album> Current = CreateSampleAlbumData(); /// <summary> /// Create some sample data /// </summary> /// <returns></returns> public static List<Album> CreateSampleAlbumData() { … }} You can check out the full code for the data generation online. Creating an AlbumApiController Web API shares many concepts of ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API’s binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.public class AlbumApiController : ApiController { public IEnumerable<Album> GetAlbums() { var albums = AlbumData.Current.OrderBy(alb => alb.Artist); return albums; } public Album GetAlbum(string title) { var album = AlbumData.Current .SingleOrDefault(alb => alb.AlbumName.Contains(title)); return album; }} To access the first two requests, you can use the following URLs in your browser: http://localhost/aspnetWebApi/albums and http://localhost/aspnetWebApi/albums/Dirty%20Deeds Note that you’re not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API’s routing uses the HTTP GET verb to route to these methods that start with Getxxx(), with the first mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route. 
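As a quick aside (this sketch is not part of the original article), the same two endpoints can also be exercised from C# using System.Net.Http.HttpClient; the base address below is an assumption taken from the sample URLs above, and the raw response bodies are simply printed:

    using System;
    using System.Net.Http;

    class AlbumClientSample
    {
        static void Main()
        {
            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost/aspnetWebApi/") })
            {
                // GET /albums -> routed to GetAlbums() by HTTP Verb routing
                string allAlbums = client.GetAsync("albums/").Result
                                         .Content.ReadAsStringAsync().Result;

                // GET /albums/Dirty%20Deeds -> routed to GetAlbum(title)
                string oneAlbum = client.GetAsync("albums/Dirty%20Deeds").Result
                                        .Content.ReadAsStringAsync().Result;

                Console.WriteLine(allAlbums);
                Console.WriteLine(oneAlbum);
            }
        }
    }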
Content Negotiation When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3. Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.   Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what’s going on here? Shouldn’t we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they’d like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for: Accept: text/html, application/xhtml+xml,application/xml; q=0.9,*/*;q=0.8 IE9 asks for: Accept: text/html, application/xhtml+xml, */* Note that Chrome’s Accept header includes application/xml, which Web API finds in its list of supported media types and returns an XML response. IE9 does not include an Accept header type that Web API supports by default, and so it returns the default format, which is JSON. This is an important and very useful feature that was missing from any previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default you don’t have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-type of the data sent. The same formats allowed for output are also allowed on input. Again, you don’t have to do anything in your code – Web API automatically performs the deserialization from the content. Accessing Web API JSON Data with jQuery A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it’s easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.$.getJSON("albums/", function (albums) { // make knockout template visible $(".album").show(); // create view object and attach array var view = { albums: albums }; ko.applyBindings(view); }); Figure 4 shows this and the next example’s HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js). Figure 4: The Album Display sample uses JSON data loaded from Web API.   
The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which as you can see, requires very little code, instead using knockout’s data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it’s entirely up to you to decide what to do with the data in your client code. Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:$(".albumlink").live("click", function () { var id = $(this).data("id"); // title $.getJSON("albums/" + id, function (album) { ko.applyBindings(album, $("#divAlbumDialog")[0]); $("#divAlbumDialog").show(); }); }); Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element’s data ID attribute. Explicitly Overriding Output Format When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available and it’s easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It’s beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR). If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it’s a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those Controller methods that return static results – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API’s post-processing of the HTTP response on your behalf. HttpResponseMessage allows you to customize the response in great detail. Web API’s attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you’re new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation. 
To do this, I can adjust the output format and headers as shown in Listing 4.public HttpResponseMessage GetAlbums() { var albums = AlbumData.Current.OrderBy(alb => alb.Artist); // Create a new HttpResponse with Json Formatter explicitly var resp = new HttpResponseMessage(HttpStatusCode.OK); resp.Content = new ObjectContent<IEnumerable<Album>>( albums, new JsonMediaTypeFormatter()); // Get Default Formatter based on Content Negotiation //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums); resp.Headers.ConnectionClose = true; resp.Headers.CacheControl = new CacheControlHeaderValue(); resp.Headers.CacheControl.Public = true; return resp; } This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON.  If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the Response instance using the Request.CreateResponse method:var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums); This provides you an HttpResponse object that's pre-configured with the default formatter based on Content Negotiation. Once you have an HttpResponse object you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponse than the core ASP.NET Response object, with most options being explicitly configurable with enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features available much more discoverable even for non-hardcore REST/HTTP geeks. Non-Serialized Results The output returned doesn’t have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method. [HttpGet] public HttpResponseMessage AlbumArt(string title) { var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title)); if (album == null) { var resp = Request.CreateResponse<ApiMessageError>( HttpStatusCode.NotFound, new ApiMessageError("Album not found")); return resp; } // kinda silly - we would normally serve this directly // but hey - it's a demo. var http = new WebClient(); var imageData = http.DownloadData(album.AlbumImageUrl); // create response and return var result = new HttpResponseMessage(HttpStatusCode.OK); result.Content = new ByteArrayContent(imageData); result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg"); return result; } The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs - such as a resource can’t be found or a validation error – you can return an error response to the client that’s very specific to the error. In GetAlbumArt(), if the album can’t be found, we want to return a 404 Not Found status (and realistically no error, as it’s an image). 
Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn’t have an image. When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:return Request.CreateResponse<ApiMessageError>( HttpStatusCode.NotFound, new ApiMessageError("Album not found") );   If the album can be found, the image will be returned. The image is downloaded into a byte[] array, and then assigned to the result’s Content property. I created a new ByteArrayContent instance and assigned the image’s bytes and the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and handle the default formatter assignments that should be used to send the data out. Although HttpResponseMessage results require more code than returning a plain .NET value from a method, it allows much more control over the actual HTTP processing than automatic processing. It also makes it much easier to test your controller methods as you get a response object that you can check for specific status codes and output messages rather than just a result value. Routing Again Ok, let’s get back to the image example. Using the original HTTP Verb routing we have set up, there’s no good way to serve the image. In order to return my album art image I’d like to use a URL like this: http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image In order to create a URL like this, I have to create a new Controller because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action based routing into an HTTP Verb routing controller - you can only map HTTP Verbs and each method has to be unique based on parameter signature. You can't have multiple GET operations to methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above Image URL work with any combination of HTTP Verb plus Custom routing using the single Albums controller. There are a number of ways around this, but all involve additional controllers.  Personally, I think it’s easier to use explicit Action routing and then add custom routes if you need to simplify your URLs further. So in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class. 
For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1. RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/rpc/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumRpcApi", action = "GetAlbums" } ); Now I can use either of the following URLs to access the image: Custom route (/albums/{title}/image): http://localhost/aspnetWebApi/albums/PowerAge/image Action route (/albums/rpc/{action}/{title}): http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge Sending Data to the Server To send data to the server and add a new album, you can use an HTTP POST operation. Since I’m using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.public HttpResponseMessage PostAlbum(Album album) { if (!this.ModelState.IsValid) { // my custom error class var error = new ApiMessageError() { message = "Model is invalid" }; // add errors into our client error model for client foreach (var prop in ModelState.Values) { var modelError = prop.Errors.FirstOrDefault(); if (!string.IsNullOrEmpty(modelError.ErrorMessage)) error.errors.Add(modelError.ErrorMessage); else error.errors.Add(modelError.Exception.Message); } return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error); } // update song id which isn't provided foreach (var song in album.Songs) song.AlbumId = album.Id; // see if album exists already var matchedAlbum = AlbumData.Current .SingleOrDefault(alb => alb.Id == album.Id || alb.AlbumName == album.AlbumName); if (matchedAlbum == null) AlbumData.Current.Add(album); else matchedAlbum = album; // return a string to show that the value got here var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty); resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(), Encoding.UTF8, "text/plain"); return resp; } The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC you can check the model by looking at ModelState.IsValid. If it’s not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the result ApiMessageError object. When a binding error occurs, you’ll want to return an HTTP error response and it’s best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client along with my Conflict HTTP Status Code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you’ve added the album using the Post operation, you can hit the /albums/ URL to see that the new album was added. 
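Before looking at the jQuery version shown next, here is a hedged C# counterpart (not from the article) that posts a hand-built JSON album to PostAlbum() with HttpClient; the JSON body and base address are illustrative assumptions:

    using System;
    using System.Net.Http;
    using System.Text;

    class PostAlbumSample
    {
        static void Main()
        {
            // Minimal JSON matching a few of the Album properties shown in Listing 2
            var json = "{\"Id\":\"1a2b3c\",\"AlbumName\":\"Power Age\",\"Artist\":\"AC/DC\",\"YearReleased\":1977}";

            using (var client = new HttpClient { BaseAddress = new Uri("http://localhost/aspnetWebApi/") })
            {
                // POST /albums -> routed to PostAlbum(Album album) via HTTP Verb routing
                var response = client.PostAsync("albums/",
                    new StringContent(json, Encoding.UTF8, "application/json")).Result;

                Console.WriteLine((int)response.StatusCode);
                Console.WriteLine(response.Content.ReadAsStringAsync().Result);
            }
        }
    }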
The client jQuery code to call the POST operation from the client with jQuery is shown in Listing 7. var id = new Date().getTime().toString(); var album = { "Id": id, "AlbumName": "Power Age", "Artist": "AC/DC", "YearReleased": 1977, "Entered": "2002-03-11T18:24:43.5580794-10:00", "AlbumImageUrl": http://ecx.images-amazon.com/images/…, "AmazonUrl": http://www.amazon.com/…, "Songs": [ { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12}, { "SongName": "Downpayment Blues", "SongLength": 4.22 }, { "SongName": "Riff Raff", "SongLength": 2.42 } ] } $.ajax( { url: "albums/", type: "POST", contentType: "application/json", data: JSON.stringify(album), processData: false, beforeSend: function (xhr) { // not required since JSON is default output xhr.setRequestHeader("Accept", "application/json"); }, success: function (result) { // reload list of albums page.loadAlbums(); }, error: function (xhr, status, p3, p4) { var err = "Error"; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server as POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing in the Album instance. The jQuery code hooks up success and failure events. Success returns the result data, which is a string that’s echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded and if it is, you can extract it by de-serializing it and accessing the .message property. REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I’m not a big fan of separating out inserts and updates so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:public HttpResponseMessage PutAlbum(Album album) { return PostAlbum(album); } To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT. Model Binding with UrlEncoded POST Variables In the example in Listing 7 I used JSON objects to post a serialized object to a server method that accepted an strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to use plain UrlEncoded POST  values to send to the server. Web API supports Model Binding that works similar (but not the same) as MVC's model binding where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQUery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:$.post("albums/",{AlbumName: "Dirty Deeds", YearReleased: 1976 … },albumPostCallback); Although the code looks very similar to the client code we used before passing JSON, here the data passed is URL encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.). 
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server can handle both the JSON object and the UrlEncoded POST values. Dynamic Access to POST Data There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have UrlEncoded POST values, you can access them dynamically using a FormDataCollection:[HttpPost] public string PostAlbum(FormDataCollection form) { return string.Format("{0} - released {1}", form.Get("AlbumName"),form.Get("YearReleased")); } The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. However as a general rule, while ASP.NET's functionality is always available when running Web API hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET. If your client is sending JSON to your server, and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:[HttpPost] public string PostAlbum(JObject jsonData) { dynamic json = jsonData; JObject jalbum = json.Album; JObject juser = json.User; string token = json.UserToken; var album = jalbum.ToObject<Album>(); var user = juser.ToObject<User>(); return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token); } There are quite a few options available to you to receive data with Web API, which gives you more choices for the right tool for the job. Unfortunately one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content - the others have to come from the query string or route values. I have a couple of blog posts that explain what works and what doesn't here: Passing multiple POST parameters to Web API Controller Methods Mapping UrlEncoded POST Values in ASP.NET Web API   Handling Delete Operations Finally, to round out the server API code of the album example we've been discussing, here’s the DELETE verb controller method that allows removal of an album by its title:public HttpResponseMessage DeleteAlbum(string title) { var matchedAlbum = AlbumData.Current.Where(alb => alb.AlbumName == title) .SingleOrDefault(); if (matchedAlbum == null) return new HttpResponseMessage(HttpStatusCode.NotFound); AlbumData.Current.Remove(matchedAlbum); return new HttpResponseMessage(HttpStatusCode.NoContent); } To call this action method using jQuery, you can use:$(".removeimage").live("click", function () { var $el = $(this).parent(".album"); var txt = $el.find("a").text(); $.ajax({ url: "albums/" + encodeURIComponent(txt), type: "Delete", success: function (result) { $el.fadeOut().remove(); }, error: jqError }); }   Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via route value or the querystring. Routing Conflicts In all requests with the exception of the AlbumArt image example shown so far, I used HTTP Verb routing that I set up in Listing 1. 
HTTP Verb Routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing only. You saw one example that didn’t really fit – the return of an image where I created a custom route albums/{title}/image that required creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP verbs imposed by HTTP Verb routing. To understand some of the problems with verb routing, let’s look at another example. Let’s say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:[HttpGet] public IQueryable<Album> SortableAlbums() { var albums = AlbumData.Current; // generally should be done only on actual queryable results (EF etc.) // Done here because we're running with a static list but otherwise might be slow return albums.AsQueryable(); } If you compile this code and try to now access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn’t work. As I mentioned earlier for the image display, the only solution to get this method to work is to throw it into another controller. Because I already set up the AlbumRpcApiController I can add the method there. First, I should rename the method to SortableAlbums() so I’m not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL - it looks less like a method and more like a noun. I can then create a new route that handles direct-action mapping:RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/rpc/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumRpcApi", action = "GetAlbums" } ); As I am explicitly adding a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this: http://localhost/AspNetWebApi/albums/rpc/SortableAlbums Error Handling I’ve already done some minimal error handling in the examples. For example in Listing 6, I detected some known-error scenarios like model validation failing or a resource not being found and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? If you have a controller method, like this:[HttpGet] public void ThrowException() { throw new UnauthorizedAccessException("Unauthorized Access Sucka"); } You can call it with this: http://localhost/AspNetWebApi/albums/rpc/ThrowException The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:GlobalConfiguration .Configuration .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never; If you want more control over your error responses sent from code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException the response parameter is used to generate the output for the Controller action. 
[HttpGet] public void ThrowError() { var resp = Request.CreateResponse<ApiMessageError>( HttpStatusCode.BadRequest, new ApiMessageError("Your code stinks!")); throw new HttpResponseException(resp); } Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other Exceptions fired inside of WebAPI, HttpResponseException bypasses the Exception Filters installed and instead just outputs the response you provide. In this case, the serialized ApiMessageError result string is returned in the default serialization format – XML or JSON. You can pass any content to HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here’s a small helper method on the controller that you might use to send exception info back to the client consistently:private void ThrowSafeException(string message, HttpStatusCode statusCode = HttpStatusCode.BadRequest) { var errResponse = Request.CreateResponse<ApiMessageError>(statusCode, new ApiMessageError() { message = message }); throw new HttpResponseException(errResponse); } You can then use it to output any captured errors from code:[HttpGet] public void ThrowErrorSafe() { try { List<string> list = null; list.Add("Rick"); } catch (Exception ex) { ThrowSafeException(ex.Message); } }   Exception Filters Another more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic Exception filter implementation.public class UnhandledExceptionFilter : ExceptionFilterAttribute { public override void OnException(HttpActionExecutedContext context) { HttpStatusCode status = HttpStatusCode.InternalServerError; var exType = context.Exception.GetType(); if (exType == typeof(UnauthorizedAccessException)) status = HttpStatusCode.Unauthorized; else if (exType == typeof(ArgumentException)) status = HttpStatusCode.NotFound; var apiError = new ApiMessageError() { message = context.Exception.Message }; // create a new response and attach our ApiError object // which now gets returned on ANY exception result var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError); context.Response = errorResponse; base.OnException(context); } } Exception Filter Attributes can be assigned to an ApiController class like this:[UnhandledExceptionFilter] public class AlbumRpcApiController : ApiController or you can globally assign it to all controllers by adding it to the HTTP Configuration's Filters collection:GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter()); The latter is a great way to get global error trapping so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known-error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can on failure expect to see a JSON error result that corresponds to the real error that occurred rather than a 500 error along with HTML error page that IIS throws up. 
You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions - you often don't want to display error information from 'unknown' exceptions as they may contain sensitive system information or info that's not generally useful to users of your application/site. This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one example of how you can plug-in into the Web API request flow to modify output. Many more hooks exist and I’ll take a closer look at extensibility in Part 2 of this article in the future. Summary Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The key features to its usefulness are its ease of use with simple controller based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels and tight attention to exposing and making HTTP semantics easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that’s highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API. Related Resources Where does ASP.NET Web API fit? Sample Source Code on GitHub Passing multiple POST parameters to Web API Controller Methods Mapping UrlEncoded POST Values in ASP.NET Web API Creating a JSONP Formatter for ASP.NET Web API Removing the XML Formatter from ASP.NET Web API Applications © Rick Strahl, West Wind Technologies, 2005-2012 Posted in Web Api

    Read the article

  • Custom ASP.Net MVC 2 ModelMetadataProvider for using custom view model attributes

    - by SeanMcAlinden
    There are a number of ways of implementing a pattern for using custom view model attributes, the following is similar to something I’m using at work which works pretty well. The classes I’m going to create are really simple: 1. Abstract base attribute 2. Custom ModelMetadata provider which will derive from the DataAnnotationsModelMetadataProvider   Base Attribute MetadataAttribute using System; using System.Web.Mvc; namespace Mvc2Templates.Attributes {     /// <summary>     /// Base class for custom MetadataAttributes.     /// </summary>     public abstract class MetadataAttribute : Attribute     {         /// <summary>         /// Method for processing custom attribute data.         /// </summary>         /// <param name="modelMetaData">A ModelMetaData instance.</param>         public abstract void Process(ModelMetadata modelMetaData);     } } As you can see, the class simple has one method – Process. Process accepts the ModelMetaData which will allow any derived custom attributes to set properties on the model meta data and add items to its AdditionalValues collection.   Custom Model Metadata Provider For a quick explanation of the Model Metadata and how it fits in to the MVC 2 framework, it is basically a set of properties that are usually set via attributes placed above properties on a view model, for example the ReadOnly and HiddenInput attributes. When EditorForModel, DisplayForModel or any of the other EditorFor/DisplayFor methods are called, the ModelMetadata information is used to determine how to display the properties. All of the information available within the model metadata is also available through ViewData.ModelMetadata. The following class derives from the DataAnnotationsModelMetadataProvider built into the mvc 2 framework. I’ve overridden the CreateMetadata method in order to process any custom attributes that may have been placed above a property in a view model.   CustomModelMetadataProvider using System; using System.Collections.Generic; using System.Linq; using System.Web.Mvc; using Mvc2Templates.Attributes; namespace Mvc2Templates.Providers {     public class CustomModelMetadataProvider : DataAnnotationsModelMetadataProvider     {         protected override ModelMetadata CreateMetadata(             IEnumerable<Attribute> attributes,             Type containerType,             Func<object> modelAccessor,             Type modelType,             string propertyName)         {             var modelMetadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);               attributes.OfType<MetadataAttribute>().ToList().ForEach(x => x.Process(modelMetadata));               return modelMetadata;         }     } } As you can see, once the model metadata is created through the base method, a check for any attributes deriving from our new abstract base attribute MetadataAttribute is made, the Process method is then called on any existing custom attributes with the model meta data for the property passed in.   Hooking it up The last thing you need to do to hook it up is set the new CustomModelMetadataProvider as the current ModelMetadataProvider, this is done within the Global.asax Application_Start method. 
Global.asax protected void Application_Start()         {             AreaRegistration.RegisterAllAreas();               RegisterRoutes(RouteTable.Routes);               ModelMetadataProviders.Current = new CustomModelMetadataProvider();         }   In my next post, I’m going to demonstrate a cool custom attribute that turns a textbox into an ajax driven AutoComplete text box. Hope this is useful. Kind Regards, Sean McAlinden.
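To make the Process() hook above a little more concrete, here is a quick illustrative sketch (not from the post) of a derived attribute; the attribute itself and the "Watermark" key name are hypothetical:

    using System.Web.Mvc;

    namespace Mvc2Templates.Attributes
    {
        // Hypothetical example: stores watermark text in the metadata's AdditionalValues
        // so a custom editor template can read it via ViewData.ModelMetadata.AdditionalValues["Watermark"].
        public class WatermarkAttribute : MetadataAttribute
        {
            private readonly string text;

            public WatermarkAttribute(string text)
            {
                this.text = text;
            }

            public override void Process(ModelMetadata modelMetaData)
            {
                modelMetaData.AdditionalValues["Watermark"] = text;
            }
        }
    }

Applied as [Watermark("e.g. AC/DC")] above a view model property, the value becomes available to whatever template renders that property.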

    Read the article

  • Dependency Replication with TFS 2010 Build

    - by Jakob Ehn
    Some time ago, I wrote a post about how to implement dependency replication using TFS 2008 Build. We use this for Library builds, where we set up a build definition for a common library, and have the build check the resulting assemblies back into source control. The folder is then branched to the applications that need to reference the common library. See the above post for more details. Of course, we have reimplemented this feature in TFS 2010 Build, which results in a much nicer experience for the developer who wants to set up a new library build. Here is how it looks: There is a separate build process template for library builds registered in all team projects The following properties are used to configure the library build: Deploy Folder in Source Control is the server path where the assemblies should be checked in DeploymentFiles is a list of files and/or extensions that defines which files to check in. Default here is *.dll;*.pdb which means that all assemblies and debug symbols will be checked in. We can also type for example CommonLibrary.*;SomeOtherAssembly.dll in order to exclude other assemblies You can also see that we are versioning the assemblies as part of the build. This is important, since the resulting assemblies will be deployed together with the referencing application.   When the build executes, it will check if the matching assemblies exist in source control; if not, it will add the files automatically:   After the build has finished, we can see in the history of the TestDeploy folder that the build service account has in fact checked in a new version: Nice!   The implementation of the library build process template is not very complicated, it is a combination of customization of the build process template and some custom activities. We use the generic TFActivity (http://geekswithblogs.net/jakob/archive/2010/11/03/performing-checkins-in-tfs-2010-build.aspx) to check in and out files, but for the part that checks if a file exists and adds it to source control, it was easier to do this in a custom activity:   public sealed class AddFilesToSourceControl : BaseCodeActivity { // Files to add to source control [RequiredArgument] public InArgument<IEnumerable<string>> Files { get; set; } [RequiredArgument] public InArgument<Workspace> Workspace { get; set; } // If your activity returns a value, derive from CodeActivity<TResult> // and return the value from the Execute method. protected override void Execute(CodeActivityContext context) { foreach (var file in Files.Get(context)) { if (!File.Exists(file)) { throw new ApplicationException("Could not locate " + file); } var ws = this.Workspace.Get(context); string serverPath = ws.TryGetServerItemForLocalItem(file); if( !String.IsNullOrEmpty(serverPath)) { if (!ws.VersionControlServer.ServerItemExists(serverPath, ItemType.File)) { TrackMessage(context, "Adding file " + file); ws.PendAdd(file); } else { TrackMessage(context, "File " + file + " already exists in source control"); } } else { TrackMessage(context, "No server path for " + file); } } } } This build template is a very nice tool that makes it easy to do dependency replication with TFS 2010. Next, I will add functionality for automatically merging the assemblies (using ILMerge) as part of the build; we do this to keep the number of references to a minimum.
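For completeness, here is a minimal sketch (not from the post, which uses the generic TFActivity for this step) of how the changes pended by AddFilesToSourceControl might later be checked in through the version control client API; the comment text and helper shape are assumptions:

    using Microsoft.TeamFoundation.VersionControl.Client;

    public static class CheckInHelper
    {
        // Checks in whatever the workspace currently has pending (including the adds
        // pended by AddFilesToSourceControl) and returns the resulting changeset id.
        public static int CheckInPendingChanges(Workspace workspace, string comment)
        {
            PendingChange[] changes = workspace.GetPendingChanges();
            if (changes.Length == 0)
            {
                return 0; // nothing to check in
            }
            return workspace.CheckIn(changes, comment);
        }
    }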

    Read the article

  • Configuring Fed Authentication Methods in OIF / IdP

    - by Damien Carru
    In this article, I will provide examples on how to configure OIF/IdP to map OAM Authentication Schemes to Federation Authentication Methods, based on the concepts introduced in my previous entry. I will show examples for the three protocols supported by OIF: SAML 2.0 SSO SAML 1.1 SSO OpenID 2.0 Enjoy the reading! Configuration As I mentioned in my previous article, mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). As such, the WLST commands to set those mappings will involve: Either the SP Partner Profile and affect all Partners referencing that profile, which do not override the Federation Authentication Method to OAM Authentication Scheme mappings Or the SP Partner entry, which will only affect the SP Partner It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored. WLST Commands The two OIF WLST commands that can be used to define mapping Federation Authentication Methods to OAM Authentication Schemes are: addSPPartnerProfileAuthnMethod() to define a mapping on an SP Partner Profile, taking as parameters: The name of the SP Partner Profile The Federation Authentication Method The OAM Authentication Scheme name addSPPartnerAuthnMethod() to define a mapping on an SP Partner , taking as parameters: The name of the SP Partner The Federation Authentication Method The OAM Authentication Scheme name Note: I will discuss in a subsequent article the other parameters of those commands. In the next sections, I will show examples on how to use those methods: For SAML 2.0, I will configure the SP Partner Profile, that will apply all the mappings to SP Partners referencing this profile, unless they override mapping definition For SAML 1.1, I will configure the SP Partner. For OpenID 2.0, I will configure the SP/RP Partner SAML 2.0 Test Setup In this setup, OIF is acting as an IdP and is integrated with a remote SAML 2.0 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Use BasicScheme as the Authentication Scheme Map BasicSessionScheme  to  the urn:oasis:names:tc:SAML:2.0:ac:classes:Password Federation Authentication Method Use OAMLDAPPluginAuthnScheme as the Authentication Scheme Map OAMLDAPPluginAuthnScheme to  the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> BasicScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner Profile to BasicScheme instead of LDAPScheme. I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp") ): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerProfileDefaultScheme() command:setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "BasicScheme") Exit the WLST environment:exit() The user will now be challenged via HTTP Basic Authentication defined in the BasicScheme for AcmeSP. Also, as noted earlier, the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via HTTP Basic Authentication, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> Mapping BasicScheme To change the Federation Authentication Method mapping for the BasicScheme to urn:oasis:names:tc:SAML:2.0:ac:classes:Password instead of urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport for the saml20-sp-partner-profile SAML 2.0 SP Partner Profile (the profile to which my AcmeSP Partner is bound to), I will execute the addSPPartnerProfileAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerProfileAuthnMethod() command:addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:Password", "BasicScheme") Exit the WLST environment:exit() After authentication via HTTP Basic Authentication, OIF/IdP would now issue an Assertion similar to (see that the AuthnContextClassRef was changed from PasswordProtectedTransport to Password): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:Password                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> OAMLDAPPluginAuthnScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner Profile to OAMLDAPPluginAuthnScheme instead of BasicScheme. 
I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp") ): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerProfileDefaultScheme() command:setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() The user will now be challenged via FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrarily to LDAPScheme and BasicScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Methods. As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthnContextClassRef set to OAMLDAPPluginAuthnScheme): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef> OAMLDAPPluginAuthnScheme                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> Mapping OAMLDAPPluginAuthnScheme To add the OAMLDAPPluginAuthnScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport mapping, I will execute the addSPPartnerProfileAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerProfileAuthnMethod() command:addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to PasswordProtectedTransport): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>[email protected]</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> SAML 1.1 Test Setup In this setup, OIF is acting as an IdP and is integrated with a remote SAML 1.1 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Use OAMLDAPPluginAuthnScheme as the Authentication Scheme Map OAMLDAPPluginAuthnScheme to  the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method Use LDAPScheme as the Authentication Scheme Map LDAPScheme to  the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also the default Federation Authentication Method mappings configuration maps only the urn:oasis:names:tc:SAML:1.0:am:password to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> OAMLDAPPluginAuthnScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner to OAMLDAPPluginAuthnScheme instead of LDAPScheme. 
I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme to be used as the default for the SP Partner: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerDefaultScheme() command:setSPPartnerDefaultScheme("AcmeSP", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() The user will be challenged via FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrarily to LDAPScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Methods (in the SP Partner Profile). As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to OAMLDAPPluginAuthnScheme): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="OAMLDAPPluginAuthnScheme">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        
</dsig:Signature>    </saml:Assertion></samlp:Response> Mapping OAMLDAPPluginAuthnScheme To map the OAMLDAPPluginAuthnScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password for this SP Partner only, I will execute the addSPPartnerAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "OAMLDAPPluginAuthnScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to password): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> LDAPScheme as Authentication Scheme I will now show that by defining a Federation Authentication Mapping at the Partner level, this now ignores all mappings defined at the SP Partner Profile level. 
For this test, I will switch the default Authentication Scheme for this SP Partner back to LDAPScheme, and the Assertion issued by OIF/IdP will not be able to map this LDAPScheme to a Federation Authentication Method anymore, since A Federation Authentication Method mapping is defined at the SP Partner level and thus the mappings defined at the SP Partner Profile are ignored The LDAPScheme is not listed in the mapping at the Partner level I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme to be used as the default for this SP Partner: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the setSPPartnerDefaultScheme() command:setSPPartnerDefaultScheme("AcmeSP", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to LDAPScheme): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="LDAPScheme">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> Mapping LDAPScheme at Partner Level To fix this issue, we will need to add the LDAPScheme  to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password mapping for this SP Partner only. 
I will execute the addSPPartnerAuthnMethod() method: Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from LDAPScheme to password): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> OpenID 2.0 In the OpenID 2.0 flows, the RP must request use of PAPE, in order for OIF/IdP/OP to include PAPE information. For OpenID 2.0, the configuration will involve mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. The WLST command will take a list of policies, delimited by the ',' character, instead of SAML 2.0 or SAML 1.1 where a single Federation Authentication Method had to be specified. Test Setup In this setup, OIF is acting as an IdP/OP and is integrated with a remote OpenID 2.0 SP/RP partner identified by AcmeRP. In this test, I will perform Federation SSO with OIF/IdP configured to: Use LDAPScheme as the Authentication Scheme Map LDAPScheme to  the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies Federation Authentication Methods (the second one is a custom for this use case) LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. 
No Federation Authentication Method is defined OOTB for OpenID 2.0, so if the IdP/OP issue an SSO response with a PAPE Response element, it will specify the scheme name instead of Federation Authentication Methods After authentication via FORM, OIF/IdP would issue an SSO Response similar to: https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=LDAPScheme&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D Mapping LDAPScheme To map the LDAP Scheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies Federation Authentication Methods, I will execute the addSPPartnerAuthnMethod() method (the policies will be comma separated): Enter the WLST environment by executing:$IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server:connect() Navigate to the Domain Runtime branch:domainRuntime() Execute the addSPPartnerAuthnMethod() command:addSPPartnerAuthnMethod("AcmeRP", "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant,http://openid-policies/password-protected", "LDAPScheme") Exit the WLST environment:exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from LDAPScheme to the two policies): 
https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant+http%3A%2F%2Fopenid-policies%2Fpassword-protected&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D In the next article, I will cover how OIF/IdP can be configured so that an SP can request a specific Federation Authentication Method to challenge the user during Federation SSO.Cheers,Damien Carru

    Read the article

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET Wait Stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether the question has any merit or not. Before we continue to the resolution, let us understand what CXPACKET Wait Stats are. The official definition suggests that the CXPACKET wait occurs when trying to synchronize the query processor exchange iterator; you may consider lowering the degree of parallelism if contention on this wait type develops into a problem (from BOL). In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query, and each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET Wait Stat: the threads which finished first have to wait for the slower threads to finish, and the wait recorded by a completed thread is called the CXPACKET wait. Note that the CXPACKET wait is registered by the completed threads and not by the ones which are still unfinished. “Note that not all CXPACKET wait types are bad. You might experience a case when it totally makes sense, and there might also be cases when it is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query.” Now let us see what the best practices to reduce CXPACKET Wait Stats are. The suggestions you will find if you search online will mostly tell you to set ‘maximum degree of parallelism’ to 1. I do agree with these suggestions, too; however, I think this is not the final resolution. As soon as you force every query to run on a single CPU, you will get very bad performance from the queries which were actually performing okay when using parallelism. The best suggestion is to set ‘maximum degree of parallelism’ to a lower number or 1 (be very careful with this – it can create more problems) but tune the queries which can benefit from multiple CPUs. You can use the query hint OPTION (MAXDOP 0) to allow an individual query to use parallelism. Here are two quick scripts which help to resolve these issues: Change MAXDOP at Server Level EXEC sys.sp_configure N'max degree of parallelism', N'1' GO RECONFIGURE WITH OVERRIDE GO Run Query with all the CPUs (using parallelism) USE AdventureWorks GO SELECT * FROM Sales.SalesOrderDetail ORDER BY ProductID OPTION (MAXDOP 0) GO Below is the blog post which will help you to find all the parallel queries on your server: SQL SERVER – Find Queries using Parallelism from Cached Plan. Please note that running queries on a single CPU may worsen your performance and is not recommended at all; in fact, this can be very bad advice. I strongly suggest that you identify the offending queries and tune them instead of following any other suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Endless terrain in jMonkey using TerrainGrid fails to render

    - by nightcrawler23
    I have started to learn game development using jMonkey engine. I am able to create single tile of terrain using TerrainQuad but as the next step I'm stuck at making it infinite. I have gone through the wiki and want to use the TerrainGrid class but my code does not seem to work. I have looked around on the web and searched other forums but cannot find any other code example to help. I believe in the below code, ImageTileLoader returns an image which is the heightmap for that tile. I have modified it to return the same image every time. But all I see is a black window. The Namer method is not even called. terrain = new TerrainGrid("terrain", patchSize, 513, new ImageTileLoader(assetManager, new Namer() { public String getName(int x, int y) { //return "Scenes/TerrainMountains/terrain_" + x + "_" + y + ".png"; System.out.println("X = " + x + ", Y = " + y); return "Textures/heightmap.png"; } })); These are my sources: jMonkeyEngine 3 Tutorial (10) - Hello Terrain TerrainGridTest.java ImageTileLoader This is the result when i use TerrainQuad: , My full code: // Sample 10 - How to create fast-rendering terrains from heightmaps, and how to // use texture splatting to make the terrain look good. public class HelloTerrain extends SimpleApplication { private TerrainQuad terrain; Material mat_terrain; private float grassScale = 64; private float dirtScale = 32; private float rockScale = 64; public static void main(String[] args) { HelloTerrain app = new HelloTerrain(); app.start(); } private FractalSum base; private PerturbFilter perturb; private OptimizedErode therm; private SmoothFilter smooth; private IterativeFilter iterate; @Override public void simpleInitApp() { flyCam.setMoveSpeed(200); initMaterial(); AbstractHeightMap heightmap = null; Texture heightMapImage = assetManager.loadTexture("Textures/heightmap.png"); heightmap = new ImageBasedHeightMap(heightMapImage.getImage()); heightmap.load(); int patchSize = 65; //terrain = new TerrainQuad("my terrain", patchSize, 513, heightmap.getHeightMap()); // * This Works but below doesnt work* terrain = new TerrainGrid("terrain", patchSize, 513, new ImageTileLoader(assetManager, new Namer() { public String getName(int x, int y) { //return "Scenes/TerrainMountains/terrain_" + x + "_" + y + ".png"; System.out.println("X = " + x + ", Y = " + y); return "Textures/heightmap.png"; // set to return the sme hieghtmap image. 
} })); terrain.setMaterial(mat_terrain); terrain.setLocalTranslation(0,-100, 0); terrain.setLocalScale(2f, 1f, 2f); rootNode.attachChild(terrain); TerrainLodControl control = new TerrainLodControl(terrain, getCamera()); terrain.addControl(control); } public void initMaterial() { // TERRAIN TEXTURE material this.mat_terrain = new Material(this.assetManager, "Common/MatDefs/Terrain/HeightBasedTerrain.j3md"); // GRASS texture Texture grass = this.assetManager.loadTexture("Textures/white.png"); grass.setWrap(WrapMode.Repeat); this.mat_terrain.setTexture("region1ColorMap", grass); this.mat_terrain.setVector3("region1", new Vector3f(-10, 0, this.grassScale)); // DIRT texture Texture dirt = this.assetManager.loadTexture("Textures/white.png"); dirt.setWrap(WrapMode.Repeat); this.mat_terrain.setTexture("region2ColorMap", dirt); this.mat_terrain.setVector3("region2", new Vector3f(0, 900, this.dirtScale)); Texture building = this.assetManager.loadTexture("Textures/building.png"); building.setWrap(WrapMode.Repeat); this.mat_terrain.setTexture("slopeColorMap", building); this.mat_terrain.setFloat("slopeTileFactor", 32); this.mat_terrain.setFloat("terrainSize", 513); } }

    Read the article

  • Change Desktop Resolution With a Keyboard Shortcut

    - by Matthew Guay
    Do you find yourself changing your monitor resolution several times a day?  If so, you might like this handy way to set a keyboard shortcut for your most-used resolutions. Most users rarely have to change their screen resolution often, as LCD monitors usually only look best at their native resolution.  But netbooks present a unique situation, as their native resolution is usually only 1024×600.  Some newer netbooks offer higher resolutions which may not looks as crisp as the native resolution but can be handy for using a program that expects a higher resolution.  This is the perfect situation for a keyboard shortcut to help you change the resolution without having to hassle with dialogs and menus each time, and HRC – HotKey Resolution Changer makes it easy to do. Create Keyboard Shortcuts Download the HRC – HotKey Resolution Changer (link below), unzip, and then run HRC.exe in the folder. This will start a tray icon, and will not automatically open the HRC window.  You don’t have to install HRC.  Double-click the tray icon to open it.  Note: Windows 7 automatically hides new tray icons, so if you can’t see it, click the arrow to see the hidden tray icons. By default, HRC will show two entries with your default resolutions, color depth, and refresh rate. Add a keyboard shortcut by clicking the Change button over the resolution.  Press the keyboard shortcut you want to press to switch to that resolution; we entered Ctrl+Alt+1 for our default resolution.  Make sure not to use a keyboard shortcut you use in another application, as this will override it.  Click Set when you’ve entered the hotkey(s) you want. Now, on the second entry, select the resolution you want for your alternate resolution.  The drop-down list will only show your monitor’s supported resolutions, so you don’t have to worry about choosing an incorrect resolution.  You can also set a different color depth or refresh rate for this resolution.  Now add a keyboard shortcut for this resolution as well. You can set keyboard shortcuts for up to 9 different resolutions with HRC.  Click the Select number of HotKeys button on the left, and choose the number of resolutions you want to set.  Here we have unique keyboard shortcuts for our three most-used resolutions on our netbook. HRC must be kept running to use the keyboard shortcuts, so click the Minimize to tray icon which is the second icon to the right.  This will keep it running in the tray. If you want to be able to change your resolution anytime, you’ll want HRC to automatically start with Windows.  Create a shortcut to HRC, and paste it into your Windows startup folder.  You can easily open this folder by entering the following in the Run command or in the address bar in Explorer: %appdata%\Microsoft\Windows\Start Menu\Programs\Startup   Conclusion HRC- HotKey Resolution Changer gives you a great way to quickly change your screen resolution with a keyboard shortcut.  Whether or not you love keyboard shortcuts, this is still a much easier way to switch between your most commonly used resolutions. 
    Download HRC – HotKey Resolution Changer

    Read the article

  • Understanding MotionEvent to implement a virtual DPad and Buttons on Android (Multitouch)

    - by Fabio Gomes
    I once implemented a DPad in XNA and now I'm trying to port it to android, put, I still don't get how the touch events work in android, the more I read the more confused I get. Here is the code I wrote so far, it works, but guess that it will only handle one touch point. public boolean onTouchEvent(MotionEvent event) { if (event.getPointerCount() == 0) return true; int touchX = -1; int touchY = -1; pressedDirection = DPadDirection.None; int actionCode = event.getAction() & MotionEvent.ACTION_MASK; if (actionCode == MotionEvent.ACTION_UP) { if (event.getPointerId(0) == idDPad) { pressedDirection = DPadDirection.None; idDPad = -1; } } else if (actionCode == MotionEvent.ACTION_DOWN || actionCode == MotionEvent.ACTION_MOVE) { touchX = (int)event.getX(); touchY = (int)event.getY(); if (rightRect.contains(touchX, touchY)) pressedDirection = DPadDirection.Right; else if (leftRect.contains(touchX, touchY)) pressedDirection = DPadDirection.Left; else if (upRect.contains(touchX, touchY)) pressedDirection = DPadDirection.Up; else if (downRect.contains(touchX, touchY)) pressedDirection = DPadDirection.Down; if (pressedDirection != DPadDirection.None) idDPad = event.getPointerId(0); } return true; } The logic is: Test if there is a "DOWN" or "MOVED" event, then if one of this events collides with one of the 4 rectangles of my DPad, I set the pressedDirectin variable to the side of the touch event, then I read the DPad actual pressed direction in my Update() event on another class. The thing I'm not sure, is how do I get track of the touch points, I store the ID of the touch point which generated the diretion that is being stored (last one), so when this ID is released I set the Direction to None, but I'm really confused about how to handle this in android, here is the code I had in XNA: public override void Update(GameTime gameTime) { PressedDirection = DpadDirection.None; foreach (TouchLocation _touchLocation in TouchPanel.GetState()) { if (_touchLocation.State == TouchLocationState.Released) { if (_touchLocation.Id == _idDPad) { PressedDirection = DpadDirection.None; _idDPad = -1; } } else if (_touchLocation.State == TouchLocationState.Pressed || _touchLocation.State == TouchLocationState.Moved) { _intersectRect.X = (int)_touchLocation.Position.X; _intersectRect.Y = (int)_touchLocation.Position.Y; _intersectRect.Width = 1; _intersectRect.Height = 1; if (_intersectRect.Intersects(_rightRect)) PressedDirection = DpadDirection.Right; else if (_intersectRect.Intersects(_leftRect)) PressedDirection = DpadDirection.Left; else if (_intersectRect.Intersects(_upRect)) PressedDirection = DpadDirection.Up; else if (_intersectRect.Intersects(_downRect)) PressedDirection = DpadDirection.Down; if (PressedDirection != DpadDirection.None) { _idDPad = _touchLocation.Id; continue; } } } base.Update(gameTime); } So, first of all: Am I doing this correctly? if not, why? I don't want my DPad to handle multiple directions, but I still didn't get how to handle the multiple touch points, is the event called for every touch point, or all touch points comes in a single call? I still don't get it.
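    For reference, the usual Android multitouch pattern for tracking which pointer owns the DPad looks like the sketch below. This is not the poster's code: directionAt(x, y) is a hypothetical helper that hit-tests the four rectangles (rightRect, leftRect, upRect, downRect) and returns a DPadDirection, while idDPad and pressedDirection are the fields from the code above.

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        int action = event.getActionMasked(); // action code without the pointer-index bits
        int index = event.getActionIndex();   // which pointer triggered a DOWN/UP
        switch (action) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_POINTER_DOWN: {
                // Claim the DPad only if no other finger already owns it
                if (idDPad == -1) {
                    DPadDirection dir = directionAt((int) event.getX(index), (int) event.getY(index));
                    if (dir != DPadDirection.None) {
                        idDPad = event.getPointerId(index);
                        pressedDirection = dir;
                    }
                }
                break;
            }
            case MotionEvent.ACTION_MOVE: {
                // MOVE is not split per pointer, so walk all active pointers
                for (int i = 0; i < event.getPointerCount(); i++) {
                    if (event.getPointerId(i) == idDPad) {
                        pressedDirection = directionAt((int) event.getX(i), (int) event.getY(i));
                    }
                }
                break;
            }
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_POINTER_UP: {
                // Release the DPad only when the owning finger goes up
                if (event.getPointerId(index) == idDPad) {
                    idDPad = -1;
                    pressedDirection = DPadDirection.None;
                }
                break;
            }
            case MotionEvent.ACTION_CANCEL: {
                idDPad = -1;
                pressedDirection = DPadDirection.None;
                break;
            }
        }
        return true;
    }

    Because every branch keys on getPointerId() instead of always reading pointer 0, a second finger (for example on a fire button) can press and release without stealing or dropping the DPad.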

    Read the article

  • Amazon Product Advertising API SOAP Namespace Changes

    - by Rick Strahl
    About two months ago (twowards the end of February 2012 I think) Amazon decided to change the namespace of the Product Advertising API. The error that would come up was: <ItemSearchResponse > was not expected. If you've used the Amazon Product Advertising API you probably know that Amazon has made it a habit to break the services every few years or so and I guess last month was about the time for another one. Basically the service namespace of the document has been changed and responses from the service just failed outright even though the rest of the schema looks fine. Now I looked around for a while trying to find a recent update to the Product Advertising API - something semi-official looking but everything is dated around 2009. Really??? And it's not just .NET - the newest thing on the sample/APIs is dated early 2011 and a handful of 2010 samples. There are newer full APIs for the 'cloud' offerings, but the Product Advertising API apparently isn't part of that. After searching for quite a bit trying to trace this down myself and trying some of the newer samples (which also failed) I found an obscure forum post that describes the solution of getting past the namespace issue. FWIW, I've been using an old version of the Product Advertising API using the old Microsoft WSE3 services (pre-WCF), which provides some of the WS* security features required by the Amazon service. The fix for this code is to explicitly override the namespace declaration on each of the imported service method signatures. The old service namespace (at least on my build) was: http://webservices.amazon.com/AWSECommerceService/2009-03-31 and it should be changed to: http://webservices.amazon.com/AWSECommerceService/2011-08-01 Change it on the class header:[Microsoft.Web.Services3.Messaging.SoapService("http://webservices.amazon.com/AWSECommerceService/2011-08-01")] [System.Xml.Serialization.XmlIncludeAttribute(typeof(Property[]))] [System.Xml.Serialization.XmlIncludeAttribute(typeof(BrowseNode[]))] [System.Xml.Serialization.XmlIncludeAttribute(typeof(TransactionItem[]))] public partial class AWSECommerceService : Microsoft.Web.Services3.Messaging.SoapClient { and on all method signatures:[Microsoft.Web.Services3.Messaging.SoapMethodAttribute("http://soap.amazon.com/ItemSearch")] [return: System.Xml.Serialization.XmlElementAttribute("ItemSearchResponse", Namespace="http://webservices.amazon.com/AWSECommerceService/2011-08-01")] public ItemSearchResponse ItemSearch(ItemSearch ItemSearch1) { Microsoft.Web.Services3.SoapEnvelope results = base.SendRequestResponse("ItemSearch", ItemSearch1); return ((ItemSearchResponse)(results.GetBodyObject(typeof(ItemSearchResponse), this.SoapServiceAttribute.TargetNamespace))); } It's easy to do with a Search and Replace on the above strings. Amazon Services <rant> FWIW, I've not been impressed by Amazon's service offerings. While the services work well, their documentation and tool support is absolutely horrendous. I was recently working with a customer on an old AWS application and their old API had been completely removed with a new API that wasn't even a close match. One old API call resulted in requiring three different APIs to perform the same functionality. We had to re-write the entire piece from scratch essentially. The documentation was downright wrong, and incomplete and so scattered it was next to impossible to follow. 
    The examples weren't examples at all - they're mockups of real service calls with fake data that didn't even provide everything that was required to make the same service calls work. Additionally, there appears to be just about no public support from Amazon, only peer support which is sparse at best - and getting a hold of somebody at Amazon, even for pay, seems to be a mythical task. It's a terrible business model they have going. I can't see why anybody would put themselves through this sort of customer and development experience. Sad really, but an experience we see more and more these days. Nobody puts in the time to document anything anymore, leaving it to devs to figure this stuff out over and over again… </rant> © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, Web Services

    Read the article

  • Extended Logging with Caller Info Attributes

    - by João Angelo
    .NET 4.5 caller info attributes may be one of those features that do not get much airtime, but nonetheless are a great addition to the framework. These attributes will allow you to programmatically access information about the caller of a given method, more specifically, the code file full path, the member name of the caller and the line number at which the method was called. They are implemented by taking advantage of C# 4.0 optional parameters and are a compile time feature so as an added bonus the returned member name is not affected by obfuscation. The main usage scenario will be for tracing and debugging routines as will see right now. In this sample code I’ll be using NLog, but the example is also applicable to other logging frameworks like log4net. First an helper class, without any dependencies and that can be used anywhere to obtain caller information: using System; using System.IO; using System.Runtime.CompilerServices; public sealed class CallerInfo { private CallerInfo(string filePath, string memberName, int lineNumber) { this.FilePath = filePath; this.MemberName = memberName; this.LineNumber = lineNumber; } public static CallerInfo Create( [CallerFilePath] string filePath = "", [CallerMemberName] string memberName = "", [CallerLineNumber] int lineNumber = 0) { return new CallerInfo(filePath, memberName, lineNumber); } public string FilePath { get; private set; } public string FileName { get { return this.fileName ?? (this.fileName = Path.GetFileName(this.FilePath)); } } public string MemberName { get; private set; } public int LineNumber { get; private set; } public override string ToString() { return string.Concat(this.FilePath, "|", this.MemberName, "|", this.LineNumber); } private string fileName; } Then an extension class specific for NLog Logger: using System; using System.Runtime.CompilerServices; using NLog; public static class LoggerExtensions { public static void TraceMemberEntry( this Logger logger, [CallerFilePath] string filePath = "", [CallerMemberName] string memberName = "", [CallerLineNumber] int lineNumber = 0) { LogMemberEntry(logger, LogLevel.Trace, filePath, memberName, lineNumber); } public static void TraceMemberExit( this Logger logger, [CallerFilePath] string filePath = "", [CallerMemberName] string memberName = "", [CallerLineNumber] int lineNumber = 0) { LogMemberExit(logger, LogLevel.Trace, filePath, memberName, lineNumber); } public static void DebugMemberEntry( this Logger logger, [CallerFilePath] string filePath = "", [CallerMemberName] string memberName = "", [CallerLineNumber] int lineNumber = 0) { LogMemberEntry(logger, LogLevel.Debug, filePath, memberName, lineNumber); } public static void DebugMemberExit( this Logger logger, [CallerFilePath] string filePath = "", [CallerMemberName] string memberName = "", [CallerLineNumber] int lineNumber = 0) { LogMemberExit(logger, LogLevel.Debug, filePath, memberName, lineNumber); } public static void LogMemberEntry( this Logger logger, LogLevel logLevel, [CallerFilePath] string filePath = "", [CallerMemberName] string memberName = "", [CallerLineNumber] int lineNumber = 0) { const string MsgFormat = "Entering member {1} at line {2}"; InternalLog(logger, logLevel, MsgFormat, filePath, memberName, lineNumber); } public static void LogMemberExit( this Logger logger, LogLevel logLevel, [CallerFilePath] string filePath = "", [CallerMemberName] string memberName = "", [CallerLineNumber] int lineNumber = 0) { const string MsgFormat = "Exiting member {1} at line {2}"; InternalLog(logger, logLevel, MsgFormat, 
filePath, memberName, lineNumber); } private static void InternalLog( Logger logger, LogLevel logLevel, string format, string filePath, string memberName, int lineNumber) { if (logger == null) throw new ArgumentNullException("logger"); if (logLevel == null) throw new ArgumentNullException("logLevel"); logger.Log(logLevel, format, filePath, memberName, lineNumber); } } Finally an usage example: using NLog; internal static class Program { private static readonly Logger Logger = LogManager.GetCurrentClassLogger(); private static void Main(string[] args) { Logger.TraceMemberEntry(); // Compile time feature // Next three lines output the same except for line number Logger.Trace(CallerInfo.Create().ToString()); Logger.Trace(() => CallerInfo.Create().ToString()); Logger.Trace(delegate() { return CallerInfo.Create().ToString(); }); Logger.TraceMemberExit(); } } NOTE: Code for helper class and Logger extension also available here.

    Read the article

  • SharePoint 2010 Sandboxed solution SPGridView

    - by Steve Clements
    If you didn’t know, you probably will soon: the SPGridView is not available in Sandboxed solutions. To be honest there doesn’t seem to be a great deal of information out there about the whys and what nots; basically it’s not part of the Sandbox SharePoint API. Of course the error message from SharePoint is about as useful as a punch in the face… An unexpected error has been encountered in this Web Part.  Error: A Web Part or Web Form Control on this Page cannot be displayed or imported. You don't have Add and Customize Pages permissions required to perform this action …that’s if you have debug=true set, if not the classic “This webpart cannot be added” !! Love that one! But with a little digging you should find something like this… [TypeLoadException: Could not load type Microsoft.SharePoint.WebControls.SPGridView from assembly 'Microsoft.SharePoint, Version=14.900.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.]   Depending on what you want to do with the SPGridView, this may not help at all.  But I’m looking to inherit the theme of the site and style it accordingly. After spending a bit of time with Chrome’s FireBug I was able to get the required CSS classes.  I created my own class inheriting from GridView (note the lack of a preceding SP!) and simply set the styles in there. Inherit from the standard GridView: public class PSGridView : GridView   Set the styles in the constructor… public PSGridView() {     this.CellPadding = 2;     this.CellSpacing = 0;     this.GridLines = GridLines.None;     this.CssClass = "ms-listviewtable";     this.Attributes.Add("style", "border-bottom-style: none; border-right-style: none; width: 100%; border-collapse: collapse; border-top-style: none; border-left-style: none;");     this.HeaderStyle.CssClass = "ms-viewheadertr";     this.RowStyle.CssClass = "ms-itmhover";     this.SelectedRowStyle.CssClass = "s4-itm-selected";     this.RowStyle.Height = new Unit(25); }   Then, as you can’t override the Columns property setter, a custom method to add the column and set the style…   And that should be enough to get a nicely styled grid without the need for the SPGridView, but seriously….get the SPGridView in the SandBox!!!   Technorati Tags: Sharepoint 2010,SPGridView,Sandbox Solutions,Sandbox

    Read the article

  • Detecting Process Shutdown/Startup Events through ActivationAgents

    - by Ramkumar Menon
    @10g - This post is motivated by one of my close friends and colleague - who wanted to proactively know when a BPEL process shuts down/re-activates. This typically happens when you have a BPEL Process that has an inbound polling adapter, when the adapter loses connectivity to the source system. Or whatever causes it. One valuable suggestion came in from one of my colleagues - he suggested I write my own ActivationAgent to do the job. Well, it really worked. Here is a sample ActivationAgent that you can use. There are few methods you need to override from BaseActivationAgent, and you are on your way to receiving notifications/what not, whenever the shutdown/startup events occur. In the example below, I am retrieving the emailAddress property [that is specified in your bpel.xml activationAgent section] and use that to send out an email notification on the activation agent initialization. You could choose to do different things. But bottomline is that you can use the below-mentioned API to access the very same properties that you specify in the bpel.xml. package com.adapter.custom.activation; import com.collaxa.cube.activation.BaseActivationAgent; import com.collaxa.cube.engine.ICubeContext; import com.oracle.bpel.client.BPELProcessId; import java.util.Date; import java.util.Properties; public class LifecycleManagerActivationAgent extends BaseActivationAgent { public BPELProcessId getBPELProcessId() { return super.getBPELProcessId(); } private void handleInit() throws Exception { //Write initialization code here System.err.println("Entered initialization code...."); //e.g. String emailAddress = getActivationAgentDescriptor().getPropertyValue(emailAddress); //send an email sendEmail(emailAddress); } private void handleLoad() throws Exception { //Write load code here System.err.println("Entered load code...."); } private void handleUnload() throws Exception { //Write unload code here System.err.println("Entered unload code...."); } private void handleUninit() throws Exception { //Write uninitialization code here System.err.println("Entered uninitialization code...."); } public void init(ICubeContext icubecontext) throws Exception { super.init(icubecontext); System.err.println("Initializing LifecycleManager Activation Agent ....."); handleInit(); } public void unload(ICubeContext icubecontext) throws Exception { super.unload(icubecontext); System.err.println("Unloading LifecycleManager Activation Agent ....."); handleUnload(); } public void uninit(ICubeContext icubecontext) throws Exception{ super.uninit(icubecontext); System.err.println("Uninitializing LifecycleManager Activation Agent ....."); handleUninit(); } public String getName() { return "Lifecyclemanageractivationagent"; } public void onStateChanged(int i, ICubeContext iCubeContext) { } public void onLifeCycleChanged(int i, ICubeContext iCubeContext) { } public void onUndeployed(ICubeContext iCubeContext) { } public void onServerShutdown() { } } Once you compile this code, generate a jar file and ensure you add it to the server startup classpath. The library is ready for use after the server restarts. To use this activationAgent, add an additional activationAgent entry in the bpel.xml for the BPEL Process that you wish to monitor. After you deploy the process, the ActivationAgent object will be called back whenever the events mentioned in the overridden methods are raised. [init(), load(), unload(), uninit()]. Subsequently, your custom code is executed. Sample bpel.xml illustrating activationAgent definition and property definition. 
    <?xml version="1.0" encoding="UTF-8"?>
    <BPELSuitcase timestamp="1291943469921" revision="1.0">
      <BPELProcess wsdlPort="{http://xmlns.oracle.com/BPELTest}BPELTestPort" src="BPELTest.bpel" wsdlService="{http://xmlns.oracle.com/BPELTest}BPELTest" id="BPELTest">
        <partnerLinkBindings>
          <partnerLinkBinding name="client">
            <property name="wsdlLocation">BPELTest.wsdl</property>
          </partnerLinkBinding>
          <partnerLinkBinding name="test">
            <property name="wsdlLocation">test.wsdl</property>
          </partnerLinkBinding>
        </partnerLinkBindings>
        <activationAgents>
          <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="test">
            <property name="portType">Read_ptt</property>
          </activationAgent>
          <activationAgent className="com.oracle.bpel.activation.LifecycleManagerActivationAgent" partnerLink="test">
            <property name="emailAddress">[email protected]</property>
          </activationAgent>
        </activationAgents>
      </BPELProcess>
    </BPELSuitcase>
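    The sendEmail() helper invoked from handleInit() above is referenced but not shown in the post. A minimal sketch of what it could look like using the standard JavaMail API is below; the SMTP host and the sender address are placeholders for this sketch, not values from the original article. The imports go at the top of the activation agent class.

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    // Sends a plain-text notification to the address read from the bpel.xml property.
    private void sendEmail(String emailAddress) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // assumed SMTP relay
        Session session = Session.getInstance(props);

        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress("bpel-monitor@example.com")); // assumed sender address
        message.setRecipient(Message.RecipientType.TO, new InternetAddress(emailAddress));
        message.setSubject("BPEL process activation agent event");
        message.setText("The activation agent for this BPEL process was initialized at " + new java.util.Date());
        Transport.send(message);
    }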

    Read the article

  • Copy TFS Build Definitions between Projects and Collections

    - by Jakob Ehn
    Originally posted on: http://geekswithblogs.net/jakob/archive/2014/06/05/copy-tfs-build-definitions-between-projects-and-collections.aspx The last couple of years it has become apparent that using multiple team projects in TFS is generally a bad idea. There are of course exceptions to this, but there are a lot of things that become much easier to do when you put all of your projects and teams in the same team project. Fellow ALM MVP Martin Hinshelwood has blogged about this several times, as well as other people in the community. In particular, using the backlog and portfolio management tools makes much more sense when everything is located in the same team project. Consolidating multiple team projects into one is not that easy unfortunately; it involves migrating source code, work items, reports etc.  Another thing that also needs to be migrated is build definitions. It is possible to clone build definitions within the same team project using the TFS power tools. The Community TFS Build Manager also lets you clone build definitions to other team projects. But there is no tool that allows you to clone/copy a build definition to another collection. So, I whipped up a simple console application that lets you do this. The tool can be downloaded from https://onedrive.live.com/redir?resid=EE034C9F620CD58D!8162&authkey=!ACTr56v1QVowzuE&ithint=file%2c.zip   Using CopyTFSBuildDefinitions You use the tool like this: CopyTFSBuildDefinitions  SourceCollectionUrl  SourceTeamProject  BuildDefinitionName  DestinationCollectionUrl  DestinationTeamProject [NewDefinitionName] Arguments SourceCollectionUrl The URL to the TFS collection that contains the team project with the build definition that you want to copy SourceTeamProject The name of the team project that contains the build definition BuildDefinitionName Name of the build definition DestinationCollectionUrl The URL to the TFS collection that contains the team project that you want to copy your build definition to DestinationTeamProject The name of the team project in the destination collection NewDefinitionName (Optional) Use this to override the name of the new build definition. If you don't specify this, the name will be the same as the original one. Example: CopyTFSBuildDefinitions  https://jakob.visualstudio.com DemoProject  WebApplication.CI https://anotheraccount.visualstudio.com     Notes Since we are (potentially) creating a build definition in a new collection, there is no guarantee that the various paths that are defined in the build definition exist in the new collection. For example, a build definition refers to server paths in TFVC or repos + branches in TFGit. It also refers to build controllers that definitely don't exist in the new collection. So there will be some cleanup to do after you copy your build definitions. You can fix some of these using the Community TFS Build Manager; for example, it is very easy to apply the correct build controller to a set of build definitions. The problem stated above also applies to build process templates. However, the tool tries to find a build process template in the new team project with the same file name as the one that existed in the old team project. If it finds one, it will be used for the new build definition. Otherwise it will use the default build template. If you want to run the tool for many build definitions, you can use this SQL script, compliments of Mr.
Scrum/ALM MVP Richard Hundhausen to generate the necessary commands: USE Tfs_Collection GO SELECT 'CopyTFSBuildDefinitions.exe http://SERVER:8080/tfs/collection "' + P.ProjectName + '" "' + REPLACE(BD.DefinitionName,'\','') + '" http://NEWSERVER:8080/tfs/COLLECTION TEAMPROJECT'   FROM tbl_Project P        INNER JOIN tbl_BuildGroup BG on BG.TeamProject = P.ProjectUri        INNER JOIN tbl_BuildDefinition BD on BD.GroupId = BG.GroupId   ORDER BY P.ProjectName, BD.DefinitionName   Hope that helps, let me know if you have any problems with the tool or if you find it useful

    Read the article

  • MvcExtensions - ActionFilter

    - by kazimanzurrashid
    One of the things that people often complain about is dependency injection in action filters. Since the standard way of applying action filters is to decorate either the Controller or the Action methods, there is no way to inject dependencies into the action filter constructors. There are quite a few posts on this subject which show property injection with a custom action invoker, but all of them suffer from the same small bug (you will find that BuildUp is called more than once if the filter implements multiple interfaces, e.g. both IActionFilter and IResultFilter). MvcExtensions supports both property injection and a fluent filter configuration API.

    There are a number of benefits of this fluent filter configuration API over the regular attribute-based filter decoration. You can pass your dependencies in the constructor rather than through properties. Let's say you want to create an action filter which will update the user's last activity date; you can create a filter like the following:

        public class UpdateUserLastActivityAttribute : FilterAttribute, IResultFilter
        {
            public UpdateUserLastActivityAttribute(IUserService userService)
            {
                Check.Argument.IsNotNull(userService, "userService");
                UserService = userService;
            }

            public IUserService UserService { get; private set; }

            public void OnResultExecuting(ResultExecutingContext filterContext)
            {
                // Do nothing, just sleep.
            }

            public void OnResultExecuted(ResultExecutedContext filterContext)
            {
                Check.Argument.IsNotNull(filterContext, "filterContext");

                string userName = filterContext.HttpContext.User.Identity.IsAuthenticated ?
                                  filterContext.HttpContext.User.Identity.Name :
                                  null;

                if (!string.IsNullOrEmpty(userName))
                {
                    UserService.UpdateLastActivity(userName);
                }
            }
        }

    As you can see, it is no different from a regular filter except that we are passing the dependency in the constructor. Next, we have to configure for which Controller/Action methods this filter will execute:

        public class ConfigureFilters : ConfigureFiltersBase
        {
            protected override void Configure(IFilterRegistry registry)
            {
                registry.Register<HomeController, UpdateUserLastActivityAttribute>();
            }
        }

    You can register more than one filter for the same Controller/Action methods:

        registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>();

    You can register the filters for a specific action method instead of the whole controller:

        registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(c => c.Index());

    You can even set various properties of the filter:

        registry.Register<ControlPanelController, CustomAuthorizeAttribute>(
            attribute => { attribute.AllowedRole = Role.Administrator; });

    The fluent filter registration also reduces the number of base controllers in your application. It is very common to create a base controller, decorate it with action filters, and then create concrete controller(s) so that the base controller's action filters are also executed in the concrete controllers. You can do the same with a single-line statement using the fluent filter registration:

    Registering the filters for all controllers:

        registry.Register<ElmahHandleErrorAttribute>(
            new TypeCatalogBuilder()
                .Add(GetType().Assembly)
                .Include(type => typeof(Controller).IsAssignableFrom(type)));

    Registering filters for selected controllers:

        registry.Register<ElmahHandleErrorAttribute>(
            new TypeCatalogBuilder()
                .Add(GetType().Assembly)
                .Include(type => typeof(Controller).IsAssignableFrom(type) &&
                                 (type.Name.StartsWith("Home") || type.Name.StartsWith("Post"))));

    You can also use the built-in filters in the fluent registration, for example:

        registry.Register<HomeController, OutputCacheAttribute>(attribute => { attribute.Duration = 60; });

    With the fluent filter configuration you can even apply filters to controllers whose source code is not available to you (perhaps the controller is part of a third-party component). That's it for today; in the next post we will discuss the model binding support in MvcExtensions. So stay tuned.
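    To see how these pieces fit together, here is a sketch of a single ConfigureFilters class that simply combines the registrations shown above. It reuses only the controllers, attributes, and calls from this post and assumes those types already exist in your project:

        public class ConfigureFilters : ConfigureFiltersBase
        {
            protected override void Configure(IFilterRegistry registry)
            {
                // Constructor-injected filter applied to every action of HomeController.
                registry.Register<HomeController, UpdateUserLastActivityAttribute>();

                // Built-in filter with a property configured at registration time.
                registry.Register<HomeController, OutputCacheAttribute>(attribute => { attribute.Duration = 60; });

                // Filter with a property set, applied to ControlPanelController.
                registry.Register<ControlPanelController, CustomAuthorizeAttribute>(
                    attribute => { attribute.AllowedRole = Role.Administrator; });

                // Error filter registered for every controller in this assembly.
                registry.Register<ElmahHandleErrorAttribute>(
                    new TypeCatalogBuilder()
                        .Add(GetType().Assembly)
                        .Include(type => typeof(Controller).IsAssignableFrom(type)));
            }
        }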

    Read the article

  • All libGDX input statements are returning TRUE at once

    - by MowDownJoe
    I'm fooling around with Box2D and libGDX and running into a peculiar problem with polling for input. Here's the code for the Screen's render() loop:

        @Override
        public void render(float delta) {
            Gdx.gl20.glClearColor(0, 0, .2f, 1);
            Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
            camera.update();
            game.batch.setProjectionMatrix(camera.combined);
            debugRenderer.render(world, camera.combined);
            if (Gdx.input.isButtonPressed(Keys.LEFT)) {
                Gdx.app.log("Input", "Left is being pressed.");
                pushyThingyBody.applyForceToCenter(-10f, 0);
            }
            if (Gdx.input.isButtonPressed(Keys.RIGHT)) {
                Gdx.app.log("Input", "Right is being pressed.");
                pushyThingyBody.applyForceToCenter(10f, 0);
            }
            world.step((1f/45f), 6, 2);
        }

    And the constructor is largely just setting up the World, Box2DDebugRenderer, and all the Bodies in the world:

        public SandBox(PhysicsSandboxGame game) {
            this.game = game;
            camera = new OrthographicCamera(800, 480);
            camera.setToOrtho(false);
            world = new World(new Vector2(0, -9.8f), true);
            debugRenderer = new Box2DDebugRenderer();

            BodyDef bodyDef = new BodyDef();
            bodyDef.type = BodyType.DynamicBody;
            bodyDef.position.set(100, 300);
            body = world.createBody(bodyDef);
            CircleShape circle = new CircleShape();
            circle.setRadius(6f);
            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = circle;
            fixtureDef.density = .5f;
            fixtureDef.friction = .4f;
            fixtureDef.restitution = .6f;
            fixture = body.createFixture(fixtureDef);
            circle.dispose();

            BodyDef groundBodyDef = new BodyDef();
            groundBodyDef.position.set(new Vector2(0, 10));
            groundBody = world.createBody(groundBodyDef);
            PolygonShape groundBox = new PolygonShape();
            groundBox.setAsBox(camera.viewportWidth, 10f);
            groundBody.createFixture(groundBox, 0f);
            groundBox.dispose();

            BodyDef pushyThingyBodyDef = new BodyDef();
            pushyThingyBodyDef.type = BodyType.DynamicBody;
            pushyThingyBodyDef.position.set(new Vector2(400, 30));
            pushyThingyBody = world.createBody(pushyThingyBodyDef);
            PolygonShape pushyThingyShape = new PolygonShape();
            pushyThingyShape.setAsBox(40f, 10f);
            FixtureDef pushyThingyFixtureDef = new FixtureDef();
            pushyThingyFixtureDef.shape = pushyThingyShape;
            pushyThingyFixtureDef.density = .4f;
            pushyThingyFixtureDef.friction = .1f;
            pushyThingyFixtureDef.restitution = .5f;
            pushyFixture = pushyThingyBody.createFixture(pushyThingyFixtureDef);
            pushyThingyShape.dispose();
        }

    I'm testing this on the desktop. Basically, whenever I hit the appropriate keys, neither of the if statements in the loop returns true. However, when I click in the window, both statements return true, resulting in a net force of zero on the body. Why is this?
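    For reference, libGDX exposes Gdx.input.isButtonPressed for mouse buttons (the Input.Buttons constants) and Gdx.input.isKeyPressed for polling keyboard keys. A minimal keyboard-polling sketch, reusing the same body and force values as the question's code, would look like this:

        // Sketch: poll the keyboard (rather than mouse buttons) each frame.
        if (Gdx.input.isKeyPressed(Input.Keys.LEFT)) {
            pushyThingyBody.applyForceToCenter(-10f, 0);
        }
        if (Gdx.input.isKeyPressed(Input.Keys.RIGHT)) {
            pushyThingyBody.applyForceToCenter(10f, 0);
        }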

    Read the article

  • Getting input from keyboard

    - by SAMIR BHOGAYTA
    When you type on the keyboard, the keystrokes go to a particular application: the active application, which receives the input from the keyboard. This means the application has input focus.

    There are two events for a key on the keyboard: when the key is pressed and when it is released. It is not a single event, as you might expect if you have no prior programming experience. In shooter games, for example, while you keep the forward key pressed (KeyDown) the player moves forward, and when you release it (KeyUp) the player stays put. The event that occurs when the key is pressed is called KeyPress. It occurs between KeyDown and KeyUp, and therefore acts similarly to KeyDown.

    Similar to the way we handle OnPaint and other events, we handle the OnKeyDown event by overriding it (because we want to react when the key is pressed, not when it is released). Try the code below and test it; you will understand the role of each property.

        protected override void OnKeyDown(KeyEventArgs keyEvent)
        {
            // Gets the key code
            lblKeyCode.Text = "KeyCode: " + keyEvent.KeyCode.ToString();
            // Gets the key data; recognizes combinations of keys
            lblKeyData.Text = "KeyData: " + keyEvent.KeyData.ToString();
            // Integer representation of KeyData
            lblKeyValue.Text = "KeyValue: " + keyEvent.KeyValue.ToString();
            // Returns true if Alt is pressed
            lblAlt.Text = "Alt: " + keyEvent.Alt.ToString();
            // Returns true if Ctrl is pressed
            lblCtrl.Text = "Ctrl: " + keyEvent.Control.ToString();
            // Returns true if Shift is pressed
            lblShift.Text = "Shift: " + keyEvent.Shift.ToString();
        }

    How do I find out when the user presses a specific key? As you probably imagine, this is easily accomplished with an 'if':

        if (keyEvent.KeyCode == Keys.A)
        {
            MessageBox.Show("'A' was pressed.");
        }

    Most beginners would probably be tempted to write if (keyEvent.KeyCode == "A"), which is definitely incorrect because we can't compare System.Windows.Forms.Keys to a string. Also note that in the example we are using 'keyEvent.KeyCode'; that means that even if other modifier keys (Alt, Ctrl, Shift, Windows...) are pressed simultaneously with A, the if condition still returns true, because KeyCode doesn't recognize key combinations. If we want to ignore key combinations (Alt+A, Ctrl+Shift+A, etc.), we need to use 'keyEvent.KeyData' instead:

        if (keyEvent.KeyData == Keys.A)
        {
            MessageBox.Show("'A', and only A, was pressed.");
        }

    When you right-click a file in Windows Explorer while the Shift key is pressed, you get the additional 'Open with...' item in the menu. This is one of many cases where you need to use the mouse together with the keyboard. The following code changes the background color of the form only if the form is clicked while the Ctrl key is pressed; if the form is clicked while Ctrl is not pressed, nothing happens.

        private void Form1_Click(object sender, System.EventArgs e)
        {
            Keys modKey = Control.ModifierKeys;

            if (modKey == Keys.Control)
            {
                this.BackColor = Color.Yellow;
            }
        }

    If you have further questions, feel free to ask them, and also check the following pages on MSDN: KeyUp Event, KeyPress Event, KeyDown Event.
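    Building on the KeyData comparison above, a specific combination such as Ctrl+S can be matched by comparing KeyData against the combined flags. A small sketch (the message text is only for illustration):

        protected override void OnKeyDown(KeyEventArgs keyEvent)
        {
            // KeyData carries the key together with its modifiers, so a
            // combination like Ctrl+S can be matched with one comparison.
            if (keyEvent.KeyData == (Keys.Control | Keys.S))
            {
                MessageBox.Show("Ctrl+S was pressed.");
            }

            base.OnKeyDown(keyEvent);
        }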

    Read the article
