Search Results

Search found 21111 results on 845 pages for 'null pointer'.


  • Android - doInBackground() error in AsyncTask

    - by AimanB
    What my app basically does is capture a photo or import one from the gallery, and when the Upload button is pressed the image is uploaded to a localhost server. Before I implemented AsyncTask in the process, it didn't have any problem uploading whatsoever. Now that I've put in AsyncTask, everything went wrong. I don't know which part I did wrong in this phase. This is what logcat shows when I try to upload an image file:

    10-28 17:23:25.989: E/AndroidRuntime(3356): FATAL EXCEPTION: AsyncTask #5
    10-28 17:23:25.989: E/AndroidRuntime(3356): java.lang.RuntimeException: An error occured while executing doInBackground()
    10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$3.done(AsyncTask.java:299)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.setException(FutureTask.java:219)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.run(FutureTask.java:239)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at java.lang.Thread.run(Thread.java:856)
    10-28 17:23:25.989: E/AndroidRuntime(3356): Caused by: java.lang.RuntimeException: Can't create handler inside thread that has not called Looper.prepare()
    10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.Handler.<init>(Handler.java:197)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.Handler.<init>(Handler.java:111)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast$TN.<init>(Toast.java:324)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast.<init>(Toast.java:91)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at android.widget.Toast.makeText(Toast.java:238)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at com.aiman.webshopper.UploadImageActivity$1execMultiPostAsync.doInBackground(UploadImageActivity.java:268)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at com.aiman.webshopper.UploadImageActivity$1execMultiPostAsync.doInBackground(UploadImageActivity.java:1)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at android.os.AsyncTask$2.call(AsyncTask.java:287)
    10-28 17:23:25.989: E/AndroidRuntime(3356): at java.util.concurrent.FutureTask.run(FutureTask.java:234)

    This is my code for the Upload activity:

    public class UploadImageActivity extends Activity implements OnItemSelectedListener {
        InputStream inputStream;
        private ImageView imageView;
        String the_string_response;
        private static final int SELECT_PICTURE = 0;
        private static final int CAMERA_REQUEST = 1888;
        private static final String SERVER_UPLOAD_URI = "...myserver.php";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_upload_image);
            imageView = (ImageView) findViewById(R.id.imgUpload);
        }

        public void capturePhoto(View view) {
            Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
            File f = new File(android.os.Environment.getExternalStorageDirectory(), "temp.jpg");
            cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(f));
            startActivityForResult(cameraIntent, CAMERA_REQUEST);
        }

        public void pickPhoto(View view) {
            // TODO: launch the photo picker
            Intent intent = new Intent();
            intent.setType("image/*");
            intent.setAction(Intent.ACTION_GET_CONTENT);
            startActivityForResult(Intent.createChooser(intent, "Select Picture"), SELECT_PICTURE);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
                File f = new File(Environment.getExternalStorageDirectory().toString());
                for (File temp : f.listFiles()) {
                    if (temp.getName().equals("temp.jpg")) {
                        f = temp;
                        break;
                    }
                }
                try {
                    BitmapFactory.Options bitmapOptions = new BitmapFactory.Options();
                    Bitmap bitmap = BitmapFactory.decodeFile(f.getAbsolutePath(), bitmapOptions);
                    imageView.setImageBitmap(bitmap);
                    String path = android.os.Environment.getExternalStorageDirectory()
                            + File.separator + "Phoenix" + File.separator + "default";
                    f.delete();
                    OutputStream outFile = null;
                    File file = new File(path, String.valueOf(System.currentTimeMillis()) + ".jpg");
                    try {
                        outFile = new FileOutputStream(file);
                        bitmap.compress(Bitmap.CompressFormat.JPEG, 85, outFile);
                        outFile.flush();
                        outFile.close();
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            if (requestCode == SELECT_PICTURE && resultCode == RESULT_OK) {
                Bitmap bitmap = getPath(data.getData());
                imageView.setImageBitmap(bitmap);
            }
        }

        private Bitmap getPath(Uri uri) {
            String[] projection = { MediaStore.Images.Media.DATA };
            Cursor cursor = getContentResolver().query(uri, projection, null, null, null);
            int column_index = cursor.getColumnIndexOrThrow(projection[0]);
            cursor.moveToFirst();
            String filePath = cursor.getString(column_index);
            cursor.close();
            // Convert the file path into a bitmap image using the line below.
            Bitmap bitmap = BitmapFactory.decodeFile(filePath);
            return bitmap;
        }

        public void uploadPhoto(View view) {
            try {
                executeMultipartPost();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public void executeMultipartPost() throws Exception {
            class execMultiPostAsync extends AsyncTask<String, Void, String> {
                @Override
                protected String doInBackground(String... params) {
                    // Choose image here
                    BitmapDrawable drawable = (BitmapDrawable) imageView.getDrawable();
                    Bitmap bitmap = drawable.getBitmap();
                    ByteArrayOutputStream stream = new ByteArrayOutputStream();
                    bitmap.compress(Bitmap.CompressFormat.JPEG, 50, stream); // compress to which format you want
                    byte[] byte_arr = stream.toByteArray();
                    String image_str = Base64.encodeBytes(byte_arr);
                    ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>();
                    nameValuePairs.add(new BasicNameValuePair("image", image_str));
                    try {
                        HttpClient httpclient = new DefaultHttpClient();
                        /* HttpPost(parameter): Server URI */
                        HttpPost httppost = new HttpPost(SERVER_UPLOAD_URI);
                        httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                        HttpResponse response = httpclient.execute(httppost);
                        the_string_response = convertResponseToString(response);
                    } catch (Exception e) {
                        Toast.makeText(UploadImageActivity.this, "ERROR " + e.getMessage(), Toast.LENGTH_LONG).show();
                        System.out.println("Error in http connection " + e.toString());
                    }
                    return the_string_response;
                }

                @Override
                protected void onPostExecute(String result) {
                    super.onPostExecute(result);
                    Toast.makeText(UploadImageActivity.this, "Response " + result, Toast.LENGTH_LONG).show();
                }

                public String convertResponseToString(HttpResponse response) throws IllegalStateException, IOException {
                    String res = "";
                    StringBuffer buffer = new StringBuffer();
                    inputStream = response.getEntity().getContent();
                    int contentLength = (int) response.getEntity().getContentLength(); // getting content length
                    Toast.makeText(UploadImageActivity.this, "contentLength : " + contentLength, Toast.LENGTH_LONG).show();
                    if (contentLength < 0) {
                    } else {
                        byte[] data = new byte[512];
                        int len = 0;
                        try {
                            while (-1 != (len = inputStream.read(data))) {
                                buffer.append(new String(data, 0, len)); // convert to string and append to the stringbuffer
                            }
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                        try {
                            inputStream.close(); // closing the stream
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                        res = buffer.toString(); // converting stringbuffer to string
                        Toast.makeText(UploadImageActivity.this, "Result : " + res, Toast.LENGTH_LONG).show();
                        // System.out.println("Response => " + EntityUtils.toString(response.getEntity()));
                    }
                    return res;
                }
            }

            execMultiPostAsync exec = new execMultiPostAsync();
            exec.execute();
        }
    }

    Can someone please check whether I set up the AsyncTask correctly in this activity? I think I've made a mistake somewhere.
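    The stack trace above already names the culprit: Toast.makeText() is called inside doInBackground() (in the catch block and again in convertResponseToString(), which runs there too), but a Toast can only be created on a thread that has a Looper, i.e. the UI thread. Below is a minimal sketch of the usual fix, keeping the names from the question and eliding the unchanged HTTP code: capture the error in the background and show all Toasts from onPostExecute(), which runs on the UI thread.

    class execMultiPostAsync extends AsyncTask<String, Void, String> {
        private Exception error; // captured in the background, shown on the UI thread

        @Override
        protected String doInBackground(String... params) {
            try {
                // ...build the HttpPost and execute it exactly as before...
                return the_string_response;
            } catch (Exception e) {
                error = e; // no Toast here - this worker thread has no Looper
                return null;
            }
        }

        @Override
        protected void onPostExecute(String result) {
            // onPostExecute runs on the UI thread, so Toast is safe here
            if (error != null) {
                Toast.makeText(UploadImageActivity.this,
                        "ERROR " + error.getMessage(), Toast.LENGTH_LONG).show();
            } else {
                Toast.makeText(UploadImageActivity.this,
                        "Response " + result, Toast.LENGTH_LONG).show();
            }
        }
    }

    The Toasts inside convertResponseToString() have to move out the same way; for example, return the data and use Log.d() for the diagnostic messages while still in the background.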


  • Create a class that inherits DrawableGameComponent in XNA as a CLASS with custom functions

    - by user3675013
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Media;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Content;

    namespace TileEngine {
        class Renderer : DrawableGameComponent {
            public Renderer(Game game) : base(game) { }

            SpriteBatch spriteBatch;

            protected override void LoadContent() {
                base.LoadContent();
            }

            public override void Draw(GameTime gameTime) {
                base.Draw(gameTime);
            }

            public override void Update(GameTime gameTime) {
                base.Update(gameTime);
            }

            public override void Initialize() {
                base.Initialize();
            }

            public RenderTarget2D new_texture(int width, int height) {
                Texture2D TEX = new Texture2D(GraphicsDevice, width, height); // create the texture to render to
                RenderTarget2D Mine = new RenderTarget2D(GraphicsDevice, width, height);
                GraphicsDevice.SetRenderTarget(Mine); // set the render device to the reference provided
                // maybe base.Draw can be used with spritebatch. Idk. We'll see if the order of operation
                // works out. Wish I could call base.Draw here.
                return Mine; // I'm hoping that this returns the same instance and not a copy.
            }

            public void draw_texture(int width, int height, RenderTarget2D Mine) {
                GraphicsDevice.SetRenderTarget(null); // set the renderer to render to the backbuffer again
                Rectangle drawrect = new Rectangle(0, 0, width, height); // set the rendering size to what we want
                spriteBatch.Begin(); // this uses spritebatch to draw the texture directly to the screen
                spriteBatch.Draw(Mine, drawrect, Color.White); // this uses the color white
                spriteBatch.End(); // ends the spritebatch
                // Call base.Draw after this since it doesn't seem to recognize it inside the function.
                // Maybe base.Draw can be used with spritebatch. Idk. We'll see if the order of operation
                // works out. Wish I could call base.Draw here.
            }
        }
    }

    I solved a previous issue where I wasn't allowed to access GraphicsDevice outside the default 'main' class, i.e. "Game" or "Game1" etc. Now I have a new issue. FYI, no one told me that it would be possible to keep the GraphicsDevice reference from being null by using the drawable class (hopefully after this last bug is solved it doesn't still return null). Anyway, at present the problem is that I can't seem to get it to initialize as an instance in my main program, i.e. Renderer tileClipping; and I'm unable to use it as it is. It should be noted that I haven't even gotten to testing the two steps below; it compiled before, but when those functions of this class were called it complained that it can't render to a null device, which meant that the device wasn't being initialized. I had no idea why. It took me hours to google this; I finally figured out the words I needed, which were "do my rendering in XNA in a separate class". Now, I haven't used the AddComponent function because I don't want it to only run these functions automatically; I want to be able to call the custom ones. In a nutshell, what I want is:

    * access to rendering targets and the graphics device OUTSIDE the default class
    * passing of RenderTarget2D (which contains a texture, and textures should automatically be passed with a rendering target?)
    * the device should be passed to this function as well, OR the device should be passed to this function as a byproduct of passing the render target (which is automatically associated with the render device it was given originally)
    * I'm assuming I'm dealing with abstracted pointers here, so when I pass a class object or instance, I should be receiving the SAME object I referenced, and not a copy that has only the lifespan of the function running.
    * the purpose for all these options: I want to initialize new 2D textures on the fly to customize tile clipping, and even the X, Y offsets of where a WHOLE texture will be rendered, and the X and Y offsets of where tiles will be rendered ON that surface. This is why. And I'll be doing region-based lighting effects per tile, or maybe even per 8x8-pixel space; we'll see. I'll also be doing sprite rotations on the whole texture, then copying it again to a circular masked texture, and then doing a second copy of only the solid tiles for masked, rotated collisions on sprites. I'll be checking the masked pixels for my collision, and possibly using raycasting to check for collisions in those areas. The sprite will stay in the center when this rotation happens. Here is a detailed diagram: http://i.stack.imgur.com/INf9K.gif

    I'll be using Texture2D for steps 4-6, and I suppose for step 1 as well. On top of that, the clipping size (i.e. the square rendered) will be able to be shrunk or increased on a per-frame basis, so I can't use the same static size for my main Texture2D, and I can't use just the backbuffer, or we get the annoying flicker. Also, I will have multiple instances of the Renderer class so that I can freely pass textures around as if they were playing cards (in a sense), layering them on top of each other, cropping them how I want and such, and then using SpriteBatch to simply draw them at the locations I want. Hopefully this makes sense. Yes, I am planning on using alpha blending, but only after all tiles have been drawn. The masked collision is important, and yes, I am avoiding doing math on the tile rendering and instead resorting to image manipulation in video memory, which is WHY I need this to work the way I'm intending it to work and not in the default way that XNA seems to handle graphics. Thanks to anyone willing to help.

    I hate the game-component form offered, because then I have to rely on the static presence of an update function. What if I want to kill that update function or that object, but keep it in memory, just temporarily inactive? I'm making the assumption here that the update function of one of these game components runs automatically? Anyway, this is as detailed as I can make this post; hopefully someone can help me solve the issue instead of telling me "derrr don't do it this wayyy", which is what a few people told me (but they don't understand the actual goal I have in mind). I'm basically trying to create a library where I can copy images freely no matter the size; I just have to specify the size in the function, and then as long as a reference to that object exists it should be kept alive, right? :/ Anything else? I don't know. I understand object-oriented coding, but I don't understand XNA. It's beginning to feel impossible to do anything custom in it without putting ALL my rendering code into the draw function of the main class.

    tileClipping.new_texture(GraphicsDevice, width, height)
    tileClipping.draw_texture(...)
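    A hedged sketch of the two most likely null sources in the class above, assuming XNA 4.0 (an illustration, not a full solution): spriteBatch is declared but never constructed, and GraphicsDevice only becomes valid once the component has been added to Game.Components so that Initialize/LoadContent actually run.

    class Renderer : DrawableGameComponent
    {
        SpriteBatch spriteBatch;

        public Renderer(Game game) : base(game) { }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice); // was missing -> NullReferenceException in draw_texture
            base.LoadContent();
        }

        public RenderTarget2D new_texture(int width, int height)
        {
            RenderTarget2D target = new RenderTarget2D(GraphicsDevice, width, height);
            GraphicsDevice.SetRenderTarget(target);   // draw into the target...
            GraphicsDevice.Clear(Color.Transparent);
            // ...SpriteBatch calls here...
            GraphicsDevice.SetRenderTarget(null);     // ...then back to the backbuffer
            return target; // RenderTarget2D is a reference type: the SAME instance, not a copy
        }
    }

    // In Game1.Initialize():
    //   tileClipping = new Renderer(this);
    //   Components.Add(tileClipping);  // without this, GraphicsDevice stays null
    // To keep an instance in memory but temporarily inactive:
    //   tileClipping.Enabled = false;  // stops the automatic Update()
    //   tileClipping.Visible = false;  // stops the automatic Draw()
    // The custom methods can still be called directly whenever you want.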


  • Creating a dynamic proxy generator with c# – Part 4 – Calling the base method

    - by SeanMcAlinden
    Creating a dynamic proxy generator with c# – Part 1 – Creating the Assembly builder, Module builder and caching mechanism
    Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design
    Creating a dynamic proxy generator with c# – Part 3 – Creating the constructors

    The plan for calling the base methods from the proxy is to create a private method for each overridden proxy method; this will allow the proxy to use a delegate to simply invoke the private method when required. Quite a few helper classes have been created to make this possible, so as usual I would suggest downloading or viewing the code at http://rapidioc.codeplex.com/. In this post I'm just going to cover the main points for creating methods.

    Getting the methods to override

    The first two notable methods are for getting the methods.

    private static MethodInfo[] GetMethodsToOverride<TBase>() where TBase : class
    {
        return typeof(TBase).GetMethods().Where(x =>
            !methodsToIgnore.Contains(x.Name) &&
            (x.Attributes & MethodAttributes.Final) == 0)
            .ToArray();
    }

    private static StringCollection GetMethodsToIgnore()
    {
        return new StringCollection()
        {
            "ToString",
            "GetHashCode",
            "Equals",
            "GetType"
        };
    }

    The GetMethodsToIgnore method returns a string collection of methods that I don't want to override. In the GetMethodsToOverride method, you'll notice a binary AND which is basically saying not to include any methods marked final, i.e. not virtual.

    Creating the MethodInfo for calling the base method

    This method should hopefully be fairly easy to follow; its only function is to create a MethodInfo which points to the correct base method, with the correct parameters.

    private static MethodInfo CreateCallBaseMethodInfo<TBase>(MethodInfo method) where TBase : class
    {
        Type[] baseMethodParameterTypes = ParameterHelper.GetParameterTypes(method, method.GetParameters());

        return typeof(TBase).GetMethod(
            method.Name,
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
            null,
            baseMethodParameterTypes,
            null
        );
    }

    /// <summary>
    /// Get the parameter types.
    /// </summary>
    /// <param name="method">The method.</param>
    /// <param name="parameters">The parameters.</param>
    public static Type[] GetParameterTypes(MethodInfo method, ParameterInfo[] parameters)
    {
        Type[] parameterTypesList = Type.EmptyTypes;

        if (parameters.Length > 0)
        {
            parameterTypesList = CreateParametersList(parameters);
        }
        return parameterTypesList;
    }
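    To see the same lookup in isolation, here's a small self-contained sketch; BaseService is just a made-up type for illustration, not part of the Rapid IOC source. It resolves the base MethodInfo with the same GetMethod overload (name, binding flags, explicit parameter types) and applies the Final check from GetMethodsToOverride.

    using System;
    using System.Reflection;

    public class BaseService
    {
        public virtual string Format(string name, int count)
        {
            return name + ":" + count;
        }
    }

    class Program
    {
        static void Main()
        {
            Type[] parameterTypes = { typeof(string), typeof(int) };

            MethodInfo baseMethod = typeof(BaseService).GetMethod(
                "Format",
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic,
                null,             // default binder
                parameterTypes,
                null);            // no parameter modifiers

            // Prints: System.String Format(System.String, Int32)
            Console.WriteLine(baseMethod);

            // Prints: True - not marked Final, so the proxy is allowed to override it
            Console.WriteLine((baseMethod.Attributes & MethodAttributes.Final) == 0);
        }
    }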
    Creating the new private methods for calling the base method

    The following method outlines how I've created the private methods for calling the base class method.

    private static MethodBuilder CreateCallBaseMethodBuilder(TypeBuilder typeBuilder, MethodInfo method)
    {
        string callBaseSuffix = "GetBaseMethod";

        if (method.IsGenericMethod || method.IsGenericMethodDefinition)
        {
            return MethodHelper.SetUpGenericMethod
                (
                    typeBuilder,
                    method,
                    method.Name + callBaseSuffix,
                    MethodAttributes.Private | MethodAttributes.HideBySig
                );
        }
        else
        {
            return MethodHelper.SetupNonGenericMethod
                (
                    typeBuilder,
                    method,
                    method.Name + callBaseSuffix,
                    MethodAttributes.Private | MethodAttributes.HideBySig
                );
        }
    }

    CreateCallBaseMethodBuilder is the entry point for creating the call-base method. I've added a suffix to the base class's method name to keep it unique.

    Non Generic Methods

    Creating a non generic method is fairly simple:

    public static MethodBuilder SetupNonGenericMethod(
        TypeBuilder typeBuilder,
        MethodInfo method,
        string methodName,
        MethodAttributes methodAttributes)
    {
        ParameterInfo[] parameters = method.GetParameters();

        Type[] parameterTypes = ParameterHelper.GetParameterTypes(method, parameters);

        Type returnType = method.ReturnType;

        MethodBuilder methodBuilder = CreateMethodBuilder
            (
                typeBuilder,
                method,
                methodName,
                methodAttributes,
                parameterTypes,
                returnType
            );

        ParameterHelper.SetUpParameters(parameterTypes, parameters, methodBuilder);

        return methodBuilder;
    }

    private static MethodBuilder CreateMethodBuilder
    (
        TypeBuilder typeBuilder,
        MethodInfo method,
        string methodName,
        MethodAttributes methodAttributes,
        Type[] parameterTypes,
        Type returnType
    )
    {
        MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName, methodAttributes, returnType, parameterTypes);
        return methodBuilder;
    }

    As you can see, you simply have to declare a method builder, get the parameter types, and set the method attributes you want.

    Generic Methods

    Creating generic methods takes a little bit more work.
    /// <summary>
    /// Sets up generic method.
    /// </summary>
    /// <param name="typeBuilder">The type builder.</param>
    /// <param name="method">The method.</param>
    /// <param name="methodName">Name of the method.</param>
    /// <param name="methodAttributes">The method attributes.</param>
    public static MethodBuilder SetUpGenericMethod
        (
            TypeBuilder typeBuilder,
            MethodInfo method,
            string methodName,
            MethodAttributes methodAttributes
        )
    {
        ParameterInfo[] parameters = method.GetParameters();

        Type[] parameterTypes = ParameterHelper.GetParameterTypes(method, parameters);

        MethodBuilder methodBuilder = typeBuilder.DefineMethod(methodName,
            methodAttributes);

        Type[] genericArguments = method.GetGenericArguments();

        GenericTypeParameterBuilder[] genericTypeParameters =
            GetGenericTypeParameters(methodBuilder, genericArguments);

        ParameterHelper.SetUpParameterConstraints(parameterTypes, genericTypeParameters);

        SetUpReturnType(method, methodBuilder, genericTypeParameters);

        if (method.IsGenericMethod)
        {
            methodBuilder.MakeGenericMethod(genericArguments);
        }

        ParameterHelper.SetUpParameters(parameterTypes, parameters, methodBuilder);

        return methodBuilder;
    }

    private static GenericTypeParameterBuilder[] GetGenericTypeParameters
        (
            MethodBuilder methodBuilder,
            Type[] genericArguments
        )
    {
        return methodBuilder.DefineGenericParameters(GenericsHelper.GetArgumentNames(genericArguments));
    }

    private static void SetUpReturnType(MethodInfo method, MethodBuilder methodBuilder, GenericTypeParameterBuilder[] genericTypeParameters)
    {
        if (method.IsGenericMethodDefinition)
        {
            SetUpGenericDefinitionReturnType(method, methodBuilder, genericTypeParameters);
        }
        else
        {
            methodBuilder.SetReturnType(method.ReturnType);
        }
    }

    private static void SetUpGenericDefinitionReturnType(MethodInfo method, MethodBuilder methodBuilder, GenericTypeParameterBuilder[] genericTypeParameters)
    {
        if (method.ReturnType == null)
        {
            methodBuilder.SetReturnType(typeof(void));
        }
        else if (method.ReturnType.IsGenericType)
        {
            methodBuilder.SetReturnType(genericTypeParameters.Where
                (x => x.Name == method.ReturnType.Name).First());
        }
        else
        {
            methodBuilder.SetReturnType(method.ReturnType);
        }
    }

    Ok, there are a few helper methods missing; there is way too much code to put in this post, so take a look at the code at http://rapidioc.codeplex.com/ to follow it through completely. Basically though, when dealing with generics there is extra work to do in terms of:

    getting the generic argument types
    setting up any generic parameter constraints
    setting up the return type
    setting up the method as a generic

    All of the information is easy to get via reflection from the MethodInfo.
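    If you want to see the generic plumbing in isolation, the miniature below is a sketch of mine rather than Rapid IOC code; it assumes .NET Framework's Reflection.Emit and defines public T Echo<T>(T value) on a dynamic type using the same DefineGenericParameters / SetReturnType / SetParameters calls.

    using System;
    using System.Reflection;
    using System.Reflection.Emit;

    class GenericEmitDemo
    {
        static void Main()
        {
            // Build a throwaway in-memory assembly/module/type.
            AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
                new AssemblyName("GenericDemo"), AssemblyBuilderAccess.Run);
            ModuleBuilder module = assembly.DefineDynamicModule("Main");
            TypeBuilder type = module.DefineType("Echoer", TypeAttributes.Public);

            // Define: public T Echo<T>(T value) { return value; }
            MethodBuilder method = type.DefineMethod(
                "Echo", MethodAttributes.Public | MethodAttributes.HideBySig);
            GenericTypeParameterBuilder[] typeParams = method.DefineGenericParameters("T");
            method.SetReturnType(typeParams[0]);   // return type is the new T
            method.SetParameters(typeParams[0]);   // one parameter, also T

            ILGenerator il = method.GetILGenerator();
            il.Emit(OpCodes.Ldarg_1);              // load 'value' (arg 0 is the hidden 'this')
            il.Emit(OpCodes.Ret);                  // and return it unchanged

            object echoer = Activator.CreateInstance(type.CreateType());
            MethodInfo echo = echoer.GetType().GetMethod("Echo").MakeGenericMethod(typeof(string));
            Console.WriteLine(echo.Invoke(echoer, new object[] { "hello" })); // hello
        }
    }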
    Emitting the new private method

    Emitting the new private method is relatively simple, as its only function is calling the base method and returning a result if the return type is not void.

    ILGenerator il = privateMethodBuilder.GetILGenerator();

    EmitCallBaseMethod(method, callBaseMethod, il);

    private static void EmitCallBaseMethod(MethodInfo method, MethodInfo callBaseMethod, ILGenerator il)
    {
        int privateParameterCount = method.GetParameters().Length;

        il.Emit(OpCodes.Ldarg_0);

        if (privateParameterCount > 0)
        {
            for (int arg = 0; arg < privateParameterCount; arg++)
            {
                il.Emit(OpCodes.Ldarg_S, arg + 1);
            }
        }

        il.Emit(OpCodes.Call, callBaseMethod);

        il.Emit(OpCodes.Ret);
    }

    So in the main method-building method, an ILGenerator is created from the method builder. The ILGenerator performs the following actions:

    Load the class (this) onto the stack using the hidden argument Ldarg_0.
    Create an argument on the stack for each of the method parameters (starting at 1, because 0 is the hidden argument).
    Call the base method using the OpCodes.Call code and the MethodInfo we created earlier.
    Call return on the method.

    Conclusion

    Now that we have the private methods prepared for calling the base method, we have reached the end of the relatively easy part of the proxy building. Hopefully it hasn't been too hard to follow so far; there is a lot of code, so I haven't been able to post it all. Please check it out at http://rapidioc.codeplex.com/. The next section should be up fairly soon; it's going to cover creating the delegates for calling the private methods created in this post.

    Kind Regards,
    Sean.


  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc.. I also listed a module named “Wind (zh-CN)”, which is created by one of my friend, Jeff Zhao (zh-CN). Now I would like to use a separated post to introduce this module since I feel it brings a new async programming style in not only Node.js but JavaScript world. If you know or heard about the new feature in C# 5.0 called “async and await”, or you learnt F#, you will find the “Wind” brings the similar async programming experience in JavaScript. By using “Wind”, we can write async code that looks like the sync code. The callbacks, async stats and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s firstly back to my second post in this series. As I mentioned in that post, when we wanted to read some records from SQL Server we need to open the database connection, and then execute the query. In Node.js all IO operation are designed as async callback pattern which means when the operation was done, it will invoke a function which was taken from the last parameter. For example the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming if we need to copy some data from this database to another then we need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In the real project the logic would be more complicated. This means our application might be messed up and the business process will be fragged by many callback functions. I would like call this “Dense Callback Phobia”. This might be a challenge how to make code straightforward and easy to read, something like below. 
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 
1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. 
    By using Wind there will be almost no callbacks, and the code will be very easy to understand. Wind is currently still under active development and improvement. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear your problems, feedback, suggestions and comments. You can contact Jeff by
    - Email: [email protected]
    - Group: https://groups.google.com/d/forum/windjs
    - GitHub: https://github.com/JeffreyZhao/wind/issues

    Source code can be downloaded here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Image Preview in ASP.NET MVC

    - by imran_ku07
    Introduction:

    Previewing an image is a great way to improve the UI of your site. It is also always best to check the file type and size, and show a preview, before submitting the whole form. There are some ways to do this using simple JavaScript, but they do not work in all browsers (like FF3). In this article I will show you how to do this in an ASP.NET MVC application. You will also see how this works in the case of a nested form.

    Description:

    Create a new ASP.NET MVC project and then add a file upload and an image control into your View.

    <form id="form1" method="post" action="NerdDinner/ImagePreview/AjaxSubmit">
        <table>
            <tr>
                <td>
                    <input type="file" name="imageLoad1" id="imageLoad1" onchange="ChangeImage(this,'#imgThumbnail')" />
                </td>
            </tr>
            <tr>
                <td align="center">
                    <img src="images/TempImage.gif" id="imgThumbnail" height="200px" width="200px">
                </td>
            </tr>
        </table>
    </form>

    Note that here NerdDinner refers to the virtual directory name, ImagePreview is the controller and ImageLoad is the action name, which you will see shortly.

    I will use the most popular jQuery form plug-in, which turns a form into an AJAX form with very little code. Therefore you must get these files from the jQuery site and then add them to your page.

    <script src="NerdDinner/Scripts/jquery-1.3.2.js" type="text/javascript"></script>
    <script src="NerdDinner/Scripts/jquery.form.js" type="text/javascript"></script>

    Then add the JavaScript function.

    <script type="text/javascript">
    function ChangeImage(fileId, imageId) {
        $("#form1").ajaxSubmit({
            success: function(responseText) {
                var d = new Date();
                $(imageId)[0].src = "NerdDinner/ImagePreview/ImageLoad?a=" + d.getTime();
            }
        });
    }
    </script>

    This function simply submits the form named form1 asynchronously to ImagePreviewController's AjaxSubmit method and, after successfully receiving the response, sets the image src property to the ImageLoad action method. Here I am also adding a querystring, preventing the browser from serving a cached image.

    Now I will create a new controller named ImagePreviewController.

    public class ImagePreviewController : Controller
    {
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult AjaxSubmit(int? id)
        {
            Session["ContentLength"] = Request.Files[0].ContentLength;
            Session["ContentType"] = Request.Files[0].ContentType;
            byte[] b = new byte[Request.Files[0].ContentLength];
            Request.Files[0].InputStream.Read(b, 0, Request.Files[0].ContentLength);
            Session["ContentStream"] = b;
            return Content(Request.Files[0].ContentType + ";" + Request.Files[0].ContentLength);
        }

        public ActionResult ImageLoad(int? id)
        {
            byte[] b = (byte[])Session["ContentStream"];
            int length = (int)Session["ContentLength"];
            string type = (string)Session["ContentType"];
            Response.Buffer = true;
            Response.Charset = "";
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.ContentType = type;
            Response.BinaryWrite(b);
            Response.Flush();
            Session["ContentLength"] = null;
            Session["ContentType"] = null;
            Session["ContentStream"] = null;
            Response.End();
            return Content("");
        }
    }

    The AjaxSubmit action method saves the image in Session and returns the content type and content length in the response. The ImageLoad action method returns the contents of the image in the response, then clears these Session entries.
    Just run your application and see the effect.

    Checking the Size and Content Type of the File:

    You may have noticed that the AjaxSubmit action method returns both the content type and the content length. You can check both properties before submitting your complete form.

    $(myform).ajaxSubmit({
        success: function(responseText) {
            var contentType = responseText.substring(0, responseText.indexOf(';'));
            var contentLength = responseText.substring(responseText.indexOf(';') + 1);
            // Here you can do your validation
            var d = new Date();
            $(imageId)[0].src = "http://weblogs.asp.net/MoneypingAPP/ImagePreview/ImageLoad?a=" + d.getTime();
        }
    });

    Handling the Nested Form Case:

    The above code will work if you have only one form, but this is not always the case. You may have a form control which wraps all the controls, and you do not want to submit the whole form just to get a preview effect.

    In this case you need to create a dynamic form control using JavaScript, then add the file upload control to this form and submit the form asynchronously.

    function ChangeImage(fileId, imageId) {
        var myform = document.createElement("form");
        myform.action = "NerdDinner/ImagePreview/AjaxSubmit";
        myform.enctype = "multipart/form-data";
        myform.method = "post";
        var imageLoad = document.getElementById(fileId).cloneNode(true);
        myform.appendChild(imageLoad);
        document.body.appendChild(myform);
        $(myform).ajaxSubmit({
            success: function(responseText) {
                var contentType = responseText.substring(0, responseText.indexOf(';'));
                var contentLength = responseText.substring(responseText.indexOf(';') + 1);
                var d = new Date();
                $(imageId)[0].src = "http://weblogs.asp.net/MoneypingAPP/ImagePreview/ImageLoad?a=" + d.getTime();
                document.body.removeChild(myform);
            }
        });
    }

    You also need to append the form to the document in order to send the request, and remove it after receiving the response.
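    One more point, as a sketch rather than production code: client-side checks can be bypassed, so you may also want to validate the upload inside the AjaxSubmit action before caching anything in Session. The 1 MB limit and the "error;..." response format below are just illustrative conventions; the success callback can look for them before using the response.

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult AjaxSubmit(int? id)
    {
        if (Request.Files.Count == 0)
            return Content("error;no file was posted");

        var file = Request.Files[0];

        // Reject anything that isn't an image, or is over ~1 MB (arbitrary limit).
        if (!file.ContentType.StartsWith("image/"))
            return Content("error;not an image");
        if (file.ContentLength > 1024 * 1024)
            return Content("error;file too large");

        // ...continue exactly as in the original AjaxSubmit: buffer the bytes
        // into Session and return "contentType;contentLength".
        return Content(file.ContentType + ";" + file.ContentLength);
    }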

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property. While applying GZip and Deflate behavior is pretty easy, there are a few caveats that you have to watch out for, as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats, let's review GZip implementation for ASP.NET.

    ASP.NET GZip/Deflate Basics: Response filters are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which, if a filter is present, is also written through the filter stream's interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages, .NET includes the GZipStream and DeflateStream stream classes, which can be readily assigned to the Response.Filter property. With these two stream classes in place it's almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind Web Toolkit) I created two static utility methods - IsGZipSupported and GZipEncodePage - that check whether the client supports GZip encoding and then actually encode the current output (note that although the method includes 'Page' in its name, this code will work with any ASP.NET output):

        /// <summary>
        /// Determines if GZip is supported
        /// </summary>
        /// <returns></returns>
        public static bool IsGZipSupported()
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (!string.IsNullOrEmpty(AcceptEncoding) &&
                (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
                return true;
            return false;
        }

        /// <summary>
        /// Sets up the current page or handler to use GZip through a Response.Filter
        /// IMPORTANT:
        /// You have to call this method before any output is generated!
        /// </summary>
        public static void GZipEncodePage()
        {
            HttpResponse Response = HttpContext.Current.Response;
            if (IsGZipSupported())
            {
                string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
                if (AcceptEncoding.Contains("deflate"))
                {
                    Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "deflate");
                }
                else
                {
                    Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                              System.IO.Compression.CompressionMode.Compress);
                    Response.Headers.Remove("Content-Encoding");
                    Response.AppendHeader("Content-Encoding", "gzip");
                }
            }
        }

    As you can see, the actual assignment of the filter is as simple as:

        Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress);

    which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding, and that any pages cached in proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding).

    Using this utility function is now trivially easy. In any ASP.NET code that wants to compress its Response output you simply call:

        protected void Page_Load(object sender, EventArgs e)
        {
            WebUtils.GZipEncodePage();

            Entry = WebLogFactory.GetEntry();
            var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount,
                              "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries");
            if (entries == null)
                throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage);

            this.repEntries.DataSource = entries;
            this.repEntries.DataBind();
        }

    Here I use an ASP.NET page, but the above WebUtils.GZipEncodePage() method call will work in any ASP.NET application type, including HTTP handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation, output over a certain size is GZip encoded by default. The output generated is JSON or XML, and if the output is over 5k in size I apply WebUtils.GZipEncodePage():

        if (sbOutput.Length > GZIP_ENCODE_TRESHOLD)
            WebUtils.GZipEncodePage();

        Response.ContentType = ControlResources.STR_JsonContentType;
        HttpContext.Current.Response.Write(sbOutput.ToString());

    Ok, so you probably get the idea: encoding GZip/Deflate content is pretty easy.

    Hold on there Hoss - Watch your Caching: Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The first issue is that you need to deal with the fact that some clients don't support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic HTTP client accessing your content, GZip/Deflate support is by no means guaranteed. For example, WinInet HTTP clients don't support GZip out of the box - it has to be explicitly implemented. Other low-level HTTP clients on other platforms don't support GZip out of the box either. The problem is that your application, your Web server, and proxy servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or, worse, the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients*, which is also very hard to debug and fix once in production. You already saw the issue of proxy servers addressed in the GZipEncodePage() function:

        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");

    This ensures that any proxy servers also check the Content-Encoding HTTP header to cache their content - not just the URL. The same thing applies if you do output caching in your own ASP.NET code. If you generate output for GZip on an OutputCached page, the GZipped content will be cached (either by ASP.NET's cache or in some cases by the IIS kernel cache). But what if the next client doesn't support GZip? She'll get served a cached GZip page that won't decode, and she'll get a page full of garbage. Wholly undesirable.

    To fix this you need to add a custom OutputCache rule by way of the GetVaryByCustomString() HttpApplication method in your Global.asax file:

        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            // Override Caching for compression
            if (custom == "GZIP")
            {
                string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"];

                if (string.IsNullOrEmpty(acceptEncoding))
                    return "";
                else if (acceptEncoding.Contains("gzip"))
                    return "GZIP";
                else if (acceptEncoding.Contains("deflate"))
                    return "DEFLATE";

                return "";
            }

            return base.GetVaryByCustomString(context, custom);
        }

    In a page that uses output caching you then specify the following to use that custom rule:

        <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %>

    It's all Fun and Games until ASP.NET throws an Error: Ok, so you're up and running with GZip, you have your caching squared away, and the pages you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page. [Screenshot of the garbled browser output omitted from this excerpt.] Lovely, isn't it? What's happened here is that I have WebUtils.GZipEncodePage() applied to my page, but there's an error in the page. The error falls back to the ASP.NET error handler, and the error handler removes all existing output (good) and removes all the custom HTTP headers I've set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncodePage), the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here's what Fiddler says about the raw HTTP header output when an error occurs while GZip encoding was applied:

        HTTP/1.1 500 Internal Server Error
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Date: Sat, 30 Apr 2011 22:21:08 GMT
        Content-Length: 2138
        Connection: close

        ?`I?%&/m?{J?J??t??` ... binary output stripped here

    Notice: no Content-Encoding header, and that's why we're seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up, and in this case I've been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top:

        protected void Application_Error(object sender, EventArgs e)
        {
            // Remove any special filtering especially GZip filtering
            Response.Filter = null;
            …
        }

    And voila, I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for page-level errors handled in Page_Error or ASP.NET MVC error handling methods in a controller.

    Another, and possibly even better, solution is to check whether a filter is attached just before the headers are sent to the client, as pointed out by Adam Schroeder in the comments:

        protected void Application_PreSendRequestHeaders()
        {
            // ensure that if GZip/Deflate Encoding is applied that headers are set
            // also works when error occurs if filters are still active
            HttpResponse response = HttpContext.Current.Response;
            if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip")
                response.AppendHeader("Content-encoding", "gzip");
            else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate")
                response.AppendHeader("Content-encoding", "deflate");
        }

    This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content headers accordingly. This is actually a better solution since it is generic - it'll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or a short error display might change the output without the filter being cleared, and this code handles that too. Sweet, thanks Adam. It's unfortunate that ASP.NET doesn't natively clear out Response.Filters when an error occurs, just as it clears the Response and headers. I can't see where leaving a filter in place in an error situation would make any sense, but hey - this is what it is and it's easy enough to fix as long as you know where to look. Riiiight!

    IIS and GZip: I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding and let IIS perform it for you rather than doing it in your code, by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic compression is also supported, but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I'll post another entry next with some more info on IIS compression, since information on it seems to be a bit hard to come by.

    Related Content: Built-in GZip/Deflate Compression in IIS 7.x; HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS7
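    As a follow-up to the handler scenario the article mentions (GZipEncodePage() working "in any ASP.NET application type including HTTP Handlers"), here is a minimal sketch of that usage. The handler name and the JSON payload are made up for illustration; it assumes the WebUtils class shown above is referenced:

        using System.Web;

        // Sketch only: a plain IHttpHandler that compresses its output with the
        // article's helper. The filter is applied before any output is written.
        public class CompressedJsonHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                WebUtils.GZipEncodePage();   // must be called before writing output
                context.Response.ContentType = "application/json";
                context.Response.Write("{\"status\":\"ok\"}");
            }
        }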

    Read the article

  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
    There's a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of the hook points that allow customization of your application. Watching Brad Wilson's Advanced MP3 from MVC Conf inspired me to write about this class. What MSDN says: "Represents a class that is responsible for invoking the action methods of a controller." Well, if MSDN says it, I think I can instill a fair amount of confidence into what the class does. But just to get to the details, I also looked into the source code for MVC. It seems the base class Controller is where an IActionInvoker is initialized:

        1: protected virtual IActionInvoker CreateActionInvoker() {
        2:     return new ControllerActionInvoker();
        3: }

    In the ControllerActionInvoker (the out-of-the-box behavior), there are different 'versions' of the InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult:

        1: protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) {
        2:     object returnValue = actionDescriptor.Execute(controllerContext, parameters);
        3:     ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue);
        4:     return result;
        5: }

    I guess that's enough on the behind-the-scenes of this class. Let's see how we can use it to hook up extensions. Say I have a requirement that the user should be able to get different renderings of the same output - html, xml, json, csv and so on. The user will type the output format into the url and should get the result accordingly. For example:

        http://site.com/RenderAs/ - renders the default way (the razor view)
        http://site.com/RenderAs/xml
        http://site.com/RenderAs/csv
        ... and so on

    where RenderAs is my controller. There are many ways of doing this, and I'm using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish it). For this, my one and only route in Global.asax.cs is:

        1: routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}",
        2:     new {controller = "RenderAs", action = "Index", outputType = ""});

    Here the controller name is 'RenderAsController' and the action that always gets called is the Index action. The outputType parameter maps to the type of output requested by the user (xml, csv, ...). I intend to display a list of food items for this example:

        1: public class Item
        2: {
        3:     public int Id { get; set; }
        4:     public string Name { get; set; }
        5:     public Cuisine Cuisine { get; set; }
        6: }
        7:
        8: public class Cuisine
        9: {
        10:     public int CuisineId { get; set; }
        11:     public string Name { get; set; }
        12: }

    Coming to my 'RenderAsController' class, I generate an IList<Item> to represent my model:

        1: private static IList<Item> GetItems()
        2: {
        3:     Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" };
        4:     Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine };
        5:     IList<Item> items = new List<Item> { item };
        6:     item = new Item { Id = 2, Name = "Pasta", Cuisine = cuisine };
        7:     items.Add(item);
        8:     //...
        9:     return items;
        10: }

    My action method looks like this:

        1: public IList<Item> Index(string outputType)
        2: {
        3:     return GetItems();
        4: }

    There are two things that stand out in this action method. The first and most obvious is that the return type is not ActionResult (or one of its derivatives); instead I'm returning the type of the model itself (IList<Item> in this case). We'll convert this to some type of ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I'm not doing anything with the outputType value that is passed to this action method. This value will be in the RouteData dictionary, and we'll use it in our custom invoker class as well.

    It's time to hook up our invoker class. First, I'll override the Initialize() method of my RenderAsController class:

        1: protected override void Initialize(RequestContext requestContext)
        2: {
        3:     base.Initialize(requestContext);
        4:     string outputType = string.Empty;
        5:
        6:     // read the outputType from the RouteData dictionary
        7:     if (requestContext.RouteData.Values["outputType"] != null)
        8:     {
        9:         outputType = requestContext.RouteData.Values["outputType"].ToString();
        10:     }
        11:
        12:     // my custom invoker class
        13:     ActionInvoker = new ContentRendererActionInvoker(outputType);
        14: }

    Coming to the main part of the discussion - the ContentRendererActionInvoker class:

        1: public class ContentRendererActionInvoker : ControllerActionInvoker
        2: {
        3:     private readonly string _outputType;
        4:
        5:     public ContentRendererActionInvoker(string outputType)
        6:     {
        7:         _outputType = outputType.ToLower();
        8:     }
        9:     //...
        10: }

    So the outputType value that was read from the RouteData, which was passed in from the url, is set here in a private field. Moving to the crux of this article, I now override the CreateActionResult method:

        1: protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue)
        2: {
        3:     if (actionReturnValue == null)
        4:         return new EmptyResult();
        5:
        6:     ActionResult result = actionReturnValue as ActionResult;
        7:     if (result != null)
        8:         return result;
        9:
        10:     // This is where the magic happens
        11:     // Depending on the value in the _outputType field,
        12:     // return an appropriate ActionResult
        13:     switch (_outputType)
        14:     {
        15:         case "json":
        16:         {
        17:             JavaScriptSerializer serializer = new JavaScriptSerializer();
        18:             string json = serializer.Serialize(actionReturnValue);
        19:             return new ContentResult { Content = json, ContentType = "application/json" };
        20:         }
        21:         case "xml":
        22:         {
        23:             XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType());
        24:             using (StringWriter writer = new StringWriter())
        25:             {
        26:                 serializer.Serialize(writer, actionReturnValue);
        27:                 return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" };
        28:             }
        29:         }
        30:         case "csv":
        31:             controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv");
        32:             return new ContentResult
        33:             {
        34:                 Content = ToCsv(actionReturnValue as IList<Item>),
        35:                 ContentType = "application/ms-excel"
        36:             };
        37:         case "pdf":
        38:             string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf");
        39:             controllerContext.HttpContext.Response.AddHeader("content-disposition",
        40:                 "attachment; filename=items.pdf");
        41:             ToPdf(actionReturnValue as IList<Item>, filePath);
        42:             return new FileContentResult(StreamFile(filePath), "application/pdf");
        43:
        44:         default:
        45:             controllerContext.Controller.ViewData.Model = actionReturnValue;
        46:             return new ViewResult
        47:             {
        48:                 TempData = controllerContext.Controller.TempData,
        49:                 ViewData = controllerContext.Controller.ViewData
        50:             };
        51:     }
        52: }

    A big method, but the hook I was talking about is actually here: this is where the different kinds/formats of output get returned based on the output type requested in the url. When the _outputType is not set (string.Empty, as set in the Global.asax.cs file), the razor view gets rendered (lines 45-50 above). This is the default behavior in most MVC applications, where a view (webform/razor) gets rendered in the browser. As you see here, this gets returned as a ViewResult. For an outputType of json/xml/csv a ContentResult gets returned, while for pdf a FileContentResult is returned. [Screenshots of the different output formats are omitted from this excerpt.] This is how we can leverage this feature of ASP.NET MVC to develop a better application. I've used the iTextSharp library to convert to the pdf format; Mike gives quite a bit of detail regarding this library here. You can download the sample code here (you'll get an option to download once you open the link). Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.
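    The ToCsv() and ToPdf() helpers called in the switch above aren't shown in this excerpt. As a rough idea of what the CSV side could look like for the Item model, here is a minimal hypothetical sketch (the column choice is assumed, not taken from the original post; requires System.Text):

        // Hypothetical sketch of the ToCsv() helper used in the "csv" case:
        // flattens the Item list into comma-separated rows with a header line.
        private static string ToCsv(IList<Item> items)
        {
            StringBuilder sb = new StringBuilder();
            sb.AppendLine("Id,Name,Cuisine");
            foreach (Item item in items)
                sb.AppendLine(string.Format("{0},{1},{2}", item.Id, item.Name, item.Cuisine.Name));
            return sb.ToString();
        }

    A real implementation would also need to escape commas and quotes inside field values, which this sketch skips for brevity.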

    Read the article

  • Translating with Google Translate without API and C# Code

    - by Rick Strahl
    Some time back I created a database-driven ASP.NET resource provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish. Here's how it works in the resource administration form: when a resource is displayed, the user can click a Translate button; it shows the current resource text and then lets you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them - pressing Use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice and quick way to get a translation going.

    Ch… Ch… Changes: Originally, both implementations did some screen scraping of the interactive Web sites and retrieved translated text out of the result HTML. Screen scraping is always kind of an iffy proposition as content can be changed easily, but surprisingly that code worked for many years without fail. Recently, however, Google at least changed their input pages to use AJAX callbacks, and the page updates no longer worked the same way. End result: the Google translate code was broken. Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download, the API key is an issue if I want people to have the samples work out of the box - the only way I could even do this is by sharing my API key (not allowed). However, after a bit of spelunking and playing around with the public site, I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code I was able to get translation back up and working with minimal fuss, by parsing out the JSON these AJAX calls return. Warning: this is hacky code, but after a fair bit of testing I found it to work very well with all sorts of languages and accented and escaped text etc., as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen-scraping mechanism like I did and needed a non-API based replacement. Here's the code:

        /// <summary>
        /// Translates a string into another language using Google's translate API JSON calls.
        /// <seealso>Class TranslationServices</seealso>
        /// </summary>
        /// <param name="Text">Text to translate. Should be a single word or sentence.</param>
        /// <param name="FromCulture">
        /// Two letter culture (en of en-us, fr of fr-ca, de of de-ch)
        /// </param>
        /// <param name="ToCulture">
        /// Two letter culture (as for FromCulture)
        /// </param>
        public string TranslateGoogle(string text, string fromCulture, string toCulture)
        {
            fromCulture = fromCulture.ToLower();
            toCulture = toCulture.ToLower();

            // normalize the culture in case something like en-us was passed
            // retrieve only en since Google doesn't support sub-locales
            string[] tokens = fromCulture.Split('-');
            if (tokens.Length > 1)
                fromCulture = tokens[0];

            // normalize ToCulture
            tokens = toCulture.Split('-');
            if (tokens.Length > 1)
                toCulture = tokens[0];

            string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}",
                                       HttpUtility.UrlEncode(text), fromCulture, toCulture);

            // Retrieve Translation with HTTP GET call
            string html = null;
            try
            {
                WebClient web = new WebClient();

                // MUST add a known browser user agent or else response encoding doesn't return UTF-8 (WTF Google?)
                web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0");
                web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8");

                // Make sure we have response encoding to UTF-8
                web.Encoding = Encoding.UTF8;
                html = web.DownloadString(url);
            }
            catch (Exception ex)
            {
                this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed + ": " +
                                    ex.GetBaseException().Message;
                return null;
            }

            // Extract out trans":"...[Extracted]...","from the JSON string
            string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value;

            if (string.IsNullOrEmpty(result))
            {
                this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult;
                return null;
            }

            //return WebUtils.DecodeJsString(result);

            // Result is a JavaScript string so we need to deserialize it properly
            JavaScriptSerializer ser = new JavaScriptSerializer();
            return ser.Deserialize(result, typeof(string)) as string;
        }

    Using the code is straightforward enough - simply provide a string to translate and a pair of two-letter source and target languages:

        string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de");
        TestContext.WriteLine(result);

    How it works: The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate Web page, slightly changed to return a JSON result (&client=j) instead of the funky nested PHP-style JSON array that the default returns. The JSON result returned looks like this:

        {"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24}

    I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out the part of the full JSON response that contains the actual translated text. Since this is a JSON response, I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.). A couple of odd things to note in this code: first, a valid user agent string must be passed (or at least one starting with a common browser identification - I use Mozilla/5.0). Without this, Google doesn't encode the result with UTF-8 but instead uses an ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead, which is odd to say the least. The other oddity is that the code returns a full JSON response. Rather than using the full response and decoding it into a custom type that matches Google's result object, I just strip out the translated text. Yeah, I know that's hacky, but it avoids an extra type and firing up the JavaScript deserializer. My internal version uses a small DecodeJsString() method to decode the JavaScript without the overhead of a full JSON parser. It's obviously not rocket science, but as mentioned above, what's nice about it is that it works without a Google API key. I can't vouch for how many translates you can do before there are cut-offs, but in my limited testing - running a few stress tests on a Web server under load - I didn't run into any problems.

    Limitations: There are some restrictions with this. It only works on single words or single sentences - multiple sentences (delimited by '.') are cut off at the '.'. There is also a length limitation, which appears to kick in at around 220 characters or so. While that may not sound like much, for typical word or phrase translations this is plenty of length. Use with a grain of salt - Google seems to be trying to limit their exposure to usage of the Translate APIs, so this code might break in the future, but for now at least it works. FWIW, I also found that Google's translation is not as good as Babelfish, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both the Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file, which contains both the Google and Babelfish translation code pieces. Ironically, the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above. I hope this has been helpful to some of you - I've actually had many small uses for this code in a number of applications, and it's sweet to have a simple routine that performs these operations for me easily.

    Resources: Live Localization Sample - localization resource provider administration form that includes options to translate text using Google and Babelfish interactively; TranslationService.cs - the full source code in the West Wind Web Toolkit's Globalization library that contains the translation code.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, HTTP
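    Given the single-sentence and ~220 character limits described above, longer text would have to be fed through sentence by sentence. Here is a rough, hypothetical helper along those lines - the naive '.'-splitting and the method name are mine, not part of the original service (requires System.Text):

        // Hypothetical wrapper around TranslateGoogle() from the article: translates
        // one sentence at a time to stay under the single-sentence/length limits.
        // Naively assumes '.' only ever ends a sentence (abbreviations will break it).
        public string TranslateLongText(string text, string fromCulture, string toCulture)
        {
            StringBuilder sb = new StringBuilder();
            foreach (string sentence in text.Split(new[] { '.' }, StringSplitOptions.RemoveEmptyEntries))
            {
                string translated = TranslateGoogle(sentence.Trim() + ".", fromCulture, toCulture);
                if (translated == null)
                    return null;   // ErrorMessage was already set by TranslateGoogle()

                sb.Append(translated).Append(" ");
            }
            return sb.ToString().TrimEnd();
        }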

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest Scripts

    Some scripts act on information returned by the server, e.g. act on the first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way. As the load test cases should be carried out in a comparable and straightforward manner, simply cancelling a transaction when a collision occurs is clearly not an option. If you increase the number of virtual users, this approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user), but later steps would rarely be visited successfully, or at all, depending on the application logic.

    A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue is allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) of the transaction at any one time. Once a virtual user has passed the critical path, it dequeues itself from the head of the queue and continues with its actions. In theory, this allows virtual users to run all steps of the transaction that are not part of the critical path in parallel. In practice this is rarely the case, and it does not allow adding more than N users to a transaction without causing delays due to virtual users waiting in the queue - N being the total transaction time divided by the sum of the times of all critical steps in the transaction. This problem can be circumvented by allowing multiple queues to act on individual segments of the list of actions, e.g. a per-country filter, an "ends with 0..9" filter, etc. That would require additional handling of these extra queues (or of multiple slots for the virtual users at the head of a queue) in order to maintain mutually exclusive access to the first element in the list returned by the server at any one time during the load test. Such improved handling of multiple queues and/or multiple slots is beyond the scope of this paper.

    Shared Data Services Pre-Requisites

    Start WebLogic Server to host Shared Data Services: You will have to make sure that your WebLogic server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If however you installed the default package including OLT and OTM, you may follow the instructions below to start and verify the WebLogic installation. To start the WebLogic Server deployed underneath Oracle Load Testing and/or Oracle Test Manager, go to your Start menu, Oracle Application Testing Suite, and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu. To verify the service has been started, run the Microsoft Management Console for Services by selecting Run from the Start menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service; once its status has changed from Starting to Started you can proceed to verify the login. Please note that this may take several minutes - I would say up to 10 minutes, depending on the strength of your CPU horsepower.

    Verify WebLogic Server user credentials: You will have to make sure that your WebLogic Server is installed and started. Next open the Oracle WebLogic Server Administration Console on http://localhost:8088/console. It may take a while until the application is deployed and started; a placeholder may be displayed until the Administration Console has been deployed on the fly. Afterwards you can log in using the username oats and the password that you selected at install time for your Application Testing Suite administrative purposes. This will bring up the Home page of your WebLogic Server. By logging in you have actually already verified these credentials. However, if you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab. Here you could add users to your WebLogic Server to be used in the later steps. Details on the groups required for such a custom user to work exceed this quick overview and should be looked up in the WebLogic Server Administration Guide.

    Shared Data Services pre-requisites for load testing: OpenScript Preferences have to be set to enable Encryption and provide a default Shared Data Service Connection for Playback. These are pre-requisites you want in place for load testing with Shared Data Services. Please note that the usage of the Connection Parameters (individual directive in the script) for Shared Data Services did not play back reliably in the current version 9.20.0370 of Oracle Load Testing (OLT), and encryption of credentials still seemed to be mandatory as well.

    General Encryption settings: Select OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.

    Enable global shared data access credentials: Select OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for your WebLogic Server hosting Shared Data Services. Please note that you may want to replace localhost in the Address with the host's real name in case you plan to run load tests with Load Test Agents running on remote systems.

    Queued Processing of Transactions

    Enable Shared Data Services Module in Script Properties: The Shared Data Services Module has to be enabled for each script that wants to employ the Shared Data Service queue functionality in OpenScript. It can be enabled under the Script menu by selecting Script Properties. On the Script Properties dialog, select the Modules section and check Shared Data to enable the Shared Data Service Module for your script. Checking the Shared Data Services option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript:

        @ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;

    Record your script: Record your script as usual, then add the following pieces for queue handling: in the Initialize code block, before the first step and after the last step of your critical path, and in the Finalize code block. The Java code to be added at the individual locations is explained in the following sections in full detail.

    Create a Shared Data Queue in Initialize: To create a Shared Data Queue, go to the Java view of your script and enter the following statements in the initialize() code block:

        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);

    This will create an instance of the Shared Data Queue object named queueA, which is maintained for up to 120 minutes. If you want to use the code for multiple scripts, make sure to use a different queue name for each one here and in the subsequent steps. You may even consider a dynamic queue name based on the filters of the result list being concurrently accessed.

    Prepare a unique id for each Iteration: In order to keep track of individual virtual users in our queue, we need to create a unique identifier from the virtual user id and the username used, right after retrieving the next record from our databank file:

        getDatabank("Usernames").getNextDatabankRecord();
        getVariables().set("usernameValue1",
            "VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
        String usernameValue = getVariables().get("usernameValue1");
        info("Now running virtual user " + usernameValue);

    As you can see from the above code block, we set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}, which is a concatenation of the virtual user id and the iteration number for general uniqueness, as well as the username from our databank, the timestamp and a random number for further uniqueness and to ease spotting of errors. Not all of these fields are actually required to make it really unique, but adding the queue name may also be considered to help troubleshoot multiple queues. The value is then retrieved with the getVariables().get() method call and assigned to the usernameValue String used throughout the script. Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered, to remove the concurrency of multiple virtual users running with the same userid and therefore accessing the same "My Inbox" in step 6. This will effectively give each virtual user a userid from the databank file; make sure you have enough userids to remove this second hurdle.

    Enqueue and attend Queue before Critical Path: To maintain the right order of virtual users being allowed into the critical path of the transaction, the following pseudo step has to be added in front of the first critical step. In the case of this example, that is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us:

        beginStep("[0] Waiting in the Queue", 0);
        {
            info("Enqueued virtual user " + usernameValue + " at the end of queueA");
            sharedData.offerLast("queueA", usernameValue);
            info("Wait until the user is the first in queueA");
            String queueValue1 = null;
            do {
                // we wait for at least 0.7 seconds before we check the head of the
                // queue. This is the time it takes one user to move through the
                // critical path, i.e. pass steps [5] Enter country and [6] Assign
                // to me
                Thread.sleep(700);
                queueValue1 = (String) sharedData.peekFirst("queueA");
                info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );
                info("The current user is '" + usernameValue + "' " + usernameValue.getClass() + " length " + usernameValue.length() + ": indexOf " + usernameValue.indexOf(queueValue1) + " equals " + usernameValue.equals(queueValue1) );
            } while ( queueValue1.indexOf(usernameValue) < 0 );
            info("Now the user is the first in queueA");
        }
        endStep();

    This enqueues the username at the tail of our queue. It then waits for at least 700 milliseconds - the time it takes for one user to exit the critical path - and compares the head of our queue with its own username. This last step is repeated while the two are not equal (indexOf less than zero). Once they are equal, indexOf yields a value of zero or larger, and we perform the critical steps.

    Dequeue after Critical Path: After the virtual user has left the critical path and completed its last step, the following code block needs to dequeue the virtual user. In the case of our example this is right after the action has actually been assigned to the virtual user. This allows the next virtual user to retrieve the list of actions still available and in turn make its selection/assignment:

        info("Get and remove the current user from the head of queueA");
        String pollValue1 = (String) sharedData.pollFirst("queueA");

    The current user is removed from the head of the queue. The next one will now be able to match its username against the head of the queue.

    Clear and Destroy Queue for Finish: When the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script, in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason:

        info("Clear queueA");
        sharedData.clearQueue("queueA");
        info("Destroy queueA");
        sharedData.destroyQueue("queueA");

    The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing, they will be caught in a loop. I found it better to maintain a separate Reset Queue script which contains only the following code in the initialize() block. I use this script to make sure the queue is cleared between multiple load test runs. It could even be added as the first script in a larger scenario, which would execute it only once at the very start of the load test and ensure the queues do not contain any stale entries:

        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);
        info("Clear queueA");
        sharedData.clearQueue("queueA");

    This will create a Shared Data Queue instance of queueA and clear all entries from this queue.

    Monitoring Queue: While creating the scripts it was useful to monitor the contents, i.e. the current first user in the queue. The following code block will make sure the Shared Data Queue is accessible in the initialize() block:

        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);

    In the run() block, the following code will continuously monitor the first element of the queue and write an informational message with the current username value to the Result window:

        info("Monitor the first users in queueA");
        String queueValue1 = null;
        do {
            queueValue1 = (String) sharedData.peekFirst("queueA");
            if (queueValue1 != null)
                info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );
        } while ( true );

    This script can be run from OpenScript in parallel to a load test performed by Oracle Load Testing. However, it is not recommended to run this in a production load test, as the performance impact is unknown. Accessing the queue's head with the peekFirst() method has been reported with about 2 seconds response time by both OpenScript and OLT. It is advised to log a Service Request to see if this could be lowered in future releases of Application Testing Suite, as pollFirst() - and even offerLast(), which writes to the tail of the queue - usually returned after 0.1 seconds on average.

    Debugging Queue: While debugging the scripts, the following was useful to remove single entries from the queue's head, i.e. the current first user in the queue. The following code block will make sure the Shared Data Queue is accessible in the initialize() block:

        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);

    In the run() block, the following code will remove the first element of the queue and write an informational message with the current username value to the Result window:

        info("Get and remove the current user from the head of queueA");
        String pollValue1 = (String) sharedData.pollFirst("queueA");
        info("The first user in queueA was currently: '" + pollValue1 + "' " + pollValue1.getClass() + " length " + pollValue1.length() );

    References

    Oracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05], Chapter 17 Using the Shared Data Module
    http://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zip
    Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04], Administration Console Online Help - Manage users and groups
    http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm
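    For readers outside OpenScript, the gate pattern this article builds (enqueue at the tail, spin on the head, run the critical path, dequeue) is easy to prototype in any language. Here is a minimal C# sketch of the same logic - an illustration only, using an in-process queue rather than the Shared Data Services API, with names and timings of my own choosing:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        class QueueGateDemo
        {
            // One shared queue plays the role of the Shared Data Service queueA.
            static readonly ConcurrentQueue<string> queueA = new ConcurrentQueue<string>();

            static void RunVirtualUser(string userId)
            {
                queueA.Enqueue(userId);                    // offerLast equivalent
                string head;
                // Spin until this user reaches the head of the queue.
                while (!queueA.TryPeek(out head) || head != userId)
                    Thread.Sleep(700);                     // roughly one pass through the critical path

                // --- critical path: fetch the action list, assign the first item ---
                Console.WriteLine("{0} runs the critical steps", userId);

                queueA.TryDequeue(out head);               // pollFirst equivalent

                // --- non-critical steps may now run in parallel with other users ---
            }

            static void Main()
            {
                var users = new[] { "VU_1", "VU_2", "VU_3" };
                Task.WaitAll(Array.ConvertAll(users, u => Task.Run(() => RunVirtualUser(u))));
            }
        }

    The design point is the same as in the article: only the head of the queue may enter the critical section, so collisions on the "first item in the list" are impossible, at the cost of serializing that section.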

    Read the article

  • Gearman too many processes issue

    - by Roman Newaza
    I use Net_Gearman from PECL, Gearmand 1.1.11 and Gearman Manager. Every time I add a background job, I can see a new worker listed with no Function or Id in Gearman-Monitor. If I add many messages in a bash loop, after some time it becomes very slow:

        for i in $(seq 0 9999); do php Client.php && echo $i; done

    Yesterday the situation was even worse - I had many error messages in the Gearmand log regarding "Too many open files"; once I added --file-descriptors=49152 as an option and switched from 1.0.6 to 1.1.11, these errors were gone. Here is the lsof -p $(cat /var/run/gearman/gearmand.pid) output:

        COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
        gearmand 2020 gearman cwd DIR 8,2 4096 2 /
        gearmand 2020 gearman rtd DIR 8,2 4096 2 /
        gearmand 2020 gearman txt REG 8,2 3852472 3672962 /opt/sbin/gearmand
        gearmand 2020 gearman mem REG 8,2 52120 9961752 /lib/x86_64-linux-gnu/libnss_files-2.15.so
        gearmand 2020 gearman mem REG 8,2 47680 9961756 /lib/x86_64-linux-gnu/libnss_nis-2.15.so
        gearmand 2020 gearman mem REG 8,2 97248 9961768 /lib/x86_64-linux-gnu/libnsl-2.15.so
        gearmand 2020 gearman mem REG 8,2 35680 9961750 /lib/x86_64-linux-gnu/libnss_compat-2.15.so
        gearmand 2020 gearman mem REG 8,2 92720 9964871 /lib/x86_64-linux-gnu/libz.so.1.2.3.4
        gearmand 2020 gearman mem REG 8,2 109288 11014600 /usr/lib/x86_64-linux-gnu/libsasl2.so.2.0.25
        gearmand 2020 gearman mem REG 8,2 1030512 9961759 /lib/x86_64-linux-gnu/libm-2.15.so
        gearmand 2020 gearman mem REG 8,2 1930616 9964982 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
        gearmand 2020 gearman mem REG 8,2 382896 9964977 /lib/x86_64-linux-gnu/libssl.so.1.0.0
        gearmand 2020 gearman mem REG 8,2 1815224 9961748 /lib/x86_64-linux-gnu/libc-2.15.so
        gearmand 2020 gearman mem REG 8,2 88384 9964865 /lib/x86_64-linux-gnu/libgcc_s.so.1
        gearmand 2020 gearman mem REG 8,2 962656 11014043 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.16
        gearmand 2020 gearman mem REG 8,2 199600 11016157 /usr/lib/x86_64-linux-gnu/libmemcached.so.11.0.0
        gearmand 2020 gearman mem REG 8,2 31752 9961755 /lib/x86_64-linux-gnu/librt-2.15.so
        gearmand 2020 gearman mem REG 8,2 14768 9961763 /lib/x86_64-linux-gnu/libdl-2.15.so
        gearmand 2020 gearman mem REG 8,2 414280 9183971 /usr/lib/libboost_program_options.so.1.46.1
        gearmand 2020 gearman mem REG 8,2 283832 9183656 /usr/lib/libevent-2.0.so.5.1.4
        gearmand 2020 gearman mem REG 8,2 664504 11014432 /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
        gearmand 2020 gearman mem REG 8,2 135366 9961757 /lib/x86_64-linux-gnu/libpthread-2.15.so
        gearmand 2020 gearman mem REG 8,2 3534240 9175810 /usr/lib/libmysqlclient.so.18.1.0
        gearmand 2020 gearman mem REG 8,2 149280 9961760 /lib/x86_64-linux-gnu/ld-2.15.so
        gearmand 2020 gearman 0u CHR 1,3 0t0 1029 /dev/null
        gearmand 2020 gearman 1u CHR 1,3 0t0 1029 /dev/null
        gearmand 2020 gearman 2u CHR 1,3 0t0 1029 /dev/null
        gearmand 2020 gearman 3w REG 8,2 9381897 3409366 /var/log/gearman-job-server/gearman.log
        gearmand 2020 gearman 4r FIFO 0,8 0t0 38869143 pipe
        gearmand 2020 gearman 5w FIFO 0,8 0t0 38869143 pipe
        gearmand 2020 gearman 6u 0000 0,9 0 6826 anon_inode
        gearmand 2020 gearman 7u unix 0xffff880230fdf500 0t0 38869144 socket
        gearmand 2020 gearman 8u unix 0xffff880230fdde40 0t0 38869145 socket
        gearmand 2020 gearman 9u IPv4 38869146 0t0 TCP localhost:4730 (LISTEN)
        gearmand 2020 gearman 10r FIFO 0,8 0t0 38869147 pipe
        gearmand 2020 gearman 11w FIFO 0,8 0t0 38869147 pipe
        gearmand 2020 gearman 12u 0000 0,9 0 6826 anon_inode
        gearmand 2020 gearman 13u unix 0xffff880230fde4c0 0t0 38869148 socket
        gearmand 2020 gearman 14u unix 0xffff880230fdeb40 0t0 38869149 socket
        gearmand 2020 gearman 15r FIFO 0,8 0t0 38869150 pipe
        gearmand 2020 gearman 16w FIFO 0,8 0t0 38869150 pipe
        gearmand 2020 gearman 17u 0000 0,9 0 6826 anon_inode
        gearmand 2020 gearman 18u 0000 0,9 0 6826 anon_inode
        gearmand 2020 gearman 19u unix 0xffff880230fdb400 0t0 38869151 socket
        gearmand 2020 gearman 20u unix 0xffff880230fdaa40 0t0 38869152 socket
        gearmand 2020 gearman 21r FIFO 0,8 0t0 38869153 pipe
        gearmand 2020 gearman 22w FIFO 0,8 0t0 38869153 pipe
        gearmand 2020 gearman 23u unix 0xffff880203cfce00 0t0 38868290 socket
        gearmand 2020 gearman 24u unix 0xffff880203cfdb00 0t0 38868291 socket
        gearmand 2020 gearman 25r FIFO 0,8 0t0 38868292 pipe
        gearmand 2020 gearman 26w FIFO 0,8 0t0 38868292 pipe
        gearmand 2020 gearman 27u 0000 0,9 0 6826 anon_inode
        gearmand 2020 gearman 28u unix 0xffff880203cf9040 0t0 38868293 socket
        gearmand 2020 gearman 29u unix 0xffff880203cfaa40 0t0 38868294 socket
        gearmand 2020 gearman 30r FIFO 0,8 0t0 38868295 pipe
        gearmand 2020 gearman 31w FIFO 0,8 0t0 38868295 pipe
        gearmand 2020 gearman 32u IPv4 38868324 0t0 TCP localhost:4730->localhost:57954 (ESTABLISHED)
        gearmand 2020 gearman 33u IPv4 38868325 0t0 TCP localhost:4730->localhost:57955 (ESTABLISHED)
        gearmand 2020 gearman 34u IPv4 38901247 0t0 TCP localhost:4730->localhost:38594 (ESTABLISHED)
        gearmand 2020 gearman 35u IPv4 38868327 0t0 TCP localhost:4730->localhost:57957 (ESTABLISHED)
        gearmand 2020 gearman 36u IPv4 38867483 0t0 TCP localhost:4730->localhost:57959 (ESTABLISHED)
        gearmand 2020 gearman 37u IPv4 38867484 0t0 TCP localhost:4730->localhost:57958 (ESTABLISHED)
        gearmand 2020 gearman 38u IPv4 38901248 0t0 TCP localhost:4730->localhost:38595 (CLOSE_WAIT)
        gearmand 2020 gearman 39u IPv4 38901249 0t0 TCP localhost:4730->localhost:38597 (ESTABLISHED)
        gearmand 2020 gearman 40u IPv4 38869201 0t0 TCP localhost:4730->localhost:57979 (ESTABLISHED)
        gearmand 2020 gearman 41u IPv4 38900437 0t0 TCP localhost:4730->localhost:38599 (ESTABLISHED)
        gearmand 2020 gearman 42u IPv4 38900438 0t0 TCP localhost:4730->localhost:38602 (ESTABLISHED)
        gearmand 2020 gearman 43u IPv4 38868375 0t0 TCP localhost:4730->localhost:57987 (ESTABLISHED)
        gearmand 2020 gearman 44u IPv4 38900468 0t0 TCP localhost:4730->localhost:38606 (CLOSE_WAIT)
        gearmand 2020 gearman 45u IPv4 38868381 0t0 TCP localhost:4730->localhost:57999 (ESTABLISHED)
        gearmand 2020 gearman 46u IPv4 38868388 0t0 TCP localhost:4730->localhost:58007 (ESTABLISHED)
        gearmand 2020 gearman 47u IPv4 38868393 0t0 TCP localhost:4730->localhost:58011 (ESTABLISHED)
        gearmand 2020 gearman 48u IPv4 38903950 0t0 TCP localhost:4730->localhost:38609 (ESTABLISHED)
        gearmand 2020 gearman 49u IPv4 38870276 0t0 TCP localhost:4730->localhost:58019 (ESTABLISHED)
        gearmand 2020 gearman 50u IPv4 38903955 0t0 TCP localhost:4730->localhost:38613 (ESTABLISHED)
        gearmand 2020 gearman 51u IPv4 38900477 0t0 TCP localhost:4730->localhost:38617 (CLOSE_WAIT)
        gearmand 2020 gearman 52u IPv4 38867630 0t0 TCP localhost:4730->localhost:58031 (ESTABLISHED)
        gearmand 2020 gearman 53u IPv4 38867633 0t0 TCP localhost:4730->localhost:58035 (ESTABLISHED)
        gearmand 2020 gearman 54u IPv4 38867636 0t0 TCP localhost:4730->localhost:58039 (ESTABLISHED)
        gearmand 2020 gearman 55u IPv4 38900536 0t0 TCP localhost:4730->localhost:38619 (ESTABLISHED)
        gearmand 2020 gearman 56u IPv4 38868419 0t0 TCP localhost:4730->localhost:58047 (ESTABLISHED)
        gearmand 2020 gearman 57u IPv4 38869263 0t0 TCP localhost:4730->localhost:58051 (ESTABLISHED)
        gearmand 2020 gearman 58u IPv4 38900537 0t0 TCP localhost:4730->localhost:38621 (ESTABLISHED)
        gearmand 2020 gearman 59u IPv4 38869271 0t0 TCP localhost:4730->localhost:58059 (ESTABLISHED)
        gearmand 2020 gearman 60u IPv4 38900538 0t0 TCP localhost:4730->localhost:38623 (ESTABLISHED)
        gearmand 2020 gearman 61u IPv4 38870319 0t0 TCP localhost:4730->localhost:58067 (ESTABLISHED)
        gearmand 2020 gearman 62u IPv4 38900540 0t0 TCP localhost:4730->localhost:38628 (ESTABLISHED)
        gearmand 2020 gearman 63u IPv4 38869289 0t0 TCP localhost:4730->localhost:58075 (ESTABLISHED)
        ...
        gearmand 2020 gearman 2229u IPv4 38903885 0t0 TCP localhost:4730->localhost:38572 (ESTABLISHED)
        gearmand 2020 gearman 2230u IPv4 38901211 0t0 TCP localhost:4730->localhost:38576 (ESTABLISHED)
        gearmand 2020 gearman 2234u IPv4 38901237 0t0 TCP localhost:4730->localhost:38588 (ESTABLISHED)

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
    Out of the box ASP.NET WebAPI does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow Browser based JavaScript client applications to bypass cross-site scripting limitations and serve data from the non-current Web server. AJAX in Web Applications uses the XmlHttp object which by default doesn't allow access to remote domains. There are number of ways around this limitation <script> tag loading and JSONP is one of the easiest and semi-official ways that you can do this. JSONP works by combining JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQUery it's extremely easy to access JSONP content. Imagine that you have a URL like this: http://RemoteDomain/aspnetWebApi/albums which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if  the server can serve up JSONP this data can be accessed cross domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP:function getAlbums() { $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?",null, function (albums) { alert(albums.length); }); } The resulting callback the same as if the call was to a local server when the data is returned. jQuery deserializes the data and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP. What does JSONP look like? JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this:Query17103401925975181569_1333408916499( [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}] ) The way JSONP works is that the client (jQuery in this case) sends of the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens. JSONP works by executing the code that gets returned from the JSONP call. JSONP and ASP.NET Web API As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug-in a custom formatter that provides this functionality. The following code is based on Christian Weyers example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds. 
Here's the code:  using System; using System.IO; using System.Net; using System.Net.Http.Formatting; using System.Net.Http.Headers; using System.Threading.Tasks; using System.Web; using System.Net.Http; namespace Westwind.Web.WebApi { /// <summary> /// Handles JsonP requests when requests are fired with /// text/javascript or application/json and contain /// a callback= (configurable) query string parameter /// /// Based on Christian Weyers implementation /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs /// </summary> public class JsonpFormatter : JsonMediaTypeFormatter { public JsonpFormatter() { SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json")); SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript")); //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json")); JsonpParameterName = "callback"; } /// <summary> /// Name of the query string parameter to look for /// the jsonp function name /// </summary> public string JsonpParameterName {get; set; } /// <summary> /// Captured name of the Jsonp function that the JSON call /// is wrapped in. Set in GetPerRequestFormatter Instance /// </summary> private string JsonpCallbackFunction; public override bool CanWriteType(Type type) { return true; } /// <summary> /// Override this method to capture the Request object /// and look for the query string parameter and /// create a new instance of this formatter. /// /// This is the only place in a formatter where the /// Request object is available. /// </summary> /// <param name="type"></param> /// <param name="request"></param> /// <param name="mediaType"></param> /// <returns></returns> public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type, HttpRequestMessage request, MediaTypeHeaderValue mediaType) { var formatter = new JsonpFormatter() { JsonpCallbackFunction = GetJsonCallbackFunction(request) }; return formatter; } /// <summary> /// Override to wrap existing JSON result with the /// JSONP function call /// </summary> /// <param name="type"></param> /// <param name="value"></param> /// <param name="stream"></param> /// <param name="contentHeaders"></param> /// <param name="transportContext"></param> /// <returns></returns> public override Task WriteToStreamAsync(Type type, object value, Stream stream, HttpContentHeaders contentHeaders, TransportContext transportContext) { if (!string.IsNullOrEmpty(JsonpCallbackFunction)) { return Task.Factory.StartNew(() => { var writer = new StreamWriter(stream); writer.Write( JsonpCallbackFunction + "("); writer.Flush(); base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait(); writer.Write(")"); writer.Flush(); }); } else { return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext); } } /// <summary> /// Retrieves the Jsonp Callback function /// from the query string /// </summary> /// <returns></returns> private string GetJsonCallbackFunction(HttpRequestMessage request) { if (request.Method != HttpMethod.Get) return null; var query = HttpUtility.ParseQueryString(request.RequestUri.Query); var queryVal = query[this.JsonpParameterName]; if (string.IsNullOrEmpty(queryVal)) return null; return queryVal; } } } Note again that this code will not work with the Beta bits of Web API - it works only with post beta bits from CodePlex and hopefully this will continue to work until RTM :-) This code is a bit different from Christians original code as the API has changed. 
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects, as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use the request to pick up the values you need and store them on the formatter. Note that I also have to create a new instance of the formatter, since I'm storing request-specific state on the instance (whether the callback= query string is present), so I return a new instance of this formatter. Other than that the code should be straightforward: it writes out the function pre- and post-amble and defers to the base class to produce the JSON that gets wrapped in the function call. The code uses the async APIs to write this data out (seeing those all over the place will take some getting used to for me).

Hooking up the JsonpFormatter

Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.

protected void Application_Start(object sender, EventArgs e)
{
    // Verb Routing
    RouteTable.Routes.MapHttpRoute(
        name: "AlbumsVerbs",
        routeTemplate: "albums/{title}",
        defaults: new
        {
            title = RouteParameter.Optional,
            controller = "AlbumApi"
        });

    GlobalConfiguration
        .Configuration
        .Formatters
        .Insert(0, new Westwind.Web.WebApi.JsonpFormatter());
}

That's all it takes. Note that I added the formatter at the top of the list of formatters rather than at the end - this ordering is required. The JSONP formatter needs to fire before any other JSON formatter, since it relies on the JSON formatter to encode the actual JSON data. If you reverse the order, the JSONP output never shows up. So, in general, when adding new formatters be aware of the order in which they are added.

Resources

JsonpFormatter Code on GitHub

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • Introduction to ENUM (E.164 Number Mapping)

    - by raul.goycoolea
    E.164 Number Mapping (ENUM or Enum) was designed to answer the question of how Internet services can be reached through a telephone number - in other words, how telephones, which only have 12 keys, can be used to access Internet services. At its core, ENUM is therefore about the convergence of the PSTN and IP networks: ENUM makes it possible to map a telephone number to an Internet identifier. In short, Enum is a set of protocols for converting E.164 numbers into URIs, and vice versa, so that the E.164 numbering system can be mapped onto URI addresses on the Internet. This mapping is necessary because a telephone number has no meaning in the IP world, just as an IP address has no meaning in the telephone networks. With this technique, communications dialed to an E.164 number can terminate at the correct identifier (an E.164 number when terminating on the PSTN, or a URI when terminating on IP networks). The technical solution of looking up the destination identifier in a database has very interesting consequences, such as letting the call terminate wherever the called subscriber wishes. This is one of the features ENUM offers: the concrete destination - the terminating device or devices - is decided not by whoever initiates the call or sends the message, but by the person being called or receiving the message, who has recorded his or her preferences in a database. In other words, the recipient of the call decides how he or she wants to be contacted, whether what is being delivered is an email, an SMS, a fax, or a voice call. When someone wants to call you, all the caller has to do is select your name (the called party's) in the terminal's address book or dial your ENUM number. An application then fetches from a database the contact and availability details you chose, and the message is delivered to you exactly as you specified in that database. This is something new: it lets you, as the called party, define your termination preferences for any type of content. For example, you may want all emails to be delivered as SMS, or voice messages to be forwarded as emails; communications no longer depend on where you are or on what kind of terminal you use (telephone, PDA, Internet). In addition, ENUM lets you manage the portability of your fixed and mobile numbers. ENUM uses an indirect lookup against a database holding NAPTR records ("Naming Authority Pointer Resource Records", as defined in RFC 2915), with the Enum telephone number as the lookup key, to obtain the URIs that correspond to each telephone number. The database that stores these records is a DNS database. While one of its various uses is to facilitate calls by VoIP users between traditional PSTN networks and IP networks, bear in mind that ENUM is not a VoIP function but a conversion mechanism between numbers and identifiers; it should not be confused with the ordinary routing of VoIP calls via the SIP and H.323 protocols. ENUM can be very useful for organizations that want a standardized way for applications to access each user's communication data.
Fundamentals

For the convergence of the Public Switched Telephone Network (PSTN) and Internet Telephony or Voice over IP (VoIP), and the development of new multimedia services, to face fewer obstacles, it is essential that users can place calls the way they are used to: by dialing numbers. That requires a universal system for mapping numbers to IP addresses (and vice versa), and interconnection between the different networks. There are several approaches that let a single telephone number establish communication with multiple services. One of them - the subject of this article - is the Electronic Number Mapping System (ENUM), standardized by the Internet Engineering Task Force (IETF), which uses E.164 numbering and the telephone protocols and infrastructure to access different services indirectly. A service is thus reached through a universal numeric identifier: a traditional telephone number. ENUM bridges the addresses of the IP world and those of the telephone world, and vice versa, seamlessly.

Before going any deeper, a quick sketch of how the number-to-URI mapping is organized. Imagine a call initiated from the traditional telephone service to an Enum number. In Public ENUM, the Enum subscriber or user the call is destined for will have chosen to include in the Enum database one or more URIs or E.164 numbers, forming a list of his or her preferences for terminating the call. The system, as explained further below, chooses the appropriate number or URI for that termination. The result of the Enum database query is therefore always a one-to-one relation between the dialed Enum number and the termination identifier, in accordance with the called party's wishes.

Varieties of ENUM

One possible source of confusion when discussing ENUM is the variety of solutions and systems that carry this label. A reference to ENUM usually means one of the following:

Public ENUM: The original vision of ENUM as a public, directory-like database in which the subscriber "opts in" to be listed. It is managed under the e164.arpa domain, delegating the management of the database and the numbering to each country. It is also known as User ENUM.

Carrier ENUM, also called Infrastructure ENUM or Operator ENUM: Groups of operators providing electronic communications services agree to share subscriber information through ENUM under private agreements. In this case it is the operators who control the subscriber information, rather than the subscribers themselves opting in. Carrier ENUM is being standardized by the IETF for VoIP interconnection (through peering agreements). As explained in the corresponding section, it can also be used for number portability.

Private ENUM: A telephony or VoIP operator, an ISP, or a large user can apply ENUM techniques within its own networks and those of its customers without public DNS, using private or internal DNS servers.
It is easy to imagine how this technique could be used so that multinational companies, banks, or travel agencies have very coherent and effective numbering plans.

How ENUM works

To learn how Enum works, see the page on Public ENUM, since that variety of Enum is the typical one - the one that gave rise to all the IETF procedures and standards.

More details on:

Public ENUM. This page explains in some detail how Enum works.
Carrier ENUM or Operator ENUM
Private ENUM

Technical standards:

RFC 2915: NAPTR RR. The Naming Authority Pointer (NAPTR) DNS Resource Record
RFC 3761: ENUM Protocol. The E.164 to Uniform Resource Identifiers (URI) Dynamic Delegation Discovery System (DDDS) Application (ENUM). (obsoletes RFC 2916)
RFC 3762: Usage of H323 addresses in ENUM Protocol
RFC 3764: Usage of SIP addresses in ENUM Protocol
RFC 3824: Using E.164 numbers with SIP
RFC 4769: IANA Registration for an Enumservice Containing Public Switched Telephone Network (PSTN) Signaling Information
RFC 3026: Berlin Liaison Statement
RFC 3953: Telephone Number Mapping (ENUM) Service Registration for Presence Services
RFC 2870: Root Name Server Operational Requirements
RFC 3482: Number Portability in the Global Switched Telephone Network (GSTN): An Overview
RFC 2168: Resolution of Uniform Resource Identifiers using the Domain Name System

Organizations related to ENUM:

RIPE - Administrator of ENUM Tier 0, e164.arpa
ITU-T TSB - International Telecommunication Union
ETSI - European Telecommunications Standards Institute
VisionNG - Administrator of the ENUM range 878-10
IETF ENUM Chapter
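As a worked illustration (added here; not part of the original article): the domain queried in DNS for a given E.164 number is formed mechanically - strip everything but the digits, reverse them, separate them with dots, and append e164.arpa (per RFC 3761). A minimal JavaScript sketch, with a hypothetical helper name:

// Build the ENUM lookup domain for an E.164 number (RFC 3761 rule:
// digits reversed, dot-separated, suffixed with e164.arpa).
// e164ToEnumDomain is a hypothetical helper name for illustration.
function e164ToEnumDomain(e164Number) {
    var digits = e164Number.replace(/[^0-9]/g, ""); // drop '+', spaces, dashes
    return digits.split("").reverse().join(".") + ".e164.arpa";
}

e164ToEnumDomain("+34 98 765 4321"); // -> "1.2.3.4.5.6.7.8.9.4.3.e164.arpa"

A NAPTR query for the resulting domain returns the records from which the called party's preferred URIs (sip:, mailto:, tel:, ...) are derived.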

    Read the article

  • Using jQuery Live instead of jQuery Hover function

    - by hajan
    Let’s say we have a case where we need to create mouseover / mouseout functionality for a list which will be dynamically filled with data on the client side. We can use the jQuery hover function, which handles the mouseover and mouseout events with two functions. See the following example:

<!DOCTYPE html>
<html lang="en">
<head id="Head1" runat="server">
    <title>jQuery Mouseover / Mouseout Demo</title>
    <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.4.4.js"></script>
    <style type="text/css">
        .hover { color:Red; cursor:pointer;}
    </style>
    <script type="text/javascript">
        $(function () {
            $("li").hover(
              function () {
                  $(this).addClass("hover");
              },
              function () {
                  $(this).removeClass("hover");
              });
        });
    </script>
</head>
<body>
    <form id="form2" runat="server">
    <ul>
        <li>Data 1</li>
        <li>Data 2</li>
        <li>Data 3</li>
        <li>Data 4</li>
        <li>Data 5</li>
        <li>Data 6</li>
    </ul>
    </form>
</body>
</html>

Now, if you have a situation where you want to add new data dynamically... Let's say you have a button to add a new item to the list. Add the following code right below the </ul> tag:

<input type="text" id="txtItem" />
<input type="button" id="addNewItem" value="Add New Item" />

And add the following button click functionality:

//button add new item functionality
$("#addNewItem").click(function (event) {
    event.preventDefault();
    $("<li>" + $("#txtItem").val() + "</li>").appendTo("ul");
});

The mouseover effect won't work for the newly added items. Therefore, we need to use the live or delegate function. These both do the same job; the main difference is that in some cases delegate is considered a bit faster, and it can be used in chaining. In our case, either would work. I will use the live function (a delegate version is sketched at the end of this post).

$("li").live("mouseover mouseout",
  function (event) {
      if (event.type == "mouseover") $(this).addClass("hover");
      else $(this).removeClass("hover");
  });

The complete code is:

<!DOCTYPE html>
<html lang="en">
<head id="Head1" runat="server">
    <title>jQuery Mouseover / Mouseout Demo</title>
    <script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.4.4.js"></script>
    <style type="text/css">
        .hover { color:Red; cursor:pointer;}
    </style>
    <script type="text/javascript">
        $(function () {
            $("li").live("mouseover mouseout",
              function (event) {
                  if (event.type == "mouseover") $(this).addClass("hover");
                  else $(this).removeClass("hover");
              });

            //button add new item functionality
            $("#addNewItem").click(function (event) {
                event.preventDefault();
                $("<li>" + $("#txtItem").val() + "</li>").appendTo("ul");
            });
        });
    </script>
</head>
<body>
    <form id="form2" runat="server">
    <ul>
        <li>Data 1</li>
        <li>Data 2</li>
        <li>Data 3</li>
        <li>Data 4</li>
        <li>Data 5</li>
        <li>Data 6</li>
    </ul>

    <input type="text" id="txtItem" />
    <input type="button" id="addNewItem" value="Add New Item" />
    </form>
</body>
</html>

So, basically, when replacing hover with live, you see we use the mouseover and mouseout names for both events. Check the working demo which is available HERE. Hope this was a useful blog for you.

Hajan

Reference blog: http://codeasp.net/blogs/hajan/microsoft-net/1260/using-jquery-live-instead-of-jquery-hover-function
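As a footnote to the post above - a minimal sketch of the same handler written with delegate instead of live. delegate binds once to a stable container (the ul here) and fires for current and future li children; note also that from jQuery 1.7 onwards the on function supersedes both live and delegate:

// Same behavior with delegate: one handler on the <ul> covers
// existing and dynamically added <li> elements.
$("ul").delegate("li", "mouseover mouseout",
  function (event) {
      if (event.type == "mouseover") $(this).addClass("hover");
      else $(this).removeClass("hover");
  });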

    Read the article

  • Adding and accessing custom sections in your C# App.config

    - by deadlydog
    So I recently thought I’d try using the app.config file to specify some data for my application (such as URLs) rather than hard-coding it into my app, which would require a recompile and redeploy of my app if one of our URLs changed.  By using the app.config, a user can just open the .config file that sits beside their .exe file, edit the URLs right there, and re-run the app; no recompiling, no redeployment necessary.

I spent a good few hours fighting with the app.config and looking at examples on Google before I was able to get things to work properly.  Most of the examples I found showed you how to pull a value from the app.config if you knew the specific key of the element you wanted to retrieve, but it took me a while to find a way to simply loop through all elements in a section, so I thought I would share my solutions here.

Simple and Easy

The easiest way to use the app.config is to use the built-in types, such as NameValueSectionHandler.  For example, if we just wanted to add a list of database server urls to use in my app, we could do this in the app.config file like so:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="ConnectionManagerDatabaseServers" type="System.Configuration.NameValueSectionHandler" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <ConnectionManagerDatabaseServers>
    <add key="localhost" value="localhost" />
    <add key="Dev" value="Dev.MyDomain.local" />
    <add key="Test" value="Test.MyDomain.local" />
    <add key="Live" value="Prod.MyDomain.com" />
  </ConnectionManagerDatabaseServers>
</configuration>

And then you can access these values in code like so:

string devUrl = string.Empty;
var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
if (connectionManagerDatabaseServers != null)
{
    devUrl = connectionManagerDatabaseServers["Dev"].ToString();
}

Sometimes though you don’t know what the keys are going to be and you just want to grab all of the values in that ConnectionManagerDatabaseServers section.  In that case you can get them all like this:

// Grab the Environments listed in the App.config and add them to our list.
var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
if (connectionManagerDatabaseServers != null)
{
    foreach (var serverKey in connectionManagerDatabaseServers.AllKeys)
    {
        string serverValue = connectionManagerDatabaseServers.GetValues(serverKey).FirstOrDefault();
        AddDatabaseServer(serverValue);
    }
}

And here we just assume that the AddDatabaseServer() function adds the given string to some list of strings.  So this works great, but what about when we want to bring in more values than just a single string per object in the section?  (Technically you could use this approach to bring in two strings, by making the "key" the other string you want to store; for example, we could have stored the value of the Key as the user-friendly name of the url.)

More Advanced (and more complicated)

So if you want to bring in more information than a string or two per object in the section, then you can no longer simply use the built-in System.Configuration.NameValueSectionHandler type provided for us.  Instead you have to build your own types.  Here let’s assume that we again want to configure a set of addresses (i.e. urls), but we want to specify some extra info with them, such as the user-friendly name, whether they require SSL or not, and a list of security groups that are allowed to save changes made to these endpoints.

So let’s start by looking at the app.config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="ConnectionManagerDataSection" type="ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection, ConnectionManagerUpdater" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <ConnectionManagerDataSection>
    <ConnectionManagerEndpoints>
      <add name="Development" address="Dev.MyDomain.local" useSSL="false" />
      <add name="Test" address="Test.MyDomain.local" useSSL="true" />
      <add name="Live" address="Prod.MyDomain.com" useSSL="true" securityGroupsAllowedToSaveChanges="ConnectionManagerUsers" />
    </ConnectionManagerEndpoints>
  </ConnectionManagerDataSection>
</configuration>

The first thing to notice here is that my section is now using the type “ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection” (the fully qualified path to my new class I created) “, ConnectionManagerUpdater” (the name of the assembly my new class is in).  Next, you will also notice an extra layer down in the <ConnectionManagerDataSection>, which is the <ConnectionManagerEndpoints> element.  This is a new collection class that I created to hold each of the Endpoint entries that are defined.  Let’s look at that code now:

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConnectionManagerUpdater.Data.Configuration
{
    public class ConnectionManagerDataSection : ConfigurationSection
    {
        /// <summary>
        /// The name of this section in the app.config.
        /// </summary>
        public const string SectionName = "ConnectionManagerDataSection";

        private const string EndpointCollectionName = "ConnectionManagerEndpoints";

        [ConfigurationProperty(EndpointCollectionName)]
        [ConfigurationCollection(typeof(ConnectionManagerEndpointsCollection), AddItemName = "add")]
        public ConnectionManagerEndpointsCollection ConnectionManagerEndpoints
        { get { return (ConnectionManagerEndpointsCollection)base[EndpointCollectionName]; } }
    }

    public class ConnectionManagerEndpointsCollection : ConfigurationElementCollection
    {
        protected override ConfigurationElement CreateNewElement()
        {
            return new ConnectionManagerEndpointElement();
        }

        protected override object GetElementKey(ConfigurationElement element)
        {
            return ((ConnectionManagerEndpointElement)element).Name;
        }
    }

    public class ConnectionManagerEndpointElement : ConfigurationElement
    {
        [ConfigurationProperty("name", IsRequired = true)]
        public string Name
        {
            get { return (string)this["name"]; }
            set { this["name"] = value; }
        }

        [ConfigurationProperty("address", IsRequired = true)]
        public string Address
        {
            get { return (string)this["address"]; }
            set { this["address"] = value; }
        }

        [ConfigurationProperty("useSSL", IsRequired = false, DefaultValue = false)]
        public bool UseSSL
        {
            get { return (bool)this["useSSL"]; }
            set { this["useSSL"] = value; }
        }

        [ConfigurationProperty("securityGroupsAllowedToSaveChanges", IsRequired = false)]
        public string SecurityGroupsAllowedToSaveChanges
        {
            get { return (string)this["securityGroupsAllowedToSaveChanges"]; }
            set { this["securityGroupsAllowedToSaveChanges"] = value; }
        }
    }
}

So here the first class we declare is the one that appears in the <configSections> element of the app.config.  It is ConnectionManagerDataSection and it inherits from the necessary System.Configuration.ConfigurationSection class.  This class just has one property (other than the expected section name) that basically just says I have a collection property, which is actually a ConnectionManagerEndpointsCollection, which is the next class defined.  The ConnectionManagerEndpointsCollection class inherits from ConfigurationElementCollection and overrides the required members.  The first tells it what type of element to create when adding a new one (in our case a ConnectionManagerEndpointElement), and the second is a function specifying which property on our ConnectionManagerEndpointElement class is the unique key, which I’ve specified to be the Name field.

The last class defined is the actual meat of our elements.  It inherits from ConfigurationElement and specifies the properties of the element (which can then be set in the xml of the App.config).  The “ConfigurationProperty” attribute on each of the properties tells what we expect the name of the property to correspond to in each element in the app.config, as well as some additional information such as whether that property is required and what its default value should be.

Finally, the code to actually access these values would look like this:

// Grab the Environments listed in the App.config and add them to our list.
var connectionManagerDataSection = ConfigurationManager.GetSection(ConnectionManagerDataSection.SectionName) as ConnectionManagerDataSection;
if (connectionManagerDataSection != null)
{
    foreach (ConnectionManagerEndpointElement endpointElement in connectionManagerDataSection.ConnectionManagerEndpoints)
    {
        var endpoint = new ConnectionManagerEndpoint()
        {
            Name = endpointElement.Name,
            ServerInfo = new ConnectionManagerServerInfo()
            {
                Address = endpointElement.Address,
                UseSSL = endpointElement.UseSSL,
                SecurityGroupsAllowedToSaveChanges = endpointElement.SecurityGroupsAllowedToSaveChanges
                    .Split(',').Where(e => !string.IsNullOrWhiteSpace(e)).ToList()
            }
        };
        AddEndpoint(endpoint);
    }
}

This looks very similar to what we had before in the “simple” example.  The main points of interest are that we cast the section as ConnectionManagerDataSection (which is the class we defined for our section) and then iterate over the endpoints collection using the ConnectionManagerEndpoints property we created in the ConnectionManagerDataSection class.

Also, some other helpful resources around using app.config that I found (and for parts that I didn’t really explain in this article) are:

How do you use sections in C# 4.0 app.config? (Stack Overflow) <== Shows how to use Section Groups as well, which is something that I did not cover here, but might be of interest to you.
How to: Create Custom Configuration Sections Using ConfigurationSection (MSDN)
ConfigurationSection Class (MSDN)
ConfigurationCollectionAttribute Class (MSDN)
ConfigurationElementCollection Class (MSDN)

I hope you find this helpful.  Feel free to leave a comment.  Happy Coding!

    Read the article

  • My sound stopped working today, how can I fix it?

    - by Oli
    This seems to be a problem with pulseaudio. I was logged in over VNC on my phone and started playing a video; this caused X to crash (as sometimes happens). I restarted and suddenly the sound doesn't work. I have an Intel HDA/Realtek ALC889: 00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller - alsamixer is detecting this just fine. PulseAudio doesn't detect this alsa device, so it is using auto_null as the default sink (logs below). When I properly kill PulseAudio (tell it not to auto-start), direct ALSA communication with the sound card works just fine. speaker-test, for example, works. So the hardware and ALSA layers are fine IMO. In the logs, it seems that the card might be "busy" but I really don't know how or why it would be now (and never before). Is there an ALSA lock file somewhere that is still there because of my crash? I just ran sudo fuser /dev/snd/* and saw this:

oli@bert:~$ sudo fuser /dev/snd/*
/dev/snd/controlC0:  1884
/dev/snd/pcmC0D0c:   1884m
/dev/snd/timer:      1884

A look at the process list (ps aux | grep 1884) tells me process 1884 is arecord -c 1 -f S16_LE -r 8000 -t raw. No idea what this is or why it's running. When I try and kill arecord (as root), it just respawns and rebinds on the hardware. I'm in a very annoying situation where I don't know what is going on and don't know how to find out. I'm open to all suggestions to get this working again. Fire away. And here's what I get when I stop PA auto-loading, kill it and then start it with -vvvvv.

oli@bert:~$ pulseaudio -vvvvv
I: main.c: setrlimit(RLIMIT_NICE, (31, 31)) failed: Operation not permitted
I: main.c: setrlimit(RLIMIT_RTPRIO, (9, 9)) failed: Operation not permitted
D: core-rtclock.c: Timer slack is set to 50 us.
D: core-util.c: RealtimeKit worked.
I: core-util.c: Successfully gained nice level -11.
I: main.c: This is PulseAudio 0.9.21-63-gd3efa-dirty
D: main.c: Compilation host: x86_64-pc-linux-gnu
D: main.c: Compilation CFLAGS: -g -O2 -g -Wall -O3 -Wall -W -Wextra -pipe -Wno-long-long -Winline -Wvla -Wno-overlength-strings -Wunsafe-loop-optimizations -Wundef -Wformat=2 -Wlogical-op -Wsign-compare -Wformat-security -Wmissing-include-dirs -Wformat-nonliteral -Wold-style-definition -Wpointer-arith -Winit-self -Wdeclaration-after-statement -Wfloat-equal -Wmissing-prototypes -Wstrict-prototypes -Wredundant-decls -Wmissing-declarations -Wmissing-noreturn -Wshadow -Wendif-labels -Wcast-align -Wstrict-aliasing=2 -Wwrite-strings -Wno-unused-parameter -ffast-math -Wp,-D_FORTIFY_SOURCE=2 -fno-common -fdiagnostics-show-option
D: main.c: Running on host: Linux x86_64 2.6.38-rc3 #1 SMP Tue Feb 1 10:53:04 GMT 2011
D: main.c: Found 8 CPUs.
I: main.c: Page size is 4096 bytes
D: main.c: Compiled with Valgrind support: no
D: main.c: Running in valgrind mode: no
D: main.c: Running in VM: no
D: main.c: Optimised build: yes
D: main.c: All asserts enabled.
I: main.c: Machine ID is 8310740c4729ef474fe5ecec4bbf5a6b.
I: main.c: Session ID is 8310740c4729ef474fe5ecec4bbf5a6b-1297338553.571075-1050119523.
I: main.c: Using runtime directory /home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-runtime.
I: main.c: Using state directory /home/oli/.pulse.
I: main.c: Using modules directory /usr/lib/pulse-0.9.21/modules.
I: main.c: Running in system mode: no
I: main.c: Fresh high-resolution timers available! Enjoy ol' chap!
I: cpu-x86.c: CPU flags: CMOV MMX SSE SSE2 SSE3 SSSE3 SSE4_1 SSE4_2
I: svolume_mmx.c: Initialising MMX optimized functions.
I: remap_mmx.c: Initialising MMX optimized remappers.
I: svolume_sse.c: Initialising SSE2 optimized functions. I: remap_sse.c: Initialising SSE2 optimized remappers. I: sconv_sse.c: Initialising SSE2 optimized conversions. D: memblock.c: Using shared memory pool with 1024 slots of size 64.0 KiB each, total size is 64.0 MiB, maximum usable slot size is 65472 D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes.tdb' I: module-device-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes'. I: module.c: Loaded "module-device-restore" (index: #0; argument: ""). D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes.tdb' I: module-stream-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes'. I: module.c: Loaded "module-stream-restore" (index: #1; argument: ""). D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database.tdb' I: module-card-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database'. I: module.c: Loaded "module-card-restore" (index: #2; argument: ""). I: module.c: Loaded "module-augment-properties" (index: #3; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes I: module-udev-detect.c: Found 1 cards. I: module.c: Loaded "module-udev-detect" (index: #4; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-bluetooth-discover.so': success D: dbus-util.c: Successfully connected to D-Bus system bus ba7c9a1f90b3d49d930bca2100000015 as :1.62 D: bluetooth-util.c: dbus: interface=org.freedesktop.DBus, path=/org/freedesktop/DBus, member=NameAcquired D: bluetooth-util.c: Bluetooth daemon is apparently not available. I: module.c: Loaded "module-bluetooth-discover" (index: #5; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-esound-protocol-unix.so': success I: module.c: Loaded "module-esound-protocol-unix" (index: #6; argument: ""). I: module.c: Loaded "module-native-protocol-unix" (index: #7; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-gconf.so': success I: module.c: Loaded "module-gconf" (index: #8; argument: ""). I: module-default-device-restore.c: Saved default sink 'auto_null' not existant, not restoring default sink setting. I: module-default-device-restore.c: Saved default source 'auto_null.monitor' not existant, not restoring default source setting. I: module.c: Loaded "module-default-device-restore" (index: #9; argument: ""). I: module.c: Loaded "module-rescue-streams" (index: #10; argument: ""). D: module-always-sink.c: Autoloading null-sink as no other sinks detected. I: sink.c: Created sink 0 "auto_null" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center I: sink.c: device.description = "Dummy Output" I: sink.c: device.class = "abstract" I: sink.c: device.icon_name = "audio-card" D: core-subscribe.c: Dropped redundant event due to change event. 
I: source.c: Created source 0 "auto_null.monitor" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center I: source.c: device.description = "Monitor of Dummy Output" I: source.c: device.class = "monitor" I: source.c: device.icon_name = "audio-input-microphone" D: module-null-sink.c: Thread starting up I: module.c: Loaded "module-null-sink" (index: #11; argument: "sink_name=auto_null sink_properties='device.description="Dummy Output"'"). I: module.c: Loaded "module-always-sink" (index: #12; argument: ""). I: module.c: Loaded "module-intended-roles" (index: #13; argument: ""). D: module-suspend-on-idle.c: Sink auto_null becomes idle, timeout in 5 seconds. I: module.c: Loaded "module-suspend-on-idle" (index: #14; argument: ""). I: client.c: Created 0 "ConsoleKit Session /org/freedesktop/ConsoleKit/Session1" D: module-console-kit.c: Added new session /org/freedesktop/ConsoleKit/Session1 I: module.c: Loaded "module-console-kit" (index: #15; argument: ""). I: module.c: Loaded "module-position-event-sounds" (index: #16; argument: ""). D: dbus-util.c: Successfully connected to D-Bus session bus efbffc6788fad56cfd64d40c00000018 as :1.182 D: main.c: Got org.pulseaudio.Server! I: main.c: Daemon startup complete. I: client.c: Created 1 "Native client (UNIX socket client)" I: client.c: Created 2 "Native client (UNIX socket client)" D: protocol-native.c: Protocol version: remote 16, local 16 I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1 D: protocol-native.c: SHM possible: yes D: protocol-native.c: Negotiated SHM: yes D: protocol-native.c: Protocol version: remote 16, local 16 I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1 D: protocol-native.c: SHM possible: yes D: protocol-native.c: Negotiated SHM: yes D: module-augment-properties.c: Looking for .desktop file for gnome-volume-control-applet D: module-augment-properties.c: Looking for .desktop file for gnome-settings-daemon D: core-subscribe.c: Dropped redundant event due to change event. I: module-suspend-on-idle.c: Sink auto_null idle for too long, suspending ... D: sink.c: Suspend cause of sink auto_null is 0x0004, suspending Note the one section that seems to find the hardware but says it's busy (no idea if this is relevant). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes I: module-udev-detect.c: Found 1 cards.

    Read the article

  • Remote Desktop Services Gateway Issue

    - by AVandelay05
    Alright fellow techies, here's the rundown. I have installed Server 2008 R2 Remote Desktop Services on a VM in my network. I installed the following RD role services: RD Session Host, Licensing, Connection Broker, Gateway, Web Access. When I set things up originally, the gateway server and RDWeb worked as they should locally. After getting things running locally (remoteserver.domainname.local) I wanted to test things externally. From the outside, I couldn't get things running (meaning I could connect to RD Web Access externally, but when I tried to run an app I would get the message "can't connect/find computer").

Here's my setup for external access: The VM has every RD Services role service installed on it, meaning it acts as gateway, RD Web Access, session host, licensing, the whole bit. I made a self-signed certificate on the gateway server (gateway.domainname.net is the cert name). Internally, I have a secondary forward-lookup zone called domainname.net with an A record gateway pointing to the local IP of the gateway server. On our public DNS (domainname.net) I have an A record gateway. This is to access the RDWeb externally. In IIS I have the following authentication settings - RDWeb: all disabled except for anonymous authentication; Rpc: all disabled except for basic and windows; RpcWithCert: all disabled except for windows authentication. I have the necessary web access config in our sonicwall tz210 (https and rdp, external ip pointing to local ip of rds server). RAP and CAP have the correct user and computer groups, authentication, and allowed devices.

After all of this, here's what happens accessing externally. I can login correctly to RD Web Access (I've tried a bogus login; I can't login to it, so that's working properly). I see the apps for use. I click on an app, click connect, the credential window opens, I put in the correct user creds, it tries to connect to the gateway server, but then the cred window comes back into view. I tried to reach a limit of failed logins, but never reached one, haha. So from the same external client machine I try to connect to the gateway through a Remote Desktop connection. I put in the correct gateway settings in the RD window, try to connect and get the same results as I did in RDWeb access.

I checked the event logs on the RD Services machine and saw the following event IDs around the time I tried to login externally:

ID 6037 with the message "The program svchost.exe, with the assigned process ID 2168, could not authenticate locally by using the target name host/gateway.domainname.net. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name. Try a different target name."

ID 10 RADWebAccess "RD Web Access was unable to access gateway.domainname.net, which is the server that is specified as running the RemoteApp and Desktop Connection Management service. Ensure that the computer account of the RD Web Access server is a member of the TS Web Access Computers security group on gateway.domainname.net"

ID 4625 "An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: Administrator Account Domain: gateway.domainname.net Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d Sub Status: 0xc000006a Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: USER-LAPTOP Source Network Address: External IP Source Port: 63125 Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols."

I don't think the VM has a null SID. The SID of the VM and its physical host have different SIDs. I can access the blank page for rpc externally using the external gateway name. It seems like authentication is the problem. Also, is it a problem that the external name of the gateway server doesn't match the local name? The external name (which the cert is based on) is gateway.domainname.net and the internal name is remoteserver.domainname.local. That's the only thing I can think of that would be the problem, but the external name has to be different from the local one, right? Internally, I ping gateway.domainname.net and it gives me the correct local IP of the server. Now, there isn't an actual computer name in AD, but I don't know how I would achieve that? I hope I've been clear... any help would be appreciated. I think I'm close to achieving this. :)

    Read the article

  • How-to read data from selected tree node

    - by Frank Nimphius
    By default, the SelectionListener property of an ADF bound tree points to the makeCurrent method of the FacesCtrlHierBinding class in ADF to synchronize the current row in the ADF binding layer with the selected tree node. To customize the selection behavior, or just to read the selected node value in Java, you override the default configuration with an EL string pointing to a managed bean method property. In the following I show how you change the selection listener while preserving the default ADF selection behavior. To change the SelectionListener, select the tree component in the Structure Window and open the Oracle JDeveloper Property Inspector. From the context menu, select the Edit option to create a new listener method in a new or an existing managed bean. For this example, I created a new managed bean.
On tree node selection, the managed bean code below prints the selected tree node value(s):

import java.util.Iterator;
import java.util.List;
import javax.el.ELContext;
import javax.el.ExpressionFactory;
import javax.el.MethodExpression;
import javax.faces.application.Application;
import javax.faces.context.FacesContext;
import oracle.adf.view.rich.component.rich.data.RichTree;
import oracle.jbo.Row;
import oracle.jbo.uicli.binding.JUCtrlHierBinding;
import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding;
import org.apache.myfaces.trinidad.event.SelectionEvent;
import org.apache.myfaces.trinidad.model.CollectionModel;
import org.apache.myfaces.trinidad.model.RowKeySet;
import org.apache.myfaces.trinidad.model.TreeModel;

public class TreeSampleBean {
    public TreeSampleBean() {}

    public void onTreeSelect(SelectionEvent selectionEvent) {
        //original selection listener set by ADF
        //#{bindings.allDepartments.treeModel.makeCurrent}
        String adfSelectionListener = "#{bindings.allDepartments.treeModel.makeCurrent}";

        //make sure the default selection listener functionality is
        //preserved. you don't need to do this for multi select trees
        //as the ADF binding only supports single current row selection

        /* START PRESERVE DEFAULT ADF SELECT BEHAVIOR */
        FacesContext fctx = FacesContext.getCurrentInstance();
        Application application = fctx.getApplication();
        ELContext elCtx = fctx.getELContext();
        ExpressionFactory exprFactory = application.getExpressionFactory();

        MethodExpression me = null;
        me = exprFactory.createMethodExpression(elCtx, adfSelectionListener,
                                                Object.class, new Class[]{SelectionEvent.class});
        me.invoke(elCtx, new Object[] { selectionEvent });
        /* END PRESERVE DEFAULT ADF SELECT BEHAVIOR */

        RichTree tree = (RichTree)selectionEvent.getSource();
        TreeModel model = (TreeModel)tree.getValue();

        //get selected nodes
        RowKeySet rowKeySet = selectionEvent.getAddedSet();
        Iterator rksIterator = rowKeySet.iterator();

        //for single select configurations, this is only called once
        while (rksIterator.hasNext()) {
            List key = (List)rksIterator.next();
            JUCtrlHierBinding treeBinding = null;
            CollectionModel collectionModel = (CollectionModel)tree.getValue();
            treeBinding = (JUCtrlHierBinding)collectionModel.getWrappedData();
            JUCtrlHierNodeBinding nodeBinding = null;
            nodeBinding = treeBinding.findNodeByKeyPath(key);
            Row rw = nodeBinding.getRow();

            //print first row attribute. Note that in a tree you have to
            //determine the node type if you want to select node attributes
            //by name and not index
            String rowType = rw.getStructureDef().getDefName();
            if (rowType.equalsIgnoreCase("DepartmentsView")) {
                System.out.println("This row is a department: " + rw.getAttribute("DepartmentId"));
            }
            else if (rowType.equalsIgnoreCase("EmployeesView")) {
                System.out.println("This row is an employee: " + rw.getAttribute("EmployeeId"));
            }
            else {
                System.out.println("Huh????");
            }
            // ... do more useful stuff here
        }
    }
}

Download JDeveloper 11.1.2.1 Sample Workspace

    Read the article

  • Gathering statistics for an Oracle WebCenter Content Database

    - by Nicolas Montoya
    Have you ever heard: "My Oracle WebCenter Content instance is running slow. I checked the memory and CPU usage of the application server and it has plenty of resources. What could be going wrong?"

An Oracle WebCenter Content instance runs on an application server and relies on a database server on the back end. If your application server tier is running fine, chances are that your database server tier may host the root of the problem. While many things could cause performance problems, on active Enterprise Content Management systems, keeping database statistics updated is extremely important.

The Oracle Database has a set of built-in optimizer utilities that can help make database queries more efficient. It is strongly recommended to update or re-create the statistics about the physical characteristics of a table and the associated indexes in order to maximize the efficiency of optimizers. These physical characteristics include:

Number of records
Number of pages
Average record length

The frequency with which you need to update statistics depends on how quickly the data is changing. Typically, statistics should be updated when the number of new items since the last update is greater than ten percent of the number of items when the statistics were last updated. If a large number of documents are being added or removed from the system, a post step should be added to gather statistics upon completion of this massive data change. In some cases, you may need to collect statistics in the middle of the data processing to expedite its execution. These processes include but are not limited to: data migration, bootstrapping of a new system, records management disposition processing (typically at the end of the calendar year), etc. A DOCUMENTS table with ten million rows will often generate a very different plan than a table with just a thousand.

A quick check of the statistics for the WebCenter Content (WCC) Database could be performed via the query below:

SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
       TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
FROM DBA_TABLES
WHERE TABLE_NAME = 'DOCUMENTS';

OWNER      TABLE_NAME   NUM_ROWS  BLOCKS  AVG_ROW_LEN  TO_CHAR(LAST_ANALYZ
---------- ----------- --------- ------- ------------ -------------------
ATEAM_OCS  DOCUMENTS        4172      46           61  04/06/2012 11:17:51

This output will return not only the date when the WCC table DOCUMENTS was last analyzed, but also the <DATABASE SCHEMA OWNER> for this table in the form of <PREFIX>_OCS. This database username can later be used to check on other objects owned by the WCC <DATABASE SCHEMA OWNER>, as shown below:

SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
       TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
FROM DBA_TABLES
WHERE OWNER = 'ATEAM_OCS'
ORDER BY NUM_ROWS ASC;
...
OWNER      TABLE_NAME            NUM_ROWS  BLOCKS  AVG_ROW_LEN  TO_CHAR(LAST_ANALYZ
---------- -------------------- --------- ------- ------------ -------------------
ATEAM_OCS  REVISIONS                 2051      46          141  04/09/2012 22:00:22
ATEAM_OCS  DOCUMENTS                 4172      46           61  04/06/2012 11:17:51
ATEAM_OCS  ARCHIVEHISTORY            4908     244          218  04/06/2012 11:17:49
ATEAM_OCS  DOCUMENTHISTORY           5865     110           72  04/06/2012 11:17:50
ATEAM_OCS  SCHEDULEDJOBSHISTORY     10131     244          131  04/06/2012 11:17:54
ATEAM_OCS  SCTACCESSLOG             10204     496          268  04/06/2012 11:17:54
...

The Oracle Database allows statistics of many different kinds to be collected as an aid to improving performance. The DBMS_STATS package is concerned with optimizer statistics only. The database sets automatic statistics collection of this kind on by default; the DBMS_STATS package is intended for only specialized cases.

The following subprograms gather certain classes of optimizer statistics:

GATHER_DATABASE_STATS Procedures
GATHER_DICTIONARY_STATS Procedure
GATHER_FIXED_OBJECTS_STATS Procedure
GATHER_INDEX_STATS Procedure
GATHER_SCHEMA_STATS Procedures
GATHER_SYSTEM_STATS Procedure
GATHER_TABLE_STATS Procedure

The DBMS_STATS.GATHER_SCHEMA_STATS PL/SQL procedure gathers statistics for all objects in a schema:

DBMS_STATS.GATHER_SCHEMA_STATS (
    ownname          VARCHAR2,
    estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
    block_sample     BOOLEAN  DEFAULT FALSE,
    method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
    degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
    granularity      VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
    cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
    stattab          VARCHAR2 DEFAULT NULL,
    statid           VARCHAR2 DEFAULT NULL,
    options          VARCHAR2 DEFAULT 'GATHER',
    objlist          OUT      ObjectTab,
    statown          VARCHAR2 DEFAULT NULL,
    no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
    force            BOOLEAN  DEFAULT FALSE);

There are several values for the OPTIONS parameter that we need to know about:

GATHER reanalyzes the whole schema
GATHER EMPTY only analyzes tables that have no existing statistics
GATHER STALE only reanalyzes tables with more than 10 percent modifications (inserts, updates, deletes)
GATHER AUTO will reanalyze objects that currently have no statistics and objects with stale statistics. Using GATHER AUTO is like combining GATHER STALE and GATHER EMPTY.

Example:

exec dbms_stats.gather_schema_stats( -
   ownname          => '<PREFIX>_OCS', -
   options          => 'GATHER AUTO' -
);

    Read the article

  • Using LINQ to Twitter OAuth with Windows 8

    - by Joe Mayo
    In previous posts, I explained how to use LINQ to Twitter with Windows 8, but the example was a Twitter Search, which didn’t require authentication. Much of the Twitter API requires authentication, so this post will explain how you can perform OAuth authentication with LINQ to Twitter in a Windows 8 Metro-style application. Getting Started I have earlier posts on how to create a Windows 8 app and add pages, so I’ll assume it isn’t necessary to repeat here. One difference is that I’m using Visual Studio 2012 RC and some of the terminology and/or library code might be slightly different.  Here are steps to get started: Create a new Windows metro style app, selecting the Blank App project template. Create a new Basic Page and name it OAuth.xaml.  Note: You’ll receive a prompt window for adding files and you should click Yes because those files are necessary for this demo. Add a new Basic Page named TweetPage.xaml. Open App.xaml.cs and change !rootFrame.Navigate(typeof(MainPage)) to !rootFrame.Navigate(typeof(TweetPage)). Now that the project is set up you’ll see the reason why authentication is required by setting up the TweetPage. Setting Up to Tweet a Status In this section, I’ll show you how to set up the XAML and code-behind for a tweet.  The tweet logic will check to see if the user is authenticated before performing the tweet. To tweet, I put a TextBox and Button on the XAML page. The following code omits most of the page, concentrating primarily on the elements of interest in this post: <StackPanel Grid.Row="1"> <TextBox Name="TweetTextBox" Margin="15" /> <Button Name="TweetButton" Content="Tweet" Click="TweetButton_Click" Margin="15,0" /> </StackPanel> Given the UI above, the user types the message they want to tweet, and taps Tweet. This invokes TweetButton_Click, which checks to see if the user is authenticated.  If the user is not authenticated, the app navigates to the OAuth page.  If they are authenticated, LINQ to Twitter does an UpdateStatus to post the user’s tweet.  Here’s the TweetButton_Click implementation: void TweetButton_Click(object sender, RoutedEventArgs e) { PinAuthorizer auth = null; if (SuspensionManager.SessionState.ContainsKey("Authorizer")) { auth = SuspensionManager.SessionState["Authorizer"] as PinAuthorizer; } if (auth == null || !auth.IsAuthorized) { Frame.Navigate(typeof(OAuthPage)); return; } var twitterCtx = new TwitterContext(auth); Status tweet = twitterCtx.UpdateStatus(TweetTextBox.Text); new MessageDialog(tweet.Text, "Successful Tweet").ShowAsync(); } For authentication, this app uses PinAuthorizer, one of several authorizers available in the LINQ to Twitter library. I’ll explain how PinAuthorizer works in the next section. What’s important here is that LINQ to Twitter needs an authorizer to post a Tweet. The code above checks to see if a valid authorizer is available. To do this, it uses the SuspensionManager class, which is part of the code generated earlier when creating OAuthPage.xaml. The SessionState property is a Dictionary<string, object> and I’m using the Authorizer key to store the PinAuthorizer.  If the user previously authorized during this session, the code reads the PinAuthorizer instance from SessionState and assigns it to the auth variable. If the user is authorized, auth would not be null and IsAuthorized would be true. Otherwise, the app navigates the user to OAuthPage.xaml, which I’ll discuss in more depth in the next section. When the user is authorized, the code passes the authorizer, auth, to the TwitterContext constructor. 
    LINQ to Twitter uses the auth instance to build OAuth signatures for each interaction with Twitter. You don't need to write any more code to make this happen. The code above receives the tweet just posted in the Status instance, tweet, and displays a message with the text to confirm success to the user.

    You can pull the PinAuthorizer instance from SessionState, instantiate your TwitterContext, and use it as you need. Just remember to make sure you have a valid authorizer, as in the code above. As shown earlier, the code navigates to OAuthPage.xaml when a valid authorizer isn't available. The next section shows how to perform the authorization upon arrival at OAuthPage.xaml.

    Doing the OAuth Dance

    This section shows how to authenticate with LINQ to Twitter's built-in OAuth support. From the user's perspective, they are navigated to the Twitter authentication page, add credentials, are navigated to a PIN number page, and then enter that PIN in the Windows 8 application. The following XAML shows the relevant elements that the user will interact with during this process:

        <StackPanel Grid.Row="2">
            <WebView x:Name="OAuthWebBrowser" HorizontalAlignment="Left" Height="400" Margin="15" VerticalAlignment="Top" Width="700" />
            <TextBlock Text="Please perform OAuth process (above), enter Pin (below) when ready, and tap Authenticate:" Margin="15,15,15,5" />
            <TextBox Name="PinTextBox" Margin="15,0,15,15" Width="432" HorizontalAlignment="Left" IsEnabled="False" />
            <Button Name="AuthenticatePinButton" Content="Authenticate" Margin="15" IsEnabled="False" Click="AuthenticatePinButton_Click" />
        </StackPanel>

    The WebView in the code above is what lets the user see the Twitter authentication page. The TextBox is for entering the PIN, and the Button invokes code that takes the PIN and allows LINQ to Twitter to complete the authentication process. As you can see, there are several steps to OAuth authentication, but LINQ to Twitter tries to minimize the amount of code you have to write. The two important parts are the code that starts the authentication process and the code that completes it. The following code, from OAuthPage.xaml.cs, shows a couple of event handlers that are instrumental in making this process happen:

        public OAuthPage()
        {
            this.InitializeComponent();

            this.Loaded += OAuthPage_Loaded;
            OAuthWebBrowser.LoadCompleted += OAuthWebBrowser_LoadCompleted;
        }

    The OAuthWebBrowser_LoadCompleted event handler enables the UI controls when the browser is done loading – notice that the TextBox and Button in the previous XAML have their IsEnabled attributes set to False. When the Page.Loaded event is raised, the OAuthPage_Loaded handler starts the OAuth process, shown here:

        void OAuthPage_Loaded(object sender, RoutedEventArgs e)
        {
            auth = new PinAuthorizer
            {
                Credentials = new InMemoryCredentials
                {
                    ConsumerKey = "",
                    ConsumerSecret = ""
                },
                UseCompression = true,
                GoToTwitterAuthorization = pageLink =>
                    Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
                        () => OAuthWebBrowser.Navigate(new Uri(pageLink, UriKind.Absolute)))
            };

            auth.BeginAuthorize(resp =>
                Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    switch (resp.Status)
                    {
                        case TwitterErrorStatus.Success:
                            break;
                        case TwitterErrorStatus.RequestProcessingException:
                        case TwitterErrorStatus.TwitterApiError:
                            new MessageDialog(resp.Error.ToString(), resp.Message).ShowAsync();
                            break;
                    }
                }));
        }

    The code above instantiates the PinAuthorizer, auth (a field of this class), and assigns keys to its Credentials property.
    These are credentials that come from registering an application with Twitter, explained in the LINQ to Twitter documentation, Securing Your Applications. Notice how I use Dispatcher.RunAsync to marshal the web browser navigation back onto the UI thread. Internally, LINQ to Twitter invokes the lambda expression assigned to GoToTwitterAuthorization when starting the OAuth process. In this case, we want the WebView control to navigate to the Twitter authentication page, which is defined with a default URL in LINQ to Twitter and passed to the GoToTwitterAuthorization lambda as pageLink.

    Then you start the authorization process by calling BeginAuthorize. This starts the OAuth dance, running asynchronously. LINQ to Twitter invokes the callback assigned to the BeginAuthorize parameter, allowing you to take whatever action you need based on the Status of the response, resp.

    As mentioned earlier, this is where the user performs the authentication process, enters the PIN, and taps Authenticate. The handler for Authenticate completes the process and saves the authorizer for subsequent use by the application, as shown below:

        void AuthenticatePinButton_Click(object sender, RoutedEventArgs e)
        {
            auth.CompleteAuthorize(
                PinTextBox.Text,
                completeResp => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                {
                    switch (completeResp.Status)
                    {
                        case TwitterErrorStatus.Success:
                            SuspensionManager.SessionState["Authorizer"] = auth;
                            Frame.Navigate(typeof(TweetPage));
                            break;
                        case TwitterErrorStatus.RequestProcessingException:
                        case TwitterErrorStatus.TwitterApiError:
                            new MessageDialog(completeResp.Error.ToString(), completeResp.Message).ShowAsync();
                            break;
                    }
                }));
        }

    The PinAuthorizer CompleteAuthorize method takes two parameters: the PIN and a callback. The PIN is what the user entered in the TextBox prior to clicking the Authenticate button that invoked this method. The callback handles the response from completing the OAuth process. The completeResp parameter holds information about the results of the operation, indicated by a Status property of type TwitterErrorStatus. On success, the code assigns auth to SessionState. You might remember SessionState from the earlier description of TweetPage – this is where the valid authorizer comes from. After saving the authorizer, the code navigates the user back to TweetPage, where they can type in a message, tap the Tweet button, and observe that they have successfully tweeted.

    Summary

    You've seen how to get started with using LINQ to Twitter in a Metro-style application. The generated code contains a SuspensionManager class with a way to manage information across multiple pages via its SessionState property. You also saw how LINQ to Twitter performs authorization in two steps: starting the process and completing it when the user provides a PIN number. Remember to marshal callback threads back onto the UI thread – you saw earlier how to use Dispatcher.RunAsync to accomplish this. There were a few steps in the process, but LINQ to Twitter does minimize the amount of code you need to write to make it happen.

    You can download the MetroOAuthDemo.zip sample on the LINQ to Twitter Samples Page.

    @JoeMayo

    Read the article

  • Reading OpenDocument spreadsheets using C#

    - by DigiMortal
    Excel with its file formats is not the only spreadsheet application that is widely used. There are also users on Linux and Macs, and often they are using OpenOffice and other open-source office packages that use ODF instead of OpenXML. In this post I will show you how to read an Open Document spreadsheet in C#.

    Importer as example

    My previous post about importers showed you how to build flexible importer support into your web application. This post introduces a practical example: one of my importers. Of course, sensitive code is omitted. We start with the ODS importer class and add new methods as we go.

        public class OdsImporter : ImporterBase
        {
            public OdsImporter()
            {
            }

            public override string[] SupportedFileExtensions
            {
                get { return new[] { "ods" }; }
            }

            public override ImportResult Import(Stream fileStream, long companyId, short year)
            {
                string contentXml = GetContentXml(fileStream);

                var result = new ImportResult();
                var doc = XDocument.Parse(contentXml);

                var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);

                foreach (var row in rows)
                {
                    ImportRow(row, companyId, year, result);
                }

                return result;
            }
        }

    The class given here just extends the base class for importers (the previous post uses an interface, but as I already told you there, you move to an abstract base class when writing code for real projects). The Import method reads data from the *.ods file, parses it (it is XML), finds all data rows, and imports the data. As you can see, the first row is skipped; this is because the first row on my sheet is always the header row.

    Reading the ODS file

    Our Import method starts by getting the XML out of the *.ods file. ODS files, like OpenXml files, are zipped containers that contain different files. We need content.xml, as all the data is kept there. To get the contents of the file, we use the SharpZipLib library to read the uploaded file as a *.zip file.

        private static string GetContentXml(Stream fileStream)
        {
            var contentXml = "";

            using (var zipInputStream = new ZipInputStream(fileStream))
            {
                ZipEntry contentEntry = null;
                while ((contentEntry = zipInputStream.GetNextEntry()) != null)
                {
                    if (!contentEntry.IsFile)
                        continue;
                    if (contentEntry.Name.ToLower() == "content.xml")
                        break;
                }

                // Guard against archives without content.xml: contentEntry is
                // null here when the loop ran out of entries.
                if (contentEntry == null || contentEntry.Name.ToLower() != "content.xml")
                {
                    throw new Exception("Cannot find content.xml");
                }

                var bytesResult = new byte[] { };
                var bytes = new byte[2000];
                var i = 0;

                while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)
                {
                    var arrayLength = bytesResult.Length;
                    Array.Resize<byte>(ref bytesResult, arrayLength + i);
                    Array.Copy(bytes, 0, bytesResult, arrayLength, i);
                }
                contentXml = Encoding.UTF8.GetString(bytesResult);
            }
            return contentXml;
        }

    When content.xml is found, we stop browsing the archive, read the file into memory, and return it as a UTF-8 string.
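    As a side note, the manual Array.Resize loop above works but reallocates the buffer on every read. If your project can use .NET 4.5 or later, System.IO.Compression offers a shorter alternative; the following is a minimal sketch of the same idea (my own variant, not from the original importer), assuming the entry is stored under the name content.xml:

        // Hypothetical alternative using System.IO.Compression (.NET 4.5+).
        // Requires a reference to System.IO.Compression.dll.
        private static string GetContentXmlViaZipArchive(Stream fileStream)
        {
            using (var archive = new ZipArchive(fileStream, ZipArchiveMode.Read))
            {
                var entry = archive.GetEntry("content.xml");
                if (entry == null)
                    throw new Exception("Cannot find content.xml");

                using (var reader = new StreamReader(entry.Open(), Encoding.UTF8))
                {
                    return reader.ReadToEnd();
                }
            }
        }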
    Importing rows

    Our last task is to import the rows. We use a special method for this, as there is a trick to handle: to keep files smaller, the cell count per row is not always the same. If there is more than one empty cell in a row, one after another, ODS keeps only one cell element for the whole run of sequential empty cells. This cell has an attribute called number-columns-repeated, and its value is set to the number of sequential empty cells. This is why we use two indexes for the cells collection. (Note: the signature below includes the companyId and year parameters that Import passes in; the original listing omitted them along with the sensitive code.)

        private void ImportRow(XElement row, long companyId, short year, ImportResult result)
        {
            var cells = (from c in row.Descendants()
                         where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"
                         select c).ToList();

            var dto = new DataDto();

            var count = cells.Count;
            var j = -1;

            for (var i = 0; i < count; i++)
            {
                j++;
                var cell = cells[i];
                var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");
                if (attr != null)
                {
                    var numToSkip = 0;
                    if (int.TryParse(attr.Value, out numToSkip))
                    {
                        j += numToSkip - 1;
                    }
                }

                if (i > 30) break;

                if (j == 0)
                {
                    dto.SomeProperty = cells[i].Value;
                }
                if (j == 1)
                {
                    dto.SomeOtherProperty = cells[i].Value;
                }
                // some more data reading
            }

            // save data
        }

    You can define your own class for import results and add to it all the problems found during the import. Your application gets the results and shows them to the user.

    Conclusion

    Reading ODS files may seem a complex task, but actually it is very easy if we only need the data from those documents. We can use a zip library to get the content file and then parse it as XML. It is not hard to walk through the XML, but there are some optimization tricks we have to know. The code here is safe to use in web applications, as it does not use any APIs that have special requirements on the server or infrastructure.
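    One refactoring idea related to the trick above: the number-columns-repeated handling can be factored into a helper so the import loop indexes columns directly. A sketch (my naming, not from the original importer) that expands repeated cells into one value per spreadsheet column:

        // Hypothetical helper: expands number-columns-repeated so the returned
        // list contains one entry per spreadsheet column.
        private static List<string> ExpandCells(XElement row)
        {
            const string ns = "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}";
            var values = new List<string>();

            foreach (var cell in row.Descendants())
            {
                if (cell.Name != ns + "table-cell")
                    continue;

                var repeat = 1;
                var attr = cell.Attribute(ns + "number-columns-repeated");
                if (attr != null)
                {
                    int n;
                    if (int.TryParse(attr.Value, out n) && n > 0)
                        repeat = n;
                }

                for (var r = 0; r < repeat; r++)
                    values.Add(cell.Value);
            }

            return values;
        }

    ImportRow could then iterate the expanded list with a plain column index instead of maintaining the separate j counter.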

    Read the article

  • How to trace a function array argument in DTrace

    - by uejio
    I still use DTrace just about every day in my job and found that I had to print an argument to a function which was an array of strings. The array was variable length, up to about 10 items. I'm not sure if this is the right way to do it, but it seems to work and is not too painful if the array size is small.

    Here's an example. Suppose in your application you have the following function, where n is the number of items in the array s:

        void arraytest(int n, char **s)
        {
            /* Loop thru s[0] to s[n-1] */
        }

    How do you use DTrace to print out the values of s[i], or of s[0] to s[n-1]? DTrace does not have if-then blocks or for loops, so you can't do something like:

        for i=0; i<arg0; i++
            trace arg1[i];

    It turns out that you can use probe ordering as a kind of iterator. Probes with the same name will fire in the order that they appear in the script, so I can save the value of "n" in the first probe and then use it as part of the predicate of the next probe to determine whether that probe should fire or not. So the first probe for tracing the arraytest function is:

        pid$target::arraytest:entry
        {
            self->n = arg0;
        }

    Then, if I want to print out the first few items of the array, I first check the value of n. If it's greater than the index that I want to print out, then I can print that index. For example, if I want to print out the 3rd element of the array, I would do something like:

        pid$target::arraytest:entry
        /self->n > 2/
        {
            printf("%s", stringof(arg1 + 2 * sizeof(pointer)));
        }

    Actually, that doesn't quite work, because arg1 is a pointer to an array of pointers and needs to be copied twice from the user process space to the kernel space (which is where DTrace is). Also, the sizeof(char *) is 8, but for some reason I have to use 4, which is the sizeof(uint32_t). (I still don't know how that works.) So the script that prints the 3rd element of the array should look like:

        pid$target::arraytest:entry
        {
            /* first, save the size of the array so that we don't get
               invalid address errors when indexing arg1+n. */
            self->n = arg0;
        }

        pid$target::arraytest:entry
        /self->n > 2/
        {
            /* print the 3rd element (index = 2) of the second arg. */
            i = 2;
            size = 4;
            self->a_t = copyin(arg1+size*i,size);
            printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));
        }

    If your array is large, then it's quite painful, since you have to write one probe for every array index. For example, here's the full script for printing the first 5 elements of the array:

        #!/usr/sbin/dtrace -s

        pid$target::arraytest:entry
        {
            /* first, save the size of the array so that we don't get
               invalid address errors when indexing arg1+n. */
            self->n = arg0;
        }

        pid$target::arraytest:entry
        /self->n > 0/
        {
            i = 0;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1+size*i,size);
            printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 1/
        {
            i = 1;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1+size*i,size);
            printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 2/
        {
            i = 2;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1+size*i,size);
            printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 3/
        {
            i = 3;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1+size*i,size);
            printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));
        }

        pid$target::arraytest:entry
        /self->n > 4/
        {
            i = 4;
            size = sizeof(uint32_t);
            self->a_t = copyin(arg1+size*i,size);
            printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));
        }

    If the array is large, then your script will also have to be very long to print out all the values of the array.
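    Since the probes differ only in the index, one way to keep a large script maintainable is to generate it. Here is a small sketch (my own approach, not from the original post) that emits the repetitive probes for a given element count, assuming the same 4-byte pointer layout used above:

        // Sketch: generate the repetitive DTrace probes for the first
        // Count elements of the array argument, writing them to arraytest.d.
        using System;
        using System.IO;
        using System.Text;

        class GenArrayTestScript
        {
            static void Main()
            {
                const int Count = 5;
                var sb = new StringBuilder();

                sb.AppendLine("#!/usr/sbin/dtrace -s");
                sb.AppendLine();
                sb.AppendLine("pid$target::arraytest:entry");
                sb.AppendLine("{");
                sb.AppendLine("    /* save the array length for later predicates */");
                sb.AppendLine("    self->n = arg0;");
                sb.AppendLine("}");

                for (int i = 0; i < Count; i++)
                {
                    sb.AppendLine();
                    sb.AppendLine("pid$target::arraytest:entry");
                    sb.AppendLine("/self->n > " + i + "/");
                    sb.AppendLine("{");
                    sb.AppendLine("    i = " + i + ";");
                    sb.AppendLine("    size = sizeof(uint32_t);");
                    sb.AppendLine("    self->a_t = copyin(arg1 + size * i, size);");
                    sb.AppendLine("    printf(\"%s: a[%d]=%s\", probefunc, i, copyinstr(*(uint32_t *)self->a_t));");
                    sb.AppendLine("}");
                }

                File.WriteAllText("arraytest.d", sb.ToString());
            }
        }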

    Read the article

  • On OPEN CURSOR and BULK COLLECT

    - by Liu Maclean
    This post answers a question from T.askmaclean.com about how OPEN CURSOR interacts with BULK COLLECT: when you open a cursor, does Oracle parse the SQL and read the data right away, or is the work deferred until FETCH? Let's start with a timing test. The test table is a copy of dba_objects:

        create table zengfankun_temp01 as select * from dba_objects;
        select count(*) from zengfankun_temp01;   -- 126,826 rows
        analyze table zengfankun_temp01 compute statistics;

        create or replace procedure test_open_cursor is
            type type_owner is table of zengfankun_temp01.owner%type index by binary_integer;
            type type_object_name is table of zengfankun_temp01.object_name%type index by binary_integer;
            type type_object_id is table of zengfankun_temp01.object_id%type index by binary_integer;
            type type_object_type is table of zengfankun_temp01.object_type%type index by binary_integer;
            type type_last_ddl_time is table of zengfankun_temp01.last_ddl_time%type index by binary_integer;

            l_ary_owner type_owner;
            l_ary_object_name type_object_name;
            l_ary_object_id type_object_id;
            l_ary_object_type type_object_type;
            l_ary_last_ddl_time type_last_ddl_time;

            cursor cur_object is
                select owner, object_name, object_id, object_type, last_ddl_time
                from zengfankun_temp01
                order by owner, object_name, object_type, last_ddl_time;

            OPEN_START number;
            OPEN_END number;
            FETCH_START number;
            FETCH_END number;
        begin
            DBMS_OUTPUT.ENABLE(buffer_size => null);
            OPEN_START := dbms_utility.get_time();
            open cur_object;
            OPEN_END := dbms_utility.get_time();
            dbms_output.put_line('OPEN_TIME:' || TO_CHAR(OPEN_END - OPEN_START));
            loop
                FETCH_START := dbms_utility.get_time();
                fetch cur_object bulk collect into
                    l_ary_owner, l_ary_object_name, l_ary_object_id, l_ary_object_type, l_ary_last_ddl_time
                    limit 10000;
                FETCH_END := dbms_utility.get_time();
                dbms_output.put_line('FETCH_TIME:' || TO_CHAR(FETCH_END - FETCH_START) || ' ROWCOUNT:' || cur_object%rowCount);
                exit when cur_object%notfound or cur_object%notfound is null;
            end loop;
        end test_open_cursor;

    First execution:

        OPEN_TIME:12
        FETCH_TIME:21 ROWCOUNT:10000
        FETCH_TIME:3 ROWCOUNT:20000
        FETCH_TIME:3 ROWCOUNT:30000
        FETCH_TIME:3 ROWCOUNT:40000
        FETCH_TIME:3 ROWCOUNT:50000
        FETCH_TIME:3 ROWCOUNT:60000
        FETCH_TIME:3 ROWCOUNT:70000
        FETCH_TIME:3 ROWCOUNT:80000
        FETCH_TIME:3 ROWCOUNT:90000
        FETCH_TIME:3 ROWCOUNT:100000
        FETCH_TIME:3 ROWCOUNT:110000
        FETCH_TIME:3 ROWCOUNT:120000
        FETCH_TIME:1 ROWCOUNT:126826

    Second execution:

        OPEN_TIME:0
        FETCH_TIME:18 ROWCOUNT:10000
        FETCH_TIME:3 ROWCOUNT:20000
        FETCH_TIME:3 ROWCOUNT:30000
        FETCH_TIME:3 ROWCOUNT:40000
        FETCH_TIME:3 ROWCOUNT:50000
        FETCH_TIME:3 ROWCOUNT:60000
        FETCH_TIME:3 ROWCOUNT:70000
        FETCH_TIME:3 ROWCOUNT:80000
        FETCH_TIME:3 ROWCOUNT:90000
        FETCH_TIME:3 ROWCOUNT:100000
        FETCH_TIME:3 ROWCOUNT:110000
        FETCH_TIME:3 ROWCOUNT:120000
        FETCH_TIME:2 ROWCOUNT:126826

    Even though the statement sorts the whole table, opening the cursor costs almost nothing: 12 centiseconds on the first execution (which includes the hard parse) and 0 on the second. The first FETCH bears the cost of the sort (18-21), and every subsequent FETCH of 10,000 rows takes a constant ~3 centiseconds. The explanation: when a cursor is OPENed, PL/SQL only parses the SQL statement and records the snapshot SCN as of OPEN CURSOR; Oracle defers the actual work to FETCH. At FETCH time the server process reads the current version of each block (the most recent version of the block); when a block's SCN is far ahead of the snapshot SCN, Oracle applies UNDO to roll it back to the best block for read consistency. If the required UNDO snapshot is no longer available and the best block cannot be constructed, the well-known ORA-1555 error is raised.

    With limit 10000 the per-fetch timings above already hint at this, but to see it more clearly we can trace buffer pins directly. Build a table of char(2000) rows with pctfree 99 so that each block holds a single row, fetch with bulk collect limit 10, and set the hidden parameter "_trace_pin_time"=1 so the server process writes a trace line every time it pins a CR block. If blocks really are read at FETCH time, each fetch bulk collect limit 10 should pin exactly 10 buffers.

        [oracle@nas ~]$ sqlplus / as sysdba

        SQL*Plus: Release 11.2.0.3.0 Production on Wed Aug 1 11:36:52 2012
        Copyright (c) 1982, 2011, Oracle. All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        SQL> select * from global_name;

        GLOBAL_NAME
        --------------------------------------------------------------------------------
        http://www.askmaclean.com

        SQL> create table maclean (t1 char(2000)) tablespace users pctfree 99;
        Table created.

        SQL> begin
          2    for i in 1..200 loop
          3      insert into maclean values('MACLEAN');
          4      commit;
          5    end loop;
          6  end;
          7  /
        PL/SQL procedure successfully completed.

        SQL> exec dbms_stats.gather_table_stats('','MACLEAN');
        PL/SQL procedure successfully completed.

        SQL> select count(*) from maclean;

          COUNT(*)
        ----------
               200

        SQL> select blocks,num_rows from dba_tables where table_name='MACLEAN';

            BLOCKS   NUM_ROWS
        ---------- ----------
               244        200

        SQL> alter system set "_trace_pin_time"=1 scope=spfile;
        System altered.

        SQL> startup force;
        ORACLE instance started.

        Total System Global Area 3140026368 bytes
        Fixed Size                  2232472 bytes
        Variable Size            1795166056 bytes
        Database Buffers         1325400064 bytes
        Redo Buffers               17227776 bytes
        Database mounted.
        Database opened.

        SQL> alter session set events '10046 trace name context forever,level 12';
        Session altered.

        SQL> declare
          2    cursor v_cursor is
          3      select * from sys.maclean;
          4    type v_type is table of sys.maclean%rowtype index by binary_integer;
          5    rec_tab v_type;
          6  begin
          7    open v_cursor;
          8    dbms_lock.sleep(30);
          9    loop
         10      fetch v_cursor bulk collect
         11        into rec_tab limit 10;
         12      dbms_lock.sleep(10);
         13      exit when v_cursor%notfound;
         14    end loop;
         15  end;
         16  /

    The resulting 10046 + pin trace shows:

        PARSING IN CURSOR #47499559136872 len=337 dep=0 uid=0 oct=47 lid=0 tim=1343836146412056 hv=496860239 ad='11a11dbb0' sqlid='4zh7954ftuz2g'
        declare cursor v_cursor is select * from sys.maclean; type v_type is table of sys.maclean%rowtype index by binary_integer;
        rec_tab v_type; begin open v_cursor; dbms_lock.sleep(30); loop fetch v_cursor bulk collect into rec_tab limit 10;
        dbms_lock.sleep(10); exit when v_cursor%notfound; end loop; end;
        END OF STMT
        PARSE #47499559136872:c=0,e=346,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=1343836146412051
        =====================
        PARSING IN CURSOR #47499559126280 len=25 dep=1 uid=0 oct=3 lid=0 tim=1343836146414939 hv=3296884535 ad='11a11d250' sqlid='2mb1493284xtr'
        SELECT * FROM SYS.MACLEAN
        END OF STMT
        PARSE #47499559126280:c=1999,e=2427,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=2568761675,tim=1343836146414937
        EXEC #47499559126280:c=0,e=55,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2568761675,tim=1343836146415104

    Notice that SELECT * FROM SYS.MACLEAN is parsed here, at OPEN CURSOR time, but no buffer pins appear yet: opening the cursor reads no data blocks. The 30-second sleep follows:

        *** 2012-08-01 11:49:36.424
        WAIT #47499559136872: nam='PL/SQL lock timer' ela= 30009361 duration=0 p2=0 p3=0 obj#=-1 tim=1343836176424782

    After the sleep, the first fetch bulk collect limit 10 pins the extent map and then exactly 10 table blocks (kdst_fetch is the fetch routine doing the pinning):

        pin ktewh26: kteinpscan dba 0x10a6202:4 time 1039048805
        pin ktewh27: kteinmap dba 0x10a6202:4 time 1039048847
        pin kdswh11: kdst_fetch dba 0x10a6203:1 time 1039048898
        pin kdswh11: kdst_fetch dba 0x10a6204:1 time 1039048961
        pin kdswh11: kdst_fetch dba 0x10a6205:1 time 1039049004
        pin kdswh11: kdst_fetch dba 0x10a6206:1 time 1039049042
        pin kdswh11: kdst_fetch dba 0x10a6207:1 time 1039049089
        pin kdswh11: kdst_fetch dba 0x10a6208:1 time 1039049123
        pin kdswh11: kdst_fetch dba 0x10a6209:1 time 1039049159
        pin kdswh11: kdst_fetch dba 0x10a620a:1 time 1039049191
        pin kdswh11: kdst_fetch dba 0x10a620b:1 time 1039049225
        pin kdswh11: kdst_fetch dba 0x10a620c:1 time 1039049260

    followed by the FETCH record reporting r=10:

        FETCH #47499559126280:c=0,e=536,p=0,cr=12,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836176425542

        *** 2012-08-01 11:49:46.428
        WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002694 duration=0 p2=0 p3=0 obj#=-1 tim=134383618642829

    After the 10-second sleep, the next fetch again pins exactly 10 blocks:

        pin kdswh11: kdst_fetch dba 0x10a620d:1 time 1049052211
        pin kdswh11: kdst_fetch dba 0x10a620e:1 time 1049052264
        pin kdswh11: kdst_fetch dba 0x10a620f:1 time 1049052299
        pin kdswh11: kdst_fetch dba 0x10a6211:1 time 1049052332
        pin kdswh11: kdst_fetch dba 0x10a6212:1 time 1049052364
        pin kdswh11: kdst_fetch dba 0x10a6213:1 time 1049052398
        pin kdswh11: kdst_fetch dba 0x10a6214:1 time 1049052430
        pin kdswh11: kdst_fetch dba 0x10a6215:1 time 1049052462
        pin kdswh11: kdst_fetch dba 0x10a6216:1 time 1049052494
        pin kdswh11: kdst_fetch dba 0x10a6217:1 time 1049052525
        FETCH #47499559126280:c=0,e=371,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836186428807

        WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002864 duration=0 p2=0 p3=0 obj#=-1 tim=1343836196431754
        pin kdswh11: kdst_fetch dba 0x10a6218:1 time 1059055662
        pin kdswh11: kdst_fetch dba 0x10a6219:1 time 1059055714
        pin kdswh11: kdst_fetch dba 0x10a621a:1 time 1059055748
        pin kdswh11: kdst_fetch dba 0x10a621b:1 time 1059055781
        pin kdswh11: kdst_fetch dba 0x10a621c:1 time 1059055815
        pin kdswh11: kdst_fetch dba 0x10a621d:1 time 1059055848
        pin kdswh11: kdst_fetch dba 0x10a621e:1 time 1059055883
        pin kdswh11: kdst_fetch dba 0x10a621f:1 time 1059055915
        pin kdswh11: kdst_fetch dba 0x10a6221:1 time 1059055953
        pin kdswh11: kdst_fetch dba 0x10a6222:1 time 1059055992
        FETCH #47499559126280:c=0,e=385,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836196432274

    Each round of 10 pins walks forward through the table's data block addresses. Skipping ahead to the last round:

        WAIT #47499559136872: nam='PL/SQL lock timer' ela= 10002933 duration=0 p2=0 p3=0 obj#=-1 tim=1343836366495589
        pin kdswh11: kdst_fetch dba 0x10a62f6:1 time 1229119497
        pin kdswh11: kdst_fetch dba 0x10a62f7:1 time 1229119545
        pin kdswh11: kdst_fetch dba 0x10a62f8:1 time 1229119576
        pin kdswh11: kdst_fetch dba 0x10a62f9:1 time 1229119610
        pin kdswh11: kdst_fetch dba 0x10a62fa:1 time 1229119644
        pin kdswh11: kdst_fetch dba 0x10a62fb:1 time 1229119671
        pin kdswh11: kdst_fetch dba 0x10a62fc:1 time 1229119703
        pin kdswh11: kdst_fetch dba 0x10a62fd:1 time 1229119730
        pin kdswh11: kdst_fetch dba 0x10a62fe:1 time 1229119760
        pin kdswh11: kdst_fetch dba 0x10a62ff:1 time 1229119787
        FETCH #47499559126280:c=0,e=340,p=0,cr=10,cu=0,mis=0,r=10,dep=1,og=1,plh=2568761675,tim=1343836366496067

    The first DBA fetched was 0x10a6203 and the last was 0x10a62ff. Do these really correspond to the first and last blocks of the MACLEAN table? The getbfno function below converts a data block address into a datafile#/block# pair:

        CREATE OR REPLACE FUNCTION getbfno (p_dba IN VARCHAR2)
           RETURN VARCHAR2
        IS
           l_str   VARCHAR2 (255) DEFAULT NULL;
           l_fno   VARCHAR2 (15);
           l_bno   VARCHAR2 (15);
        BEGIN
           l_fno :=
              DBMS_UTILITY.data_block_address_file (TO_NUMBER (LTRIM (p_dba, '0x'), 'xxxxxxxx'));
           l_bno :=
              DBMS_UTILITY.data_block_address_block (TO_NUMBER (LTRIM (p_dba, '0x'), 'xxxxxxxx'));
           l_str :=
                 'datafile# is:'
              || l_fno
              || CHR (10)
              || 'datablock is:'
              || l_bno
              || CHR (10)
              || 'dump command:alter system dump datafile '
              || l_fno
              || ' block '
              || l_bno
              || ';';
           RETURN l_str;
        END;
        /
        Function created.

        SQL> select getbfno('0x10a6203') from dual;

        GETBFNO('0X10A6203')
        --------------------------------------------------------------------------------
        datafile# is:4
        datablock is:680451
        dump command:alter system dump datafile 4 block 680451;

        SQL> select getbfno('0x10a62ff') from dual;

        GETBFNO('0X10A62FF')
        --------------------------------------------------------------------------------
        datafile# is:4
        datablock is:680703
        dump command:alter system dump datafile 4 block 680703;

        SQL> select dbms_rowid.rowid_block_number(min(rowid)),dbms_rowid.rowid_relative_fno(min(rowid)) from maclean;

        DBMS_ROWID.ROWID_BLOCK_NUMBER(MIN(ROWID))
        -----------------------------------------
        DBMS_ROWID.ROWID_RELATIVE_FNO(MIN(ROWID))
        -----------------------------------------
                                           680451
                                                4

        SQL> select dbms_rowid.rowid_block_number(max(rowid)),dbms_rowid.rowid_relative_fno(max(rowid)) from maclean;

        DBMS_ROWID.ROWID_BLOCK_NUMBER(MAX(ROWID))
        -----------------------------------------
        DBMS_ROWID.ROWID_RELATIVE_FNO(MAX(ROWID))
        -----------------------------------------
                                           680703
                                                4

    So 0x10a6203 is file 4 block 680451 and 0x10a62ff is file 4 block 680703, which match the blocks holding MACLEAN's minimum and maximum ROWIDs exactly. To summarize, three conclusions:

    1. When a cursor is OPENed, PL/SQL parses the SQL statement and records the snapshot SCN as of OPEN CURSOR; no data blocks are read at this point.
    2. The actual reading (and pinning) of data blocks is deferred to FETCH time.
    3. open cursor + fetch bulk collect therefore does not pre-load the entire result set; blocks are read incrementally as the application fetches.

    Read the article

  • How to Remove Header in LaTeX

    - by Tim
    Hi, I would like to not show the name of each chapter in the header of each of its pages. I would also like to have nothing in the headers for the abstract, acknowledgement, table of contents, list of figures and list of tables. But currently I have a header on each page, for example:

    Here is my code for specifying these parts in my tex file:

        \begin{abstract}
        ...
        \end{abstract}

        \begin{acknowledgement}
        ...
        \end{acknowledgement}

        % generate table of contents
        \tableofcontents

        % generate list of tables
        \listoftables

        % generate list of figures
        \listoffigures

        \chapter{Introduction}
        \label{chp1}

        %% REFERENCES
        \appendix
        \input{appendiximages.tex}
        \bibliographystyle{plain}
        %%\bibliographystyle{abbrvnat}
        \bibliography{thesis}

    Here is what I believe to be the definition of the commands. More details can be found in these two files: jhu12.clo and thesis.cls.

        % \chapter:
        \def\chaptername{Chapter}

        % ABSTRACT
        % MODIFIED to include section name in headers
        \def\abstract{
          \newpage
          \dsp
          \chapter*{\abstractname\@mkboth{\uppercase{\abstractname}}{\uppercase{\abstractname}}}
          \fmfont
          \vspace{8pt}
          \addcontentsline{toc}{chapter}{\abstractname}
        }
        \def\endabstract{\par\vfil\null}

        % DEDICATION
        % Modified to make dedication its own section
        %\newenvironment{dedication}
        %{\begin{alwayssingle}}
        %{\end{alwayssingle}}
        \def\dedication{
          \newpage
          \dsp
          \chapter*{Dedication\@mkboth{DEDICATION}{DEDICATION}}
          \fmfont}
        \def\endacknowledgement{\par\vfil\null}

        % ACKNOWLEDGEMENTS
        % MODIFIED to include section name in headers
        \def\acknowledgement{
          \newpage
          \dsp
          \chapter*{\acknowledgename\@mkboth{\uppercase{\acknowledgename}}{\uppercase{\acknowledgename}}}
          \fmfont
          \addcontentsline{toc}{chapter}{\acknowledgename}
        }
        \def\endacknowledgement{\par\vfil\null}

        \def\thechapter {\arabic{chapter}}

        % \@chapapp is initially defined to be '\chaptername'. The \appendix
        % command redefines it to be '\appendixname'.
        %
        \def\@chapapp{\chaptername}

        \def\chapter{
          \clearpage
          \thispagestyle{plain}
          \if@twocolumn          % IF two-column style
            \onecolumn           % THEN \onecolumn
            \@tempswatrue        % @tempswa := true
          \else
            \@tempswafalse       % ELSE @tempswa := false
          \fi
          \dsp                   % double spacing
          \secdef\@chapter\@schapter}

        % TABLEOFCONTENTS
        % In ucthesis style, \tableofcontents, \listoffigures, etc. are always
        % set in single-column style. @restonecol
        \def\tableofcontents{\@restonecolfalse
          \if@twocolumn\@restonecoltrue\onecolumn\fi
          %%%%% take care of getting page number in right spot %%%%%
          \clearpage             % starts new page
          \thispagestyle{botcenter} % Page style of frontmatter is botcenter
          \global\@topnum\z@     % Prevents figures from going at top of page
          %%%%%
          \@schapter{\contentsname
            \@mkboth{\uppercase{\contentsname}}{\uppercase{\contentsname}}}%
          {\ssp\@starttoc{toc}}\if@restonecol\twocolumn\fi}

        \def\l@part#1#2{\addpenalty{-\@highpenalty}%
          \addvspace{2.25em plus\p@}% space above part line
          \begingroup
          \@tempdima 3em         % width of box holding part number, used by
          \parindent \z@ \rightskip \@pnumwidth %% \numberline
          \parfillskip -\@pnumwidth
          {\large \bfseries      % set line in \large boldface
           \leavevmode           % TeX command to enter horizontal mode.
           #1\hfil \hbox to\@pnumwidth{\hss #2}}\par
          \nobreak               % Never break after part entry
          \global\@nobreaktrue   %% Added 24 May 89 as
          \everypar{\global\@nobreakfalse\everypar{}}%% suggested by
          %% Jerry Leichter
          \endgroup}

        % LIST OF FIGURES
        %
        % Single-space list of figures, add it to the table of contents.
        \def\listoffigures{\@restonecolfalse
          \if@twocolumn\@restonecoltrue\onecolumn\fi
          %%%%% take care of getting page number in right spot %%%%%
          \clearpage
          \thispagestyle{botcenter} % Page style of frontmatter is botcenter
          \global\@topnum\z@     % Prevents figures from going at top of page.
          \@schapter{\listfigurename\@mkboth{\uppercase{\listfigurename}}%
            {\uppercase{\listfigurename}}}
          \addcontentsline{toc}{chapter}{\listfigurename}
          {\ssp\@starttoc{lof}}\if@restonecol\twocolumn\fi}

        \def\l@figure{\@dottedtocline{1}{1.5em}{2.3em}}

        % bibliography
        \def\thebibliography#1{\chapter*{\bibname\@mkboth
          {\uppercase{\bibname}}{\uppercase{\bibname}}}
          \addcontentsline{toc}{chapter}{\bibname}
          \list{\@biblabel{\arabic{enumiv}}}{\settowidth\labelwidth{\@biblabel{#1}}%
            \leftmargin\labelwidth
            \advance\leftmargin\labelsep
            \usecounter{enumiv}%
            \let\p@enumiv\@empty
            \def\theenumiv{\arabic{enumiv}}}%
          \def\newblock{\hskip .11em plus.33em minus.07em}%
          \sloppy\clubpenalty4000\widowpenalty4000
          \sfcode`\.=\@m}

    Thanks and regards!

    EDIT: I just replaced \thispagestyle{botcenter} with \thispagestyle{plain}. The latter is said to clear the header (http://en.wikibooks.org/wiki/LaTeX/Page_Layout), but it does not. What should I do? Thanks!
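    A note on the EDIT above: \thispagestyle only affects the current page, while the running chapter-name headers come from the document's global page style, which is why they keep coming back on subsequent pages. A minimal sketch of the usual fix, assuming the thesis class does not redefine the standard plain style or force its own \pagestyle later in the preamble:

        % Use plain page style globally: empty header, centered page number.
        % \thispagestyle{...} changes only one page; \pagestyle{...} changes
        % all following pages.
        \pagestyle{plain}

        % Or, with the fancyhdr package, for full control:
        \usepackage{fancyhdr}
        \pagestyle{fancy}
        \fancyhf{}                          % clear all header and footer fields
        \cfoot{\thepage}                    % page number centered in the footer
        \renewcommand{\headrulewidth}{0pt}  % no rule under the (empty) header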

    Read the article

  • How to convert ISampleGrabber::BufferCB's buffer to a bitmap

    - by user2509919
    I am trying to use ISampleGrabberCB::BufferCB to convert the current frame to a bitmap using the following code:

        int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr buffer, int bufferLength)
        {
            try
            {
                Form1 form1 = new Form1("", "", "");

                if (pictureReady == null)
                {
                    Debug.Assert(bufferLength == Math.Abs(pitch) * videoHeight, "Wrong Buffer Length");
                }

                Debug.Assert(imageBuffer != IntPtr.Zero, "Remove Buffer");

                Bitmap bitmapOfCurrentFrame = new Bitmap(width, height, capturePitch, PixelFormat.Format24bppRgb, buffer);
                MessageBox.Show("Works");
                form1.changepicturebox3(bitmapOfCurrentFrame);
                pictureReady.Set();
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.ToString());
            }
            return 0;
        }

    However, this does not seem to be working. Additionally, it seems to call this function only when I press a button which runs the following code:

        public IntPtr getFrame()
        {
            int hr;

            try
            {
                pictureReady.Reset();
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.ToString());
            }

            imageBuffer = Marshal.AllocCoTaskMem(Math.Abs(pitch) * videoHeight);

            try
            {
                gotFrame = true;
                if (videoControl != null)
                {
                    hr = videoControl.SetMode(stillPin, VideoControlFlags.Trigger);
                    DsError.ThrowExceptionForHR(hr);
                }

                if (!pictureReady.WaitOne(9000, false))
                {
                    throw new Exception("Timeout waiting to get picture");
                }
            }
            catch
            {
                Marshal.FreeCoTaskMem(imageBuffer);
                imageBuffer = IntPtr.Zero;
            }
            return imageBuffer;
        }

    Once this code is run, I get a message box which shows 'Works', which means my BufferCB must have been called; however, it does not update my picture box with the current image. Isn't BufferCB called after every new frame? If so, why do I not receive the 'Works' message box then? Finally, is it possible to convert every new frame into a bitmap (this is used for later processing) using BufferCB, and if so, how?

    Edited code:

        int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr buffer, int bufferLength)
        {
            Debug.Assert(bufferLength == Math.Abs(pitch) * videoHeight, "Wrong Buffer Length");
            Debug.Assert(imageBuffer != IntPtr.Zero, "Remove Buffer");

            CopyMemory(imageBuffer, buffer, bufferLength);
            Decode(buffer);
            return 0;
        }

        public Image Decode(IntPtr imageData)
        {
            var bitmap = new Bitmap(width, height, pitch, PixelFormat.Format24bppRgb, imageBuffer);
            bitmap.RotateFlip(RotateFlipType.RotateNoneFlipY);

            Form1 form1 = new Form1("", "", "");
            form1.changepicturebox3(bitmap);
            bitmap.Save("C:\\Users\\...\\Desktop\\A2 Project\\barcode.jpg");
            return bitmap;
        }

    Button code:

        public void getFrameFromWebcam()
        {
            if (iPtr != IntPtr.Zero)
            {
                Marshal.FreeCoTaskMem(iPtr);
                iPtr = IntPtr.Zero;
            }

            // Get Image
            iPtr = sampleGrabberCallBack.getFrame();

            Bitmap bitmapOfFrame = new Bitmap(sampleGrabberCallBack.width, sampleGrabberCallBack.height, sampleGrabberCallBack.capturePitch, PixelFormat.Format24bppRgb, iPtr);
            bitmapOfFrame.RotateFlip(RotateFlipType.RotateNoneFlipY);
            barcodeReader(bitmapOfFrame);
        }

    (getFrame is unchanged from the listing above.)

    I also still need to press the button to run BufferCB. Thanks for reading.
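    One thing worth noting about the code in the question: BufferCB runs on DirectShow's streaming thread, and the `new Form1(...)` it creates is never shown, so updating it can have no visible effect. The usual pattern is to hand the callback a reference to the form that is actually on screen and marshal the update onto the UI thread. A minimal sketch (the uiForm field name is mine, not from the question):

        // Hypothetical: give the callback a reference to the visible form,
        // then marshal picture-box updates to the UI thread.
        private Form1 uiForm;   // assigned once, e.g. when the callback is created

        int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr buffer, int bufferLength)
        {
            // Copy the frame first: 'buffer' is only valid during this call.
            Bitmap frame = new Bitmap(width, height, capturePitch,
                                      PixelFormat.Format24bppRgb, buffer);
            Bitmap copy = new Bitmap(frame);   // deep copy, detached from 'buffer'
            frame.Dispose();

            // BeginInvoke queues the update onto the UI thread without
            // blocking the streaming thread.
            uiForm.BeginInvoke((Action)(() => uiForm.changepicturebox3(copy)));
            return 0;
        }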

    Read the article

  • How to fix basicHttpBinding in WCF when using multiple proxy clients?

    - by Hemant
    [Question seems a little long, but please have patience. It has sample source to explain the problem.]

    Consider the following code, which is essentially a WCF host:

        [ServiceContract (Namespace = "http://www.mightycalc.com")]
        interface ICalculator
        {
            [OperationContract]
            int Add (int aNum1, int aNum2);
        }

        [ServiceBehavior (InstanceContextMode = InstanceContextMode.PerCall)]
        class Calculator: ICalculator
        {
            public int Add (int aNum1, int aNum2)
            {
                Thread.Sleep (2000); //Simulate a lengthy operation
                return aNum1 + aNum2;
            }
        }

        class Program
        {
            static void Main (string[] args)
            {
                try
                {
                    using (var serviceHost = new ServiceHost (typeof (Calculator)))
                    {
                        var httpBinding = new BasicHttpBinding (BasicHttpSecurityMode.None);
                        serviceHost.AddServiceEndpoint (typeof (ICalculator), httpBinding, "http://172.16.9.191:2221/calc");
                        serviceHost.Open ();

                        Console.WriteLine ("Service is running. ENJOY!!!");
                        Console.WriteLine ("Type 'stop' and hit enter to stop the service.");
                        Console.ReadLine ();

                        if (serviceHost.State == CommunicationState.Opened)
                            serviceHost.Close ();
                    }
                }
                catch (Exception e)
                {
                    Console.WriteLine (e);
                    Console.ReadLine ();
                }
            }
        }

    And the WCF client program is:

        class Program
        {
            static int COUNT = 0;
            static Timer timer = null;

            static void Main (string[] args)
            {
                var threads = new Thread[10];

                for (int i = 0; i < threads.Length; i++)
                {
                    threads[i] = new Thread (Calculate);
                    threads[i].Start (null);
                }

                timer = new Timer (o => Console.WriteLine ("Count: {0}", COUNT), null, 1000, 1000);

                Console.ReadLine ();
                timer.Dispose ();
            }

            static void Calculate (object state)
            {
                var c = new CalculatorClient ("BasicHttpBinding_ICalculator");
                c.Open ();

                while (true)
                {
                    try
                    {
                        var sum = c.Add (2, 3);
                        Interlocked.Increment (ref COUNT);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine ("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType ());
                        break;
                    }
                }

                c.Close ();
            }
        }

    Basically, I am creating 10 proxy clients and then repeatedly calling the Add service method on separate threads. Now, if I run both applications and observe the open TCP connections using netstat, I find that:

    1. If both client and server are running on the same machine, the number of TCP connections equals the number of proxy objects. It means all requests are being served in parallel, which is good.
    2. If I run the server on a separate machine, at most 2 TCP connections are opened, regardless of the number of proxy objects I create. Only 2 requests run in parallel, which hurts the processing speed badly.
    3. If I switch to net.tcp binding, everything works fine (a separate TCP connection for each proxy object, even when client and server run on different machines).

    I am very confused and unable to make basicHttpBinding use more TCP connections. I know it is a long question, but please help!
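    A note for readers hitting the same symptom: basicHttpBinding goes through System.Net's HTTP stack, which by default limits a client to 2 concurrent connections per remote host (the classic HTTP/1.1 guideline), while loopback traffic is exempt from the limit – which would explain the difference between the same-machine and separate-machine runs. If that is the cause here, raising the limit before creating the proxies is the usual fix. A minimal sketch:

        // Sketch: raise the per-host HTTP connection limit before opening
        // any proxies. The default of 2 matches the behavior described above.
        using System.Net;

        ServicePointManager.DefaultConnectionLimit = 10;

    Or equivalently in app.config:

        <system.net>
          <connectionManagement>
            <add address="*" maxconnection="10" />
          </connectionManagement>
        </system.net>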

    Read the article

< Previous Page | 346 347 348 349 350 351 352 353 354 355 356 357  | Next Page >