Search Results

Search found 882 results on 36 pages for 'concurrency coordination'.

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named "Wind (zh-CN)", which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a dedicated post to introduce this module, since I feel it brings a new async programming style not only to Node.js but to the JavaScript world. If you know, or have heard about, the new feature in C# 5.0 called "async and await", or if you have learnt F#, you will find that "Wind" brings a similar async programming experience to JavaScript. By using "Wind" we can write async code that looks like sync code. The callbacks, async states and exceptions will be handled by "Wind" automatically and transparently.

    What's the Problem: Dense "Callback" Phobia

    Let's first go back to my second post in this series. As I mentioned in that post, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all I/O operations are designed around the async callback pattern, which means that when the operation is done, it invokes a function passed in as the last parameter. For example, the database connection opening code would be like this.

        sql.open(connectionString, function(error, conn) {
            if (error) {
                // some error handling code
            }
            else {
                // connection opened successfully
            }
        });

    And then if we need to query the database, the code would be like this, nested inside the previous callback.

        sql.open(connectionString, function(error, conn) {
            if (error) {
                // some error handling code
            }
            else {
                // connection opened successfully
                conn.queryRaw(command, function(error, results) {
                    if (error) {
                        // failed to execute this command
                    }
                    else {
                        // records retrieved successfully
                    }
                });
            }
        });

    Assuming we need to copy some data from this database to another, we then need to open another connection and execute the copy command inside the callback of the query.

        sql.open(connectionString, function(error, conn) {
            if (error) {
                // some error handling code
            }
            else {
                // connection opened successfully
                conn.queryRaw(command, function(error, results) {
                    if (error) {
                        // failed to execute this command
                    }
                    else {
                        // records retrieved successfully
                        target.open(targetConnectionString, function(error, t_conn) {
                            if (error) {
                                // connect failed
                            }
                            else {
                                t_conn.queryRaw(copy_command, function(error, results) {
                                    if (error) {
                                        // copy failed
                                    }
                                    else {
                                        // and then, what do you want to do now...
                                    }
                                });
                            }
                        });
                    }
                });
            }
        });

    This is just an example; in a real project the logic would be more complicated. Our application can easily become messed up, with the business process fragmented across many callback functions. I would like to call this "Dense Callback Phobia". The challenge is to make the code straightforward and easy to read, something like below.
        try {
            // open source connection
            var s_conn = sqlConnect(s_connectionString);
            // retrieve data
            var results = sqlExecuteCommand(s_conn, s_command);

            // open target connection
            var t_conn = sqlConnect(t_connectionString);
            // prepare the copy command
            var t_command = getCopyCommand(results);
            // execute the copy command
            sqlExecuteCommand(t_conn, t_command);
        }
        catch (ex) {
            // error handling
        }

    What's the Problem: Sync-styled Async Programming

    Similar to the previous problem, the callback-styled async programming model makes the upcoming operation a part of the current operation and mixes it with the error handling code, so it's very hard to understand what on earth the code will do. And since Node.js uses non-blocking I/O, we cannot simply invoke operations one after another; they will be executed concurrently. For example, in this post, when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just inserted the data into table storage one by one and then printed a "Finished" message, I would see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operations and the print operation execute in order, I introduced a module named "async" and changed the code as below.

        async.forEach(results.rows,
            function (row, callback) {
                var resource = {
                    "PartitionKey": row[1],
                    "RowKey": row[0],
                    "Value": row[2]
                };
                client.insertEntity(tableName, resource, function (error) {
                    if (error) {
                        callback(error);
                    }
                    else {
                        console.log("entity inserted.");
                        callback(null);
                    }
                });
            },
            function (error) {
                if (error) {
                    error["target"] = "insertEntity";
                    res.send(500, error);
                }
                else {
                    console.log("all done.");
                    res.send(200, "Done!");
                }
            });

    This ensured that the "Finished" message would be printed after all table entities had been inserted, but it cannot guarantee that the records are inserted in sequence. Another challenge, then: can we make the code look sync-styled, like this?

        try {
            forEach(row in rows) {
                var entity = { /* ... */ };
                tableClient.insert(tableName, entity);
            }

            console.log("Finished");
        }
        catch (ex) {
            console.log(ex);
        }

    How "Wind" Helps

    "Wind" is a JavaScript library which provides control flow in plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It's available on NPM, so we can install it through "npm install wind". Now let's create a very simple Node.js application as an example. This application will take some website URLs from the command arguments, try to retrieve the body length of each, print them to the console, and print "Finished" at the end. I'm going to use the "request" module to keep the HTTP calls simple, so I also need to install it with the command "npm install request". The code would be like this.
        var request = require("request");

        // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js`
        var args = process.argv.splice(2);

        // main function
        var main = function() {
            for (var i = 0; i < args.length; i++) {
                // get the url
                var url = args[i];
                // send the http request and try to get the response and body
                request(url, function(error, response, body) {
                    if (!error && response.statusCode == 200) {
                        // log the url and the body length
                        console.log(
                            "%s: %d.",
                            response.request.uri.href,
                            body.length);
                    }
                    else {
                        // log error
                        console.log(error);
                    }
                });
            }

            // finished
            console.log("Finished");
        };

        // execute the main function
        main();

    Let's execute this application. (I split the command across multiple lines for better reading.)

        node fetch.js
            "http://www.igt.com/us-en.aspx"
            "http://www.igt.com/us-en/games.aspx"
            "http://www.igt.com/us-en/cabinets.aspx"
            "http://www.igt.com/us-en/systems.aspx"
            "http://www.igt.com/us-en/interactive.aspx"
            "http://www.igt.com/us-en/social-gaming.aspx"
            "http://www.igt.com/support.aspx"

    In the output, the finish message was printed at the beginning, and the pages' lengths were retrieved in a different order than we specified. This is because the request and console logging commands are executed asynchronously and concurrently. Now let's introduce "Wind" to make them execute in order, which means it will request the websites one by one and print the message at the end.

    First of all we need to import the "Wind" package and make sure there's only one global variable named "Wind", and ensure it's "Wind" instead of "wind".

        var Wind = require("wind");

    Next, we need to tell "Wind" which code will be executed asynchronously so that "Wind" can control the execution process. In this case the "request" operation executes asynchronously, so we will create a "Task" by using a built-in helper function in "Wind" named Wind.Async.Task.create.

        var requestBodyLengthAsync = function(url) {
            return Wind.Async.Task.create(function(t) {
                request(url, function(error, response, body) {
                    if (error || response.statusCode != 200) {
                        t.complete("failure", error);
                    }
                    else {
                        var data = {
                            uri: response.request.uri.href,
                            length: body.length
                        };
                        t.complete("success", data);
                    }
                });
            });
        };

    The code above creates a "Task" from the original request call. In "Wind" a "Task" is an operation that will finish at some time in the future. A "Task" can be started by invoking its start() method, but no one knows when it will actually finish. Wind.Async.Task.create helps us create a task. Its only parameter is a function where we put the actual operation, and then notify the task object that it finished successfully or failed by using the complete() method. In the code above I invoke the request method. If it retrieves the response successfully I set the status of the task to "success" with the URL and body length; if it fails I set the task to "failure" and pass the error out.

    Next, we will change the main() function. In "Wind", if we want a function to be controlled by Wind we need to mark it as "async". This is done with the code below.
        var main = eval(Wind.compile("async", function() {
        }));

    When the application runs, Wind detects the "eval(Wind.compile("async", function" pattern and generates an anonymous function from the body of the original one. The application then runs the anonymous code instead of the original. In our example the main function will look like this.

        var main = eval(Wind.compile("async", function() {
            for (var i = 0; i < args.length; i++) {
                try {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }));

    As you can see, when I request the URL I use a new command named "$await". It tells Wind that the operation next to $await will be executed asynchronously, and that the main flow should pause until it finishes (or fails). So in this case, my application will pause until the first response is received, then print its body length, then try the next one, and at the end print the finish message.

    Finally, execute the main function. The full code would be like this.

        var request = require("request");
        var Wind = require("wind");

        var args = process.argv.splice(2);

        var requestBodyLengthAsync = function(url) {
            return Wind.Async.Task.create(function(t) {
                request(url, function(error, response, body) {
                    if (error || response.statusCode != 200) {
                        t.complete("failure", error);
                    }
                    else {
                        var data = {
                            uri: response.request.uri.href,
                            length: body.length
                        };
                        t.complete("success", data);
                    }
                });
            });
        };

        var main = eval(Wind.compile("async", function() {
            for (var i = 0; i < args.length; i++) {
                try {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }));

        main().start();

    Run our new application. At the beginning we see the code Wind compiled and generated; then the pages are requested one by one, and at the end the finish message is printed. Below is the code Wind generated for us, with the original code shown alongside in comments.
        // Original:
        function () {
            for(var i = 0; i < args.length; i++) {
                try
                {
                    var result = $await(requestBodyLengthAsync(args[i]));
                    console.log(
                        "%s: %d.",
                        result.uri,
                        result.length);
                }
                catch (ex) {
                    console.log(ex);
                }
            }

            console.log("Finished");
        }

        // Compiled:
        /* async << function () { */ (function () {
            var _builder_$0 = Wind.builders["async"];
            return _builder_$0.Start(this,
                _builder_$0.Combine(
                    _builder_$0.Delay(function () {
                        /* var i = 0; */ var i = 0;
                        /* for ( */ return _builder_$0.For(function () {
                            /* ; i < args.length */ return i < args.length;
                        }, function () {
                            /* ; i ++) { */ i ++;
                        },
                        /* try { */ _builder_$0.Try(
                            _builder_$0.Delay(function () {
                                /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) {
                                    /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length);
                                    return _builder_$0.Normal();
                                });
                            }),
                            /* } catch (ex) { */ function (ex) {
                                /* console.log(ex); */ console.log(ex);
                                return _builder_$0.Normal();
                            /* } */ },
                            null
                        )
                        /* } */ );
                    }),
                    _builder_$0.Delay(function () {
                        /* console.log("Finished"); */ console.log("Finished");
                        return _builder_$0.Normal();
                    })
                )
            );
        /* } */ })

    How Wind Works

    Some may raise a big concern when they see that I utilized "eval" in my code, assuming that Wind uses "eval" to execute code dynamically, and that "eval" performs poorly. But Wind does NOT use "eval" to run the code. It only uses "eval" as a flag to know which code should be compiled at runtime. When the code is first executed, Wind checks for "eval(Wind.compile("async", function", so it knows this function should be compiled. It then uses parse-js to analyze the inner JavaScript and generates the anonymous code in memory, rewriting the original code so that when the application runs it uses the anonymous version instead of the original one. Since the code generation is done once, when the application starts, no matter how long our application runs and how many times the async function is invoked, it will use the generated code without generating it again. So there's no significant performance hit when using Wind.

    Wind in My Previous Demo

    Let's adopt Wind in one of my previous demonstrations to see how it helps us make our code simple, straightforward and easy to read and understand. In this post, when I implemented the functionality that copied the records from my WASD database to table storage, the logic was:

    1. Open the database connection.
    2. Execute a query to select all records from the table.
    3. Recreate the table in Windows Azure table storage.
    4. Create an entity from each of the records retrieved previously, and insert them into table storage.
    5. Finally, show a message as the HTTP response.

    But with so many callbacks and async operations, it was very hard to understand the logic from the code. Now let's use Wind to rewrite it. First of all, of course, we need the Wind package. Then we need to include the package files in the project and mark them as "Copy always". Add the Wind package to the source code; pay attention to the variable name, which must be "Wind" instead of "wind".
        var express = require("express");
        var async = require("async");
        var sql = require("node-sqlserver");
        var azure = require("azure");
        var Wind = require("wind");

    Now we need to create some async functions using Wind. All the async operations (open the database, retrieve records, recreate the table by deleting and creating it, insert an entity into the table) should be wrapped so they can be controlled by Wind. Below are these new functions, all of them created using Wind.Async.Task.create.

        sql.openAsync = function (connectionString) {
            return Wind.Async.Task.create(function (t) {
                sql.open(connectionString, function (error, conn) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", conn);
                    }
                });
            });
        };

        sql.queryAsync = function (conn, query) {
            return Wind.Async.Task.create(function (t) {
                conn.queryRaw(query, function (error, results) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", results);
                    }
                });
            });
        };

        azure.recreateTableAsync = function (tableName) {
            return Wind.Async.Task.create(function (t) {
                client.deleteTable(tableName, function (error, successful, response) {
                    console.log("delete table finished");
                    client.createTableIfNotExists(tableName, function (error, successful, response) {
                        console.log("create table finished");
                        if (error) {
                            t.complete("failure", error);
                        }
                        else {
                            t.complete("success", null);
                        }
                    });
                });
            });
        };

        azure.insertEntityAsync = function (tableName, entity) {
            return Wind.Async.Task.create(function (t) {
                client.insertEntity(tableName, entity, function (error, entity, response) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", null);
                    }
                });
            });
        };

    Then, in order to use these functions, we create a new function which contains all the steps of the data copy.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Let's add the steps one by one with the "$await" keyword introduced by Wind, so that they will be invoked in sequence. First, open the database connection.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Then retrieve all records through the database connection.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    After recreating the table, we create the entities and insert them into table storage.
        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage one by one
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        $await(azure.insertEntityAsync(tableName, entity));
                        console.log("entity inserted");
                    }
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Finally, send the response back to the browser.

        var copyRecords = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage one by one
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        $await(azure.insertEntityAsync(tableName, entity));
                        console.log("entity inserted");
                    }
                    // send response
                    console.log("all done");
                    res.send(200, "All done!");
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

    Compared with the previous code, this is more readable and much easier to understand; it's easy to know what this function does even without any comments. When a user goes to the URL "/was/copyRecords" we execute the function above. The code would be like this.

        app.get("/was/copyRecords", function (req, res) {
            copyRecords(req, res).start();
        });

    In the logs printed in the local compute emulator console we can see the functions executed one by one, and finally the response sent back to my browser.

    Scaffold Functions in Wind

    Wind provides not only the async flow control and compile functions, but many scaffold methods as well, which let us build our async code more easily. I'm going to introduce some basic scaffold functions here. In the code above I created some functions that wrapped the original async functions, such as open database, create table, etc. All of them are very similar: create a task using Wind.Async.Task.create, and return the error or result object through the Task.complete function. In fact, Wind provides some functions that create the task object from the original async function for us. If the original async function has only a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object directly. For example, the code below returns a task object wrapping the file existence check function.
        var Wind = require("wind");
        var fs = require("fs");

        fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

    A very popular async function pattern in Node.js is that the first parameter of the callback is the error object, and the other parameters are the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created by the code below.

        sql.openAsync = Wind.Async.Binding.fromStandard(sql.open);

        /*
        sql.openAsync = function (connectionString) {
            return Wind.Async.Task.create(function (t) {
                sql.open(connectionString, function (error, conn) {
                    if (error) {
                        t.complete("failure", error);
                    }
                    else {
                        t.complete("success", conn);
                    }
                });
            });
        };
        */

    When I was testing the scaffold functions under Wind.Async.Binding I found that some functions, such as the Azure SDK insert entity function, cannot be processed correctly, so I personally suggest writing the wrapper methods manually.

    Another scaffold feature in Wind is parallel task coordination. In this example, the steps of opening the database, retrieving the records and recreating the table must be invoked one by one, but the copy operations from database to table storage can be executed in parallel. Wind offers a scaffold function named Task.whenAll for this. Task.whenAll accepts a list of tasks and creates a new task, which completes only when all of the tasks have completed, or as soon as any error occurs. For example, in the code below I used Task.whenAll to make all copy operations execute at the same time.

        var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) {
            try {
                // connect to the windows azure sql database
                var conn = $await(sql.openAsync(connectionString));
                console.log("connection opened");
                // retrieve all records from database
                var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
                console.log("records selected. count = %d", results.rows.length);
                if (results.rows.length > 0) {
                    // recreate the table
                    $await(azure.recreateTableAsync(tableName));
                    console.log("table created");
                    // insert records in table storage in parallel
                    var tasks = new Array(results.rows.length);
                    for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                            "PartitionKey": results.rows[i][1],
                            "RowKey": results.rows[i][0],
                            "Value": results.rows[i][2]
                        };
                        tasks[i] = azure.insertEntityAsync(tableName, entity);
                    }
                    $await(Wind.Async.Task.whenAll(tasks));
                    // send response
                    console.log("all done");
                    res.send(200, "All done!");
                }
            }
            catch (ex) {
                console.log(ex);
                res.send(500, "Internal error.");
            }
        }));

        app.get("/was/copyRecordsInParallel", function (req, res) {
            copyRecordsInParallel(req, res).start();
        });

    Besides task creation and coordination, Wind supports cancellation, so we can send a cancellation signal to tasks. It also includes exception handling, which means any exceptions will be reported to the caller function.

    Summary

    In this post I introduced a Node.js module named Wind, which was created by my friend Jeff Zhao. As you can see, differently from other async libraries and frameworks, Wind adopts ideas from F# and C# and uses runtime code generation to make it easier to write async, callback-based code in a sync style.
    By using Wind there will be almost no callbacks, and the code will be very easy to understand. Currently Wind is still under development and improvement. There might be some problems, but the author, Jeff, would be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff by:
    - Email: [email protected]
    - Group: https://groups.google.com/d/forum/windjs
    - GitHub: https://github.com/JeffreyZhao/wind/issues

    Source code can be downloaded here.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • CodePlex Daily Summary for Friday, May 14, 2010

    CodePlex Daily Summary for Friday, May 14, 2010

    New Projects
    - Campfire#: Campfire# is a campfire client written in .NET 4.0 using WPF, which uses the Campfire API.
    - CHESS: Systematic Concurrency Testing: CHESS is a tool for systematic and disciplined concurrency testing. Given a concurrent test, CHESS systematically enumerates the possible thread sc...
    - cmpp: cmpp
    - cycloid: Arcanoid game
    - DotNetNuke® C#: The DotNetNuke® project is developed and maintained on a Visual Basic codebase; however, a C# version has always been a popular request. This is a ...
    - EasyBuildingCMS.NET: EasyBuildingCMS is an easy-to-use content management system.
    - fluidCMS: Provides flexible management of web content that is not tightly integrated with the layout and rendering of the sites that consume the content.
    - Golem: An automation tool oriented to localization engineering environments
    - HB Batch Encoder Mk 2: HandBrake Batch Encoder Mk II. This program was adapted from an original project downloaded from CodePlex by the name of "Handbrake Batch Encoder"...
    - Integrating Social Media Networks: This is part of my post-graduation project.
    - Ketonic: The Ketonic project aims to improve development of websites based on the Kentico CMS.
    - LinkSharp: LinkSharp is a short-URL provider that can be used to generate short, static, non-changing URLs. The web interface allows you to easily add / edit /...
    - PUC NET (C++ Network Library - PUC Minas): This is an academic library for easy development of applications and games based on network communication.
    - Regular Expression Tester: Small utility for testing regular expressions
    - SharePoint User Management WebPart: SharePoint User Management WebPart
    - SharpBox: SharpBox makes it easier for .NET developers to interact with existing cloud storage services, e.g. DropBox or Amazon S3
    - Snipivit: Snipivit is a snippet manager service and VS2010 plugin that allows small development teams to store all their code snippets on a central database,...
    - Software Factories Applied: Software Factories Applied is a project collecting the companion bits for the eponymous book to be published by Wiley & Sons in 2011. The authors ...
    - The Ping Master: A service that periodically pings network addresses and allows the running of command-line type utilities in response to success or failure.
    - Title Safe Region Checker: A simple utility for XNA developers to check screenshots from games intended for release on the LIVE Marketplace for "title safe" region compliance...
    - Trial project: sky is blue
    - Uyghur Named Date: Generate Uyghur named date string. ئۇيغۇرچە ئاي ناملىق چىسلا ھاسىل قىلىش
    - Wildcard Search Web Part for SharePoint 2010: The Wildcard Search web part for MOSS 2007 was wildly successful. Although SharePoint 2010 has built-in wildcard searching functionality, the out...
    - 在线Office控件 Online Offical Control: an online Office control
    - 软件作品发布平台: SoftwarePublishPlatform, a software publishing platform

    New Releases
    - Demina: Demina Binaries version 0.1: Demina binaries are now available. This release (version 0.1) is an alpha version. Please report any bugs for extermination.
    - EasyTFS: EasyTfs 1.0 Beta 2: Added cache refreshing when contents are updated rather than just every 10 minutes. Added window title based on currently-open case. Added attachme...
    - Extending C# editor - Outlining, classification: Initial release: Initial release
    - HB Batch Encoder Mk 2: HB Batch Encoder Mk2 v1.01: Binary release files.
    - HB Batch Encoder Mk 2: Source Code: Source code
    - HobbyBrew Mobile: Beta 2: Fixed numerous bugs; added an "approximate" implementation of heating for infusion. Recommended update!
    - HouseFly controls: HouseFly controls beta 1.0.2.0: HouseFly controls release 1.0.2.0 beta
    - Html Reader: Beta 2: I fixed a bug in HtmlElementCollection, which exposed an integer enumerator instead of enumerating through HtmlElements. I added a WPF Window tha...
    - Html to OpenXml: HtmlToOpenXml 1.2: Fixes some reported bugs. See change set for description. The dll library to include in your project. The dll is signed for GAC support. Compiled wi...
    - Infection Protection: Infection Protection 0.1: This is the final version of Infection Protection that was entered into the 2010 OGPC game competition.
    - Jobping Url Shortener: Deploy Code 0.5.1: Deployment code for Version 0.5. This version includes our Jobping style.
    - Jobping Url Shortener: Source Code 0.5.1: Source code for the 0.5 release. This release includes our Jobping style skin.
    - Kooboo HTML form: Kooboo HTML form module 2.1.0.1: HTML form module contributed by member aledelgo. Adds SMTP user and password authentication.
    - KooBoo Image Galery: Beta 2: This new version corrects some issues pointed out by Guoqi Zheng. Some schemas and folders were renamed, so it's better to uninstall the module and remo...
    - MFCMAPI: May 2010 Release: If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. Build: 6.0.0.1020. The 64-bit bu...
    - MVC Turbine: Release 2.1 for MVC2: This RTM contains the same features as v2.0 RTM plus these features: Instance Registration to IServiceLocator. You can now add an instance of a typ...
    - NazTek.Extension.Clr4: NazTek.Extension.Clr4 Binary: Binary release
    - NazTek.Extension.Clr4: NazTek.Extension.Clr4 Source: Cab with source code
    - NSIS Autorun: NSIS Autorun 0.1.8: This release includes source code, executable binaries and example materials.
    - Ottawa IT Day: 2010 Source Code and Presentations: During the Ottawa IT Day 2010, some of the presenters shared their code (and some presentations). This release is the culmination of all those effo...
    - PHPWord: PHPWord 0.6.1 Beta: Changelog: Fixed error when adding a JPEG image and opening in Office 2007 (Issue #1). Fixed already defined constant PHPWORD_BASE_PATH (Issue #2). F...
    - Rapid Dictionary: Rapid Dictionary Alpha 2.0: Release notes: try the auto-updatable version at http://install.rapiddict.com/index.html. Rapid Dictionary Alpha 2.0 includes such functionality: ...
    - Shake - C# Make: Shake v0.1.18: Core changes. Process wrapper class, console logger, etc.
    - SharpBox: SharpBox-Trunk: This is the SharpBox build from the trunk source branch!
    - SharpBox: SharpBox-Trunk-Initial-Source: The initial source code; will be updated from time to time
    - Spackle.NET: 4.0.0.0 Release: This new drop contains the following: a CreateBigInteger() method on SecureRandom to create random BigInteger values, and an extension method to prop...
    - StreamInsight example queries, input adapters and output adapters: StreamInsight Examples for V1.0 RTM: Zipped source code.
    - The Ping Master: v0.1.0.0: Early release of The Ping Master for test purposes. The configuration tool is unfinished and does not include an installer.
    - Title Safe Region Checker: Title Safe Region Checker v1.0.0.1: Release 1.0 of Title Safe Region Checker. No known bugs or problems. The file is a zipped directory containing the necessary installation files.
    - TortoiseHg: TortoiseHg 1.0.3: This is a bug fix release; we recommend all users upgrade to 1.0.3
    - Usa*Usa Libraly: Smart.Windows.Navigation 0.4: Smart.Windows.Navigation simple navigation library ver 0.4.0. Includes Windows Forms & Compact Framework samples. Information - Smart.Windows.Mvc ...
    - VCC: Latest build, v2.1.30513.0: Automatic drop of latest build
    - WabbitStudio Z80 Software Tools: Wabbitcode: Wabbitcode is a Z80 Assembly IDE for Windows, OS X, and Linux. Built to take full advantage of the features of SPASM and Wabbitemu, Wabbitcode has...
    - white: Release 0.20: Source code: https://white-project.googlecode.com/svn/tags/0.20. Adds a few more keyboard keys like the Windows button and F13-F24. Fixed bugs for keyboar...
    - Wildcard Search Web Part for SharePoint 2010: Version 1.0 Release 1: This is the initial release of the Wildcard Search Web Part for SharePoint 2010. All queries will be issued as wildcards unless disabled with the ...
    - Windows Azure Command-line Tools for PHP Developers: Windows Azure Command-line Tools May 2010 Update: May 2010 Update – May 13, 2010. We are pleased to announce the May 2010 update of Windows Azure Command-Line Tools. In addition to bug fixes and i...
    - WinXmlCook: WinXmlCook 2.1: Version 2.1 released!
    - Xrns2XMod: Xrns2XMod 1.1: some source code optimization
    - 在线Office控件 Online Offical Control: SPOffice2.0Release: this version has been tested under MS Office 2003/2007, WPS 2009 and WPS 2010

    Most Popular Projects
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - PHPExcel
    - ASP.NET

    Most Active Projects
    - patterns & practices – Enterprise Library
    - Mirror Testing System
    - Rawr
    - BlogEngine.NET
    - PHPExcel
    - Microsoft Biology Foundation
    - white
    - Windows Azure Command-line Tools for PHP Developers
    - StyleCop
    - Shake - C# Make

  • SRs @ Oracle: How do I License Thee?

    - by [email protected]
    With the release of the new Sun Ray product last week comes the advent of a different software licensing model. Where Sun had initially taken the approach of "1 desktop device = one license", we later changed things to be "1 concurrent connection to the server software = one license", and while there were ways to tell how many connections there were at a time, it wasn't the easiest thing to do. And when should you measure concurrency? At your busiest time, of course... but when might that be? 9:00 Monday morning this week might yield a different result than 9:00 Monday morning last week.

    In the acquisition of this desktop virtualization product suite, Oracle has changed things to be, in typical Oracle fashion, simpler. There are now two choices for customers around licensing: Named User licenses and Per Device licenses. Here's how they work, and some examples.

    The Rules

    1) A Sun Ray device, and a PC running the Desktop Access Client (DAC), are both considered unique devices.
    OR
    2) Any user running a session on either a Sun Ray or a DAC is still just one user.

    So, you have a choice of path to go down.

    Some Examples

    Here are six use cases I can think of right now that will help you choose the Oracle server software licensing model that is right for your business.

    Case 1
    If I have 100 Sun Rays for 100 users, and 20 of them use DAC at home, that is 100 user licenses.
    If I have 100 Sun Rays for 100 users, and 20 of them use DAC at home, that is 120 device licenses.
    Two cases using the same metrics; different licensing models and therefore different results.

    Case 2
    If I have 100 Sun Rays for 200 users, and 20 of them use DAC at home, that is 200 user licenses.
    If I have 100 Sun Rays for 200 users, and 20 of them use DAC at home, that is 120 device licenses.
    Same metrics; very different results.

    Case 3
    If I have 100 Sun Rays for 50 users, and 20 of them use DAC at home, that is 50 user licenses.
    If I have 100 Sun Rays for 50 users, and 20 of them use DAC at home, that is 120 device licenses.
    Same metrics, but again, very different results.

    Based on the way your business operates you should be able to see which of the two licensing models is most advantageous to you. Got questions? I'll try to help.

    (Thanks to Brad Lackey for the clarifications!)
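    To make the two counting rules concrete, here is a small sketch of the arithmetic behind the cases above. The class and method names are hypothetical; it simply encodes "named user = number of distinct users" versus "per device = Sun Rays plus DAC PCs", assuming each DAC user connects from one distinct home PC.

        // Hypothetical illustration of the two license-counting models described above.
        public class SunRayLicenseCount {

            // Named User model: one license per distinct user, however many devices each uses.
            static int namedUserLicenses(int users) {
                return users;
            }

            // Per Device model: every Sun Ray and every DAC PC is a unique device.
            static int perDeviceLicenses(int sunRays, int dacPcs) {
                return sunRays + dacPcs;
            }

            public static void main(String[] args) {
                // {sunRays, users, dacUsers} for Cases 1-3 above
                int[][] cases = { {100, 100, 20}, {100, 200, 20}, {100, 50, 20} };
                for (int[] c : cases) {
                    System.out.printf("Sun Rays=%d, users=%d, DAC=%d -> %d user licenses or %d device licenses%n",
                            c[0], c[1], c[2],
                            namedUserLicenses(c[1]),
                            perDeviceLicenses(c[0], c[2]));
                }
            }
        }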

  • Problem with IIS 6.0 in WOW WCF 4 (.net 4.0)

    - by Kevin
    We just upgraded to WCF 4 on IIS 6.0 (running in WoW 32-bit mode), and all of a sudden the services started running into what appear to be concurrency problems. Upon finding out we had a problem, we changed the behavior configuration on the WCF server to the following:

        <serviceThrottling maxConcurrentCalls="1000"
                           maxConcurrentInstances="1000"
                           maxConcurrentSessions="1000" />

    We also changed the number of worker processes from 1 to 5. Doing all of this seemed to have no effect: the service seemed to be running, but throttled by something. Is there anything else that might need to be changed to remove the "artificial" throttling? We're using the default WCF configuration, which should be per-call (not singleton).

  • Reminder: True WCF Asynchronous Operation

    - by Sean Feldman
    A true asynchronous service operation is not one that returns void, but one that is marked as IsOneWay=true, using BeginX/EndX asynchronous operations (thanks Krzysztof). To support this sort of fire-and-forget invocation, Windows Communication Foundation offers one-way operations. After the client issues the call, Windows Communication Foundation generates a request message, but no correlated reply message will ever return to the client. As a result, one-way operations can't return values, and any exception thrown on the service side will not make its way to the client. One-way calls do not equate to asynchronous calls. When one-way calls reach the service, they may not be dispatched all at once; they may be queued up on the service side to be dispatched one at a time, all according to the service's configured concurrency mode and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call. However, once the call is queued, the client is unblocked and can continue executing while the service processes the operation in the background. This usually gives the appearance of asynchronous calls.

  • VS 2010 IDE Features in a nutshell

    - by Rajesh Pillai
    Going through the VS 2010 IDE features. We will explore each feature in subsequent posts. The posts are documented as they are reviewed by me.

    - Breakpoint Labeling
    - Breakpoint Searching
    - Breakpoint Import/Export
    - Dynamic Data Tooling
    - WPF Tree Visualizer
    - Call Hierarchy
    - Improved WPF Tooling
    - Historical Debugging
    - Mini-Dump Debugging
    - Quick Search
    - Better Multi-Monitor Support
    - Highlight References
    - Parallel Stacks Window
    - Parallel Tasks Window
    - Document Map Margin
    - Generate from Usage
    - Concurrency Profiler
    - Inline Call Tree
    - Extensible Test Runner
    - MVC Tooling
    - Web Deploy
    - JQuery IntelliSense
    - SharePoint Tooling
    - HTML Snippets
    - Web.config Transformation
    - ClickOnce Enhancements for MS Office

    VS is an editor as well as a platform for development, and this is only more true with VS 2010. As an editor there is improved focus on writing code, understanding code, navigating and publishing code. The VS shell has been completely rewritten using WPF, bringing huge benefits. The start page has been rewritten using XAML, so it is easy to customize. There is new support for Silverlight, MFC and F#, support for Azure, and extended support for Office 2010 and SharePoint. It has a good Extension Manager as well.

    Enjoy coding!!!

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is "pushing boundaries", and first up is SQL Server 2014's In-Memory OLTP technology (former codename "Hekaton"). It is a feature that provokes a lot of interest, and for good reason: without any need for application rewrites or hardware updates, it can enable an application to find in memory most or all of the data it needs, and it can lead to huge improvements in processing times. A good recent Hekaton use-cases article talks about applications that need a "shock absorber" when either spikes or just a high rate of incoming workload (including data in ETL scenarios) become a primary bottleneck. To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start to define memory-optimized tables and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control: no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, including one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.

  • C# via Java: Introduction

    - by simonc
    Originally posted on: http://geekswithblogs.net/simonc/archive/2013/11/08/c-via-java-introduction.aspx

    So, I've recently changed jobs. Rather than working in .NET land, I've migrated over to Java land. But never fear! I'll continue to peer under the covers of .NET, but my next series will use my new experience in Java to explore the design decisions made in the development of the C# programming language. After all, the design of C# was based on Java 1.2, and both languages have continued to evolve since then, incorporating modern software engineering concepts and requirements. Exploring the differences and similarities between the two will (hopefully) give us a deeper understanding into why .NET is implemented the way it is, the trade-offs involved, and what choices were made when new features were designed and added to the language and framework. Among others, I'll be looking at differences in:

    - Primitives
    - Operators
    - Generics
    - Exceptions
    - Accessibility
    - Collections
    - Delegates and inner classes
    - Concurrency

    In my next post, I'll start off by looking at the type primitives available in each language, and how Java and C# actually incorporate two different concepts of primitive types in their fundamental language design and use. I'm also thinking of looking at the inner details of Java and the JVM in my blogs, as well as C# and the CLR. If you've got any comments or thoughts on this, please let me know.

  • Configuring jdbc-pool (tomcat 7)

    - by john
    I'm having some problems with Tomcat 7 when configuring jdbc-pool. I've tried to follow this example: http://www.tomcatexpert.com/blog/2010/04/01/configuring-jdbc-pool-high-concurrency

    So I have, in conf/server.xml:

        <GlobalNamingResources>
            <Resource type="javax.sql.DataSource"
                      name="jdbc/DB"
                      factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
                      driverClassName="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/mydb"
                      username="user"
                      password="password" />
        </GlobalNamingResources>

    and in conf/context.xml:

        <Context>
            <ResourceLink type="javax.sql.DataSource"
                          name="jdbc/LocalDB"
                          global="jdbc/DB" />
        </Context>

    And when I try to do this:

        Context initContext = new InitialContext();
        Context envContext = (Context) initContext.lookup("java:/comp/env");
        DataSource datasource = (DataSource) envContext.lookup("jdbc/LocalDB");
        Connection con = datasource.getConnection();

    I keep getting this error:

        javax.naming.NameNotFoundException: Name jdbc is not bound in this Context
            at org.apache.naming.NamingContext.lookup(NamingContext.java:803)
            at org.apache.naming.NamingContext.lookup(NamingContext.java:159)

    Please help, thanks.
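    For comparison, the pool this question refers to can also be created programmatically, bypassing JNDI entirely. Below is a minimal sketch assuming the org.apache.tomcat.jdbc.pool API described in the linked article (PoolProperties plus DataSource); the connection details reuse the question's own placeholders, and the pool sizes are illustrative.

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.Statement;

        import org.apache.tomcat.jdbc.pool.DataSource;
        import org.apache.tomcat.jdbc.pool.PoolProperties;

        public class JdbcPoolExample {
            public static void main(String[] args) throws Exception {
                // Pool configuration equivalent to the <Resource> element above.
                PoolProperties p = new PoolProperties();
                p.setUrl("jdbc:mysql://localhost:3306/mydb");
                p.setDriverClassName("com.mysql.jdbc.Driver");
                p.setUsername("user");
                p.setPassword("password");
                p.setMaxActive(100);   // tune for expected concurrency
                p.setInitialSize(10);

                DataSource datasource = new DataSource();
                datasource.setPoolProperties(p);

                // Borrow a connection from the pool; close() returns it to the pool.
                Connection con = null;
                try {
                    con = datasource.getConnection();
                    Statement st = con.createStatement();
                    ResultSet rs = st.executeQuery("SELECT 1");
                    while (rs.next()) {
                        System.out.println(rs.getInt(1));
                    }
                    rs.close();
                    st.close();
                } finally {
                    if (con != null) con.close();
                }
            }
        }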

  • Announcing Berkeley DB Java Edition Major Release

    - by Eric Jensen
    Berkeley DB Java Edition 5.0 was just released. There are a number of new features, enhancements, and options in there that our users have been asking for. Chief among them is a new class called DiskOrderedCursor, which greatly increases performance on systems using spinning-platter magnetic hard drives. A number of users expressed interest in this feature, including Alex Feinberg of LinkedIn. Berkeley DB Java Edition is part of Project Voldemort, a distributed key/value database used by LinkedIn. There have been many other improvements and optimizations. Concurrency is significantly improved, as is the performance of update and delete operations. New and interesting methods include Environment.preload, which allows multiple databases to be preloaded simultaneously. New Cursor methods enable more effective searching through the database. We continue to enhance Berkeley DB Java Edition's High Availability as well. One new feature is the ability to open a replicated node read-only when the master is unavailable. This can allow critical systems to continue offering some functionality, even during a network or master node failure. There's a lot more in release 5.0. I encourage you to take a look at the extensive changelog yourself. As always, you can download the new release and try it out here: http://www.oracle.com/technetwork/database/berkeleydb/downloads/index.html
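    To show what the new class looks like in use, here is a minimal sketch of a full scan with DiskOrderedCursor. It assumes the JE 5.0 API (Database.openCursor taking a DiskOrderedCursorConfig) and an existing environment directory, so treat the details as illustrative rather than definitive.

        import java.io.File;

        import com.sleepycat.je.Database;
        import com.sleepycat.je.DatabaseConfig;
        import com.sleepycat.je.DatabaseEntry;
        import com.sleepycat.je.DiskOrderedCursor;
        import com.sleepycat.je.DiskOrderedCursorConfig;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;
        import com.sleepycat.je.OperationStatus;

        public class DiskOrderedScan {
            public static void main(String[] args) {
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                // assumption: the directory /tmp/je-env already exists
                Environment env = new Environment(new File("/tmp/je-env"), envConfig);

                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                Database db = env.openDatabase(null, "sample", dbConfig);

                // Scan in on-disk (log) order rather than key order; this is what
                // makes full scans cheap on spinning disks.
                DiskOrderedCursorConfig docConfig = new DiskOrderedCursorConfig();
                DiskOrderedCursor cursor = db.openCursor(docConfig);
                try {
                    DatabaseEntry key = new DatabaseEntry();
                    DatabaseEntry data = new DatabaseEntry();
                    // null lock mode: disk-ordered reads are effectively uncommitted reads
                    while (cursor.getNext(key, data, null) == OperationStatus.SUCCESS) {
                        // process key/data; records arrive in disk order, not key order
                    }
                } finally {
                    cursor.close();
                }

                db.close();
                env.close();
            }
        }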

  • Parallel Computing Platform Developer Lab

    - by Daniel Moth
    This is an exciting announcement that I must share: "Microsoft Developer & Platform Evangelism, in collaboration with the Microsoft Parallel Computing Platform product team, is hosting a developer lab at the Platform Adoption Center on April 12-15, 2010. This event is for Microsoft Partners and Customers seeking to incorporate either .NET Framework 4 or Visual C++ 2010 parallelism features into their new or existing applications, and to gain expertise with new Visual Studio 2010 tools including the Parallel Tasks and Parallel Stacks debugger tool windows, and the Concurrency Visualizer in the profiler. Opportunities for attendees include:

    - Gain expert design assistance with your Parallel Computing Platform based solution.
    - Develop a solution prototype in collaboration with Microsoft Software Engineers.
    - Attend topical presentations and "chalk-talk" sessions.
    - Your team will be assigned private, secure offices for confidential collaboration activities.

    The event has limited capacity, thus enrollment is based on an application process. Please download and complete the application form, then return it to the event management team per the instructions included within the form. Applications will be evaluated based upon the technical solution scenario along with indicated project readiness timelines. Microsoft event management team members may contact you directly for additional clarification and discussion of your project scenario during the nomination process." Comments about this post welcome at the original blog.

  • AIOUG TechDay @ Lovely Professional University, Jalandhar, India

    - by Tori Wieldt
    by guest blogger Jitendra Chittoda, co-leader, Delhi and NCR JUG

    On 30 August 2013, Lovely Professional University (LPU) Jalandhar organized an All India Oracle User Group (AIOUG) TechDay event on Oracle and Java. This was a full-day event with various sessions on Java EE 6, Java concurrency, NoSQL, MongoDB, Oracle 12c, Oracle ADF, etc. The response from students was overwhelming: the auditorium was jam-packed with 600+ energetic LPU students from the B.Tech and MCA streams.

    Navein Juneja, Sr. Director at LPU, gave the keynote and introduced the speakers from AIOUG and the Delhi & NCR Java User Group (DaNJUG). Mr. Juneja spoke about LPU and its students, and explained how Oracle and Java are among the most used and accepted technologies in the world. Rohit Dhand, Additional Dean at LPU, came on stage and shared how his career started with Oracle databases. He encouraged students to learn these technologies and build their careers. Satyendra Kumar, Vice President of AIOUG, thanked LPU and their staff for organizing such a good technical event, and the students for their overwhelming response. He talked about the India Oracle user group and its events at various geographical locations all over India. Jitendra Chittoda, Co-Leader of DaNJUG, explained how to start a new Java User Group (JUG), what its benefits are, and how to promote it. He also explained how the Indian JUGs are contributing to initiatives like Adopt-a-JSR and Adopt-OpenJDK. After the inaugural address, the event started with two different tracks: one for Oracle Database and another for Java and its related technologies.

    Speakers:
    - Satyendra Kumar Pasalapudi (Co-founder and Vice President of AIOUG)
    - Aman Sharma (Oracle Database Consultant and Instructor)
    - Shekhar Gulati (OpenShift Developer Evangelist at RedHat)
    - Rohan Walia (Oracle ADF Consultant at Oracle)
    - Jitendra Chittoda (Co-leader Delhi & NCR JUG and Senior Developer at ION Trading)

  • New Release of Oracle Berkeley DB

    - by Eric Jensen
    We are pleased to announce that a new release of Oracle Berkeley DB, version 11.2.5.2.28, is available today. Our latest release includes yet more value-added features for SQLite users, as well as several performance enhancements and new customer-requested features in the key-value pair API. We continue to provide technology leadership, features and performance for SQLite applications. This release introduces additional features that are not available in native SQLite, and adds functionality allowing customers to create richer, more scalable, more concurrent applications using the Berkeley DB SQL API.

    This release is compelling to Oracle's customers and partners because it:
    - delivers a complete, embeddable SQL92 database as a library under 1MB in size
    - is a drop-in, API-compatible replacement for SQLite version 3
    - offers no-oversight, zero-touch database administration
    - provides the industrial-quality, battle-tested Berkeley DB B-TREE for concurrent transactional data storage

    New features include:
    - MVCC support for even higher concurrency
    - direct SQL support for HA/replication
    - transactionally protected sequence number generation functions
    - lower memory requirements, shared memory regions and faster/smaller memory use on startup
    - easier B-TREE page size configuration with the new "db_tuner" utility

    New key-value API features include:
    - HEAP access method for constrained disk-space applications
    - faster QUEUE access method operations for highly concurrent applications, up to 2-3X faster!
    - new X/Open compliant XA resource manager, easily integrated with Oracle Tuxedo
    - additional HA/replication management and communication options
    - and a lot more!

    BDB is hands-down the best edge, mobile, and embedded database available to developers. Downloads are available today on the Berkeley DB download page, along with the product documentation.
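    For readers new to the key-value pair API mentioned above, here is a minimal put/get sketch against Berkeley DB's Java binding (the com.sleepycat.db package). It is illustrative only: it uses the long-standing BTREE access method (with the new HEAP method noted in a comment), and the environment path and database name are invented.

        import java.io.File;

        import com.sleepycat.db.Database;
        import com.sleepycat.db.DatabaseConfig;
        import com.sleepycat.db.DatabaseEntry;
        import com.sleepycat.db.DatabaseType;
        import com.sleepycat.db.Environment;
        import com.sleepycat.db.EnvironmentConfig;
        import com.sleepycat.db.LockMode;
        import com.sleepycat.db.OperationStatus;

        public class BdbKeyValueExample {
            public static void main(String[] args) throws Exception {
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                envConfig.setInitializeCache(true);
                // assumption: the directory /tmp/bdb-env already exists
                Environment env = new Environment(new File("/tmp/bdb-env"), envConfig);

                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                // BTREE here; HEAP is this release's new option for constrained disk space
                dbConfig.setType(DatabaseType.BTREE);
                Database db = env.openDatabase(null, "kv.db", null, dbConfig);

                // put a key/value pair
                DatabaseEntry key = new DatabaseEntry("greeting".getBytes("UTF-8"));
                DatabaseEntry value = new DatabaseEntry("hello, berkeley".getBytes("UTF-8"));
                db.put(null, key, value);

                // get it back
                DatabaseEntry found = new DatabaseEntry();
                if (db.get(null, key, found, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                    System.out.println(new String(found.getData(), "UTF-8"));
                }

                db.close();
                env.close();
            }
        }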

  • New Packt Books: APEX & JRockit

    - by [email protected]
    I have received these two ebooks from Packt Publishing and I am currently reviewing them. Both of them look great so far.

    Oracle Application Express 3.2 - The Essentials and More

    First of all, I have to mention that I am new to APEX. I was interested in this product, which is a development tool for web applications on the Oracle Database. As I support JDeveloper and ADF, which are products that work very closely with the Oracle Database and are rapid development tools as well, it is always interesting and useful to know complementary tools. APEX looks very useful and the book includes many working examples. A more complete review of this book is coming soon. Further information about this book can be seen at Packt.

    Oracle JRockit: The Definitive Guide

    Many of our Oracle Coherence customers run their caches and clusters using JRockit. This JVM has helped us to solve lots of Service Requests. It is a really reliable, fast and stable JVM. It works great in both development and production environments with big amounts of data, concurrency, multi-threading and many other factors that can make a JVM crash. I must also mention JRockit Mission Control (JRMC), which is a great tool for management and monitoring. I really recommend it. As a matter of fact, some months ago I created a document entitled "How to Monitor Coherence-Based Applications using JRockit Mission Control" (Doc Id 961617.1) on My Oracle Support. Also, the JRockit Runtime Analyzer (JRA) and its successor in newer versions, the JRockit Flight Recorder (JFR), are deeply reviewed. This book contains very clear and complete information about all this and more. I will post an entry with a more complete review soon (and will probably post an entry about Coherence monitoring with JRMC soon too). Further information about this book can be seen at Packt.

  • The Java Specialist: An Interview with Java Champion Heinz Kabutz

    - by Janice J. Heiss
    Dr. Heinz Kabutz is well known for his Java Specialists’ Newsletter, initiated in November 2000, in which he displays his acute grasp of the intricacies of the Java platform for an estimated 70,000 readers; for his work as a consultant; and for the workshops and training courses he runs from his home on the Island of Crete, where he has lived since 2006 -- and where he is known to curl up on the beach with his laptop to hack away, in between dips in the Mediterranean. Kabutz was born of German parents and raised in Cape Town, South Africa, where he developed a love of programming in junior high school through his explorations on a ZX Spectrum computer. He received a B.S. from the University of Cape Town, and at 25, a Ph.D., both in computer science. He will be leading a two-hour hands-on lab session, HOL6500 – “Finding and Solving Java Deadlocks,” at this year’s JavaOne that will explore what causes deadlocks and how to solve them.

    Q: Tell us about your JavaOne plans.
    A: I am arriving on Sunday evening and have just one hands-on lab to do on Monday morning. This is the first time that a non-Oracle team is doing a HOL at JavaOne under Oracle's stewardship, and we are all a bit nervous about how it will turn out. Oracle has been immensely helpful in getting us set up. I have a great team helping me: Kirk Pepperdine, Dario Laverde, Benjamin Evans and Martijn Verburg from jClarity, Nathan Reynolds from Oracle, Henri Tremblay of OCTO Technology, and Jeff Genender of Savoir Technologies. Monday will be hard work, but after that I will hopefully get to network with fellow Java experts, attend interesting sessions, and just enjoy San Francisco. Oh, and my kids have already given me a shopping list of things to get, like a GoPro Hero 2 dive housing for shooting those nice videos of Crete. (That's me at the beginning diving down.)

    Q: What sessions are you attending that we should know about?
    A: Sometimes the most unusual sessions are the best. I avoid the "big names". They are often spread too thin across all their sessions, which makes it difficult for them to deliver what I would consider deep content. I also avoid entertainers who might be good at presenting but who do not say that much. In 2010, I attended a session by Vladimir Yaroslavskiy where he talked about sorting. Although he struggled to speak English, what he had to say was spectacular. There was hardly anybody in the room, since few had heard of Vladimir before. To me that was the highlight of 2010. Funnily enough, he was supposed to speak with Joshua Bloch, but if you remember, Google cancelled. If Bloch had been there, the room would have been packed to capacity.

    Q: Give us an update on the Java Specialists’ Newsletter.
    A: The Java Specialists' Newsletter continues to be read by an elite audience around the world. The apostrophe in the name is significant: it is a newsletter for Java specialists. When I started it twelve years ago, I was trying to find non-obvious things in Java to write about, things that would be interesting to an advanced audience. As an April Fool's joke, I told my readers in Issue 44 that subscribing would remain free, but that unsubscribing would cost US$5 to US$7, depending on their geographical location. I received quite a few angry emails from that one. I would not have earned much from unsubscriptions anyway; most readers stay for a very long time.
    After Oracle bought Sun, the Java community held its breath for about two years whilst Oracle was figuring out what to do with Java. For a while, we were quite concerned that there was not much progress shown by Oracle. My newsletter still continued, but it was quite difficult finding new things to write about. We have probably about 70,000 readers, which is quite a small number for a Java publication. However, our readers are the top in the Java industry, so I don't mind having "only" 70,000 readers, as long as they are the top 0.7%.
    Java concurrency is a very important topic that programmers think they should know about, but often neglect to fully understand. I continued writing about that and made some interesting discoveries. For example, in Issue 165 I showed how we can get thread starvation with the ReadWriteLock. This was a bug in Java 5, which was corrected in Java 6, but perhaps a bit too much: whereas we could get starvation of writers in Java 5, in Java 6 we could now get starvation of readers. All of these interesting findings make their way into my courseware to help companies avoid these pitfalls.
    Another interesting discovery was how polymorphism works in the Server HotSpot compiler, covered in Issue 157 and Issue 158. HotSpot can inline methods from interfaces that have only one implementation class in the JVM. When a new subclass is instantiated and called for the first time, the JVM will undo the previous optimization and re-optimize differently.
    Here is a little memory puzzle for your readers:

        public class JavaMemoryPuzzle {
          private final int dataSize =
              (int) (Runtime.getRuntime().maxMemory() * 0.6);

          public void f() {
            {
              byte[] data = new byte[dataSize];
            }
            byte[] data2 = new byte[dataSize];
          }

          public static void main(String[] args) {
            JavaMemoryPuzzle jmp = new JavaMemoryPuzzle();
            jmp.f();
          }
        }

    When you run this you will always get an OutOfMemoryError, even though the local variable data is no longer visible outside of the code block. So here comes the puzzle that I'd like you to ponder a bit: if you very politely ask the VM to release memory, then you don't get an OutOfMemoryError:

        public class JavaMemoryPuzzlePolite {
          private final int dataSize =
              (int) (Runtime.getRuntime().maxMemory() * 0.6);

          public void f() {
            {
              byte[] data = new byte[dataSize];
            }
            for (int i = 0; i < 10; i++) {
              System.out.println("Please be so kind and release memory");
            }
            byte[] data2 = new byte[dataSize];
          }

          public static void main(String[] args) {
            JavaMemoryPuzzlePolite jmp = new JavaMemoryPuzzlePolite();
            jmp.f();
            System.out.println("No OutOfMemoryError");
          }
        }

    Why does this work? When I published this in my newsletter, I received over 400 emails from excited readers around the world, most of whom sent me the wrong explanation. After the 300th wrong answer, my replies unfortunately became a bit curt. Have a look at Issue 174 for a detailed explanation, but before you do, put on your thinking cap and try to figure it out yourself.

    Q: What do you think Java developers should know that they currently do not know?
    A: They should definitely get to know more about concurrency. It is a tough subject that most programmers try to avoid. Unfortunately we do come into contact with it, and when we do, we need to know how to protect ourselves and how to solve tricky system errors. Knowing your IDE is also useful; most IDEs have a ton of shortcuts, which can make you a lot more productive in moving code around. Another useful skill is being able to read GC logs. Kirk Pepperdine has a great talk at JavaOne that I can recommend if you want to learn more. It's this: CON5405 – “Are Your Garbage Collection Logs Speaking to You?”

    Q: What are you looking forward to in Java 8?
    A: I'm quite excited about lambdas, though I must confess that I have not studied them in detail yet. Maurice Naftalin's Lambda FAQ is quite a good starting document on what you can do with them. I'm looking forward to finding all the interesting bugs that we will now get due to lambdas obscuring what is really going on underneath, just like we had with generics. I am quite impressed with what the team at Oracle did with OpenJDK's performance; a lot of the benchmarks now run faster. Hopefully Java 8 will come with JSR 310, the Date and Time API. It still boggles my mind that such an important API has been left out in the cold for so long.
    What I am not looking forward to is losing perm space. Even though some systems run out of perm space, at least the problem is contained and they usually manage to work around it. In most cases, this is due to a memory leak in that region of memory. Once they bundle perm space with the old generation, I predict that memory leaks in perm space will be harder to find. More contracts for us, but also more pain for our customers. Originally published on blogs.oracle.com/javaone.
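
    The ReadWriteLock starvation described above is easiest to picture with a small example. The sketch below is not the Issue 165 test case, just a minimal, hypothetical illustration of the contention pattern: many threads hammering the read lock while one thread wants the write lock. Which side starves depends on the JDK version and the lock's fairness policy.

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        public class ReadWriteLockStarvationSketch {
          static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
          static int value = 0;

          public static void main(String[] args) {
            // Many readers repeatedly take the shared read lock...
            for (int i = 0; i < 8; i++) {
              new Thread(new Runnable() {
                public void run() {
                  while (true) {
                    lock.readLock().lock();
                    try {
                      int snapshot = value; // read-only critical section
                    } finally {
                      lock.readLock().unlock();
                    }
                  }
                }
              }).start();
            }
            // ...while a single writer competes for the exclusive write lock.
            // On Java 5 this writer could starve; after the Java 6 fix,
            // readers could starve instead, as described in the interview.
            new Thread(new Runnable() {
              public void run() {
                while (true) {
                  lock.writeLock().lock();
                  try {
                    value++;
                  } finally {
                    lock.writeLock().unlock();
                  }
                }
              }
            }).start();
          }
        }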

    Read the article

  • Tab Sweep - Upgrade to Java EE 6, Groovy NetBeans, JSR310, JCache interview, OEPE, and more

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more:
    • Implementing JSR 310 (New Date/Time API) in Java 8 Is Very Strongly Favored by Developers (java.net)
    • Upgrading To The Java EE 6 Web Profile (Roger)
    • NetBeans for Groovy (blogs.oracle.com)
    • Client Side MOXy JSON Binding Explained (Blaise)
    • Control CDI Containers in SE and EE (Strub)
    • Java EE on Google App Engine: CDI to the Rescue - Aleš Justin (jaxenter)
    • The Java EE 6 Example - Testing Galleria - Part 4 (Markus)
    • Why is OpenWebBeans so fast? (Strub)
    • Welcome to the new Oracle Enterprise Pack for Eclipse Blog (blogs.oracle.com)
    • Java Spotlight Episode 75: Greg Luck on JSR 107 Java Temporary Caching API (Spotlight Podcast)
    • Glassfish cluster installation and administration on top of SSH + public key (Paulo)
    • Jfokus 2012 on Parleys.com (Parleys)
    • Java Tuning in a Nutshell - Part 1 (Rupesh)
    • New Features in Fork/Join from Java Concurrency Master, Doug Lea (DZone)
    • A Java7 Grammar for VisualLangLab (Sanjay)
    • Glassfish version 3.1.2: Secure Admin must be enabled to access the DAS remotely (Charlee)
    • Oracle Announces the Certification of the Oracle Database on Oracle Linux 6 and Red Hat Enterprise Linux 6

    Read the article

  • What are some arguments AGAINST using EntityFramework?

    - by Rachel
    The application I am currently building has been using stored procedures and hand-crafted class models to represent database objects. Some people have suggested using Entity Framework, and I am considering switching to it since I am not that far into the project. My problem is, I feel the people arguing for EF are only telling me the good side of things, not the bad side :)

    My main concerns are:
    • We want client-side validation using DataAnnotations, and it sounds like I have to create the client-side models anyway, so I am not sure that EF would save that much coding time.
    • We would like to keep the classes as small as possible when going over the network, and I have read that using EF often includes extra data that is not needed.
    • We have a complex database layer which crosses multiple databases, and I am not sure EF can handle this. We have one Common database with things like Users, StatusCodes, Types, etc., and multiple instances of our main databases for different instances of the application. SELECT queries can and will query across all instances of the databases; however, users can only modify objects that are in the database they are currently working on, and they can switch databases without reloading the application.
    • Object models are very complex and there are often quite a few joins involved.

    Arguments for EF are:
    • Concurrency: I wouldn't have to code in checks to see if the record was updated before each save.
    • Code generation: EF can generate partial class models and POCOs for me; however, I am not positive this would really save me that much time, since I think we would still need to create the client-side models for validation and some custom parsing methods.
    • Speed of development, since we wouldn't need to create the CRUD stored procedures for every database object.

    Our current architecture consists of a WCF service which handles database calls via parameterized stored procedures, POCO objects that go to/from the WCF service and the WPF client, and the WPF client itself, which transforms POCOs into class models for the purpose of validation and data binding.

    Read the article

  • How accurate is apache benchmark?

    - by matthewsteiner
    Alright, so I'm in development right now and I'd like to understand exactly how good these benchmarks are. I've just been using Apache Bench (ab). Do its numbers include the server sending the files? Also, is "requests per second" literally how many users can visit the page within one second? If it's at 30 requests per second, can 30 people literally be refreshing the page every second and the server will be fine? That seems like a lot to me. I know a lot of people get far better stats out of their servers, but I haven't done much optimization yet. Also, will increasing your RAM increase your requests per second linearly? I have 512 MB, so if I upgrade to 1 GB, would that mean I'd get about 60 rps? And how does concurrency affect your rps?
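
    For reference, ab's two relevant knobs here are -n (the total number of requests to send) and -c (how many of those are kept in flight concurrently). A typical run against a local server looks like this (the URL is illustrative):

        ab -n 1000 -c 30 http://127.0.0.1/

    The "Requests per second" figure ab reports is throughput at that concurrency level, i.e. completed requests divided by elapsed wall-clock time. It is not a count of distinct users: 30 rps at -c 30 means the server sustained 30 overlapping requests while completing, on average, 30 of them per second.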

    Read the article

  • Java Spotlight Episode 89: Geoff Morton on Java Embedded

    - by Roger Brinkley
    Interview with Geoff Morton, Group Vice President, Worldwide Java Sales at Oracle, on Java embedded. Joining us this week on the Java All Star Developer Panel are Dalibor Topic, Java Free and Open Source Software Ambassador, and Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes
    News:
    • EclipseLink 2.4
    • Hands-on FREE GlassFish Course
    • NetBeans IDE 7.2 RC1
    • Hamish Morrison: OpenJDK Haiku port: quarter term report
    • Proposed Update to the OpenJDK Web Site Terms of Use
    • JavaOne Embedded
    • Oracle Java ME Embedded Client (OJEC) 1.1 release on OTN
    New Videos:
    • Understanding the JVM and Low Latency Applications
    • 55 New Things in Java 7 - Concurrency
    Events:
    • July 5, Java Forum, Stuttgart, Germany
    • Jul 12, Java EE 6 workshop at Mindtree, Bangalore
    • Jul 13-14, IndicThreads, Delhi
    • July 30-August 1, JVM Language Summit, Santa Clara
    Feature Interview: Geoff Morton is the Group Vice President, Worldwide Java Sales at Oracle.
    Mail Bag
    What’s Cool:
    • Duke’s Choice Awards decision is going on
    • Java Champions Facebook Page
    • Joe Darcy: Moving monarchs and dragons: migrating the JDK bugs to JIRA
    • Mike Duigou: Updated Lambda Binary Drops
    • Mark Reinhold: Mercurial "jcheck" extension now available

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are:
    1) Support for multiple table mappings
    2) A background thread to auto-commit long-running transactions
    3) Enhanced binlog performance
    Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support for multiple table mappings
    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, with different mappings for different connections, is therefore a very desirable feature, and in this GA release we support both. We will discuss the key concepts and key steps in using this feature.

    1) The "mapping name" in the "get" and "set" commands
    To allow InnoDB Memcached to map to a new table, the user (DBA) must still "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once a table is registered, InnoDB Memcached will be able to look it up when it is referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" signals a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, by default ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first maps the InnoDB table "test/demo_test" to the mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test",
            "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" under the name "setup_2", and for table "my_database/my_demo" under the name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo",
            "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo",
            "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues:

        get @@setup_3.key_a

    (this also outputs the value corresponding to key "key_a"), or simply:

        get @@setup_3

    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue to issue regular commands such as:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter
    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table, named "table_map_delimiter":

        INSERT INTO config_options VALUES ("table_map_delimiter", ".");

    So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping
    Once we have multiple table mappings, there should always be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default; otherwise, the first row of the containers table is chosen. Please note that user tables can be repeated in the "containers" table (for example, when the user wants to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command
    In addition, we extended the protocol and added a bind command. Its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches the connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now direct access to different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions
    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations, and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, row locks will be held for extended periods of time, which can reduce concurrency.
    To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (by default, 5 seconds). The background thread wakes up every second and loops through every "connection" opened by Memcached, checking for idle transactions. If a transaction has been idle longer than the limit and is not being used, it is committed. This limit is configurable through "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds.
    With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. It also reduces the number of locks that could be held by long-running transactions, further increasing concurrency.

    Enhancement in binlog performance
    As you may know, binlog operations are not done by the InnoDB storage engine; they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs (such as THD) in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these constructs for each operation. This required us to force the "batch" size to 1 whenever binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and the overhead of repeatedly creating and destroying these constructs bogged performance down.
    With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. There is still overhead that keeps InnoDB Memcached from performing as fast as it does with binlog turned off, but it is much better than the previous release, and we continue to optimize in this area to improve performance as much as possible.

    Performance study
    Amerandra of our System QA team conducted some performance studies comparing queries through the InnoDB Memcached connection and through the plain SQL end, with some interesting results. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, an Intel Xeon 2.0 GHz X86_64 CPU (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). Results are described in the following tables.

    Table 1: Performance comparison on Set operations

        Connections | Memcached plugin Set (QPS), memcached-threads=8*** | 5.6.7-RC* Set (QPS)** | X faster
        8           | 30,000                                             | 5,600                 | 5.36
        32          | 59,000                                             | 13,000                | 4.54
        128         | 68,000                                             | 8,000                 | 8.50
        512         | 63,000                                             | 6,800                 | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented through InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. On the SQL side this was implemented as a precompiled stored procedure, so the query just executes that procedure.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

        Connections | Memcached plugin Get (QPS), memcached-threads=8 | 5.6.7-RC* Get (QPS) | X faster
        8           | 42,000                                          | 27,000              | 1.56
        32          | 101,000                                         | 55,000              | 1.83
        128         | 117,000                                         | 52,000              | 2.25
        512         | 109,000                                         | 52,000              | 2.10

    With the "get" query (i.e. the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary: we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background thread to commit long-running idle transactions, which lets users configure large batched writes/reads without worrying about large numbers of rows being locked or uncommitted data being invisible. We also greatly enhanced performance when binlog is enabled. We will continue making efforts in both performance enhancement and functionality to make InnoDB Memcached a good demonstration of our InnoDB APIs. Jimmy Yang, September 29, 2012
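
    To make the mapping-name syntax from the first section concrete, here is a hedged sketch of a Java client driving it through the spymemcached library. The host, port, and the setup_1/setup_3 mapping names registered in the "containers" table above are assumptions carried over from the example; any memcached client that can send these key strings would behave the same way.

        import java.net.InetSocketAddress;

        import net.spy.memcached.MemcachedClient;

        public class InnoDBMemcachedMappingSketch {
          public static void main(String[] args) throws Exception {
            // The memcached plugin listens on the standard port by default.
            MemcachedClient client =
                new MemcachedClient(new InetSocketAddress("localhost", 11211));

            // Plain keys go to the default mapping (here, setup_1).
            client.set("key_1", 0, "value_1").get();

            // The "@@mapping_name.key" form switches this connection to
            // the my_database/my_demo table and reads key_a in one step.
            Object valueA = client.get("@@setup_3.key_a");
            System.out.println("setup_3.key_a = " + valueA);

            // Subsequent plain commands stay bound to setup_3's table.
            client.set("key_b", 0, "value_b").get();

            client.shutdown();
          }
        }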

    Read the article

  • Talking JavaOne with Rock Star Charles Nutter

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers from the JavaOne Conference. They are chosen by their peers, who through conference surveys recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. We spoke with distinguished Rock Star, Charles Nutter.

    A JRuby Update from Charles Nutter
    Charles Nutter of Red Hat is well known as a lead developer of JRuby, an implementation of the Ruby language written in Java and tightly integrated with it, allowing the interpreter to be embedded into any Java application with full two-way access between the Java and the Ruby code. Nutter is giving the following sessions at this year’s JavaOne:
    • CON7257 – “JVM Bytecode for Dummies (and the Rest of Us Too)”
    • CON7284 – “Implementing Ruby: The Long, Hard Road”
    • CON7263 – “JVM JIT for Dummies”
    • BOF6682 – “I’ve Got 99 Languages, but Java Ain’t One”
    • CON6575 – “Polyglot for Dummies” (both with Thomas Enebo)

    I asked Nutter to give us the latest on JRuby. “JRuby seems to have hit a tipping point this past year,” he explained, “moving from ‘just another Ruby implementation’ to ‘the best Ruby implementation for X,’ where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications. We're currently wrapping up JRuby 1.7, which improves support for Ruby 1.9 APIs, solves a number of user issues and concurrency challenges, and utilizes invokedynamic to outperform all other Ruby implementations by a wide margin. JRuby just gets better and better.”
    When asked what he thought about the rapid growth of alternative languages for the JVM, he replied, “I'm very intrigued by efforts to bring a high-performance JavaScript runtime to the JVM. There's really no reason the JVM couldn't be the fastest platform for running JavaScript with the right implementation, and I'm excited to see that happen.”
    And what is Nutter working on currently? “Aside from JRuby 1.7 wrap-up,” he explained, “I'm helping the Hotspot developers investigate invokedynamic performance issues and test-driving their new invokedynamic code in Java 8. I'm also starting to explore ways to improve the general state of dynamic languages on the JVM using JRuby as a guide, and to help the JVM become a better platform for all kinds of languages.”
    Originally published on blogs.oracle.com/javaone.
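
    The two-way embedding mentioned in the introduction is easiest to see with JRuby's Embed API (the ScriptingContainer). Below is a minimal sketch, assuming the JRuby jar is on the classpath; the variable name and the Ruby snippet are invented for illustration.

        import org.jruby.embed.ScriptingContainer;

        public class JRubyEmbedSketch {
          public static void main(String[] args) {
            ScriptingContainer container = new ScriptingContainer();

            // Java -> Ruby: make a Java value visible to the script.
            container.put("greeting", "Hello from Java");

            // Ruby -> Java: the value of the last Ruby expression is returned.
            Object squares = container.runScriptlet(
                "puts greeting\n" +
                "(1..5).map { |n| n * n }");

            System.out.println("Returned to Java: " + squares);
          }
        }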

    Read the article

  • Pattern for Accessing MySQL connection

    - by Dipan Mehta
    We have a C++ application that accesses a MySQL database. There are several (about 5 or so) threads in the application (using the Boost library for threading), and each thread has a few objects, each of which is trying to access the database for its own purpose. It has a simple ORM-like model, but that really is not an important factor here. There are three potential access patterns I can think of:
    1. There could be a single connection object per application or per thread, shared between all objects (or a group of them). The object needs to be thread safe and there will be contention, but MySQL will not be hit with too many connections.
    2. Every object could initiate its own connection. The database needs to take care of concurrency (which I think MySQL can), and the design could be much simpler. There are two possibilities here: (a) each object keeps a persistent connection for its whole life, or (b) each object initiates a connection as and when needed.
    3. To reduce the contention of case 1 without creating too many sockets as in case 2, we could have group/set-based connections: more than one connection (say N), with each connection shared across M objects.
    Naturally, each pattern has a different resource cost and would work under different constraints and objectives. What criteria should I use to choose the pattern for my own application? What are some of the advantages and disadvantages of each of these patterns over the others? Are there any other patterns that are better?
    PS: I have been through these questions: "mysql, one connection vs multiple" and "MySQL with multiple threads and processes". But they don't quite answer exactly what I am trying to ask.
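
    Pattern 3 is essentially a small connection pool. The question is about C++ with Boost, but the shape of the pattern is language-agnostic, so here is a hedged sketch in Java/JDBC purely to illustrate N shared connections serving M objects; the class and parameter names are invented, and a production pool (or a Boost-based C++ equivalent) would add validation, timeouts, and reconnection.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class TinyConnectionPool {
          private final BlockingQueue<Connection> idle;

          public TinyConnectionPool(String url, String user, String pass, int n)
              throws SQLException {
            idle = new ArrayBlockingQueue<Connection>(n);
            for (int i = 0; i < n; i++) {
              idle.add(DriverManager.getConnection(url, user, pass));
            }
          }

          // Callers block here when all N connections are in use; this is
          // the contention vs. socket-count trade-off the question describes.
          public Connection acquire() throws InterruptedException {
            return idle.take();
          }

          public void release(Connection c) {
            idle.offer(c); // hand the connection back for the next object
          }
        }

    Each object brackets its database work with acquire()/release(); tuning N then trades lock contention (pattern 1 is effectively N=1) against server-side connection count (pattern 2 is effectively N=M).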

    Read the article
