Search Results

Search found 9647 results on 386 pages for 'cross compile'.


  • Java Exceptions

    - by Mandar
    This may sound awkward, but I don't understand it: why do we have compile-time errors and not compile-time exceptions in Java? What I mean is that we never say "compile-time exception"; we always call it a compile-time error. Is there a specific reason for this? Any suggestions are welcome. Thanks!
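    To make the distinction concrete, here is a minimal, hypothetical Java sketch (the class and variable names are made up, not from the original question): a compile-time error is reported by javac before the program ever runs, so there is nothing to throw or catch, whereas an exception is an object raised by a running program.

    public class CompileVsRuntime {
        public static void main(String[] args) {
            // Compile-time error: javac rejects this line before any code runs,
            // so no exception object is ever involved.
            // String s = 5; // "incompatible types" if uncommented

            // Runtime exception: this compiles fine but throws while running,
            // so it exists as an object that can be caught.
            int[] a = new int[1];
            try {
                System.out.println(a[5]);
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Caught at runtime: " + e);
            }
        }
    }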

    Read the article

  • I have a problem with the following Java code

    - by Sanjeev
    public class b {
        public static void main(String[] args) {
            byte b = 1;
            long l = 127;
            // b = b + l;  // 1: if I try this, it does not compile
            b += l;        // 2: if I try this, it compiles
            System.out.println(b);
        }
    }
    I don't understand why b = b + l; does not compile, but if I write b += l; it compiles and runs.
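    As an illustrative sketch (not part of the original question): a compound assignment carries an implicit narrowing cast back to the left-hand type, which is why the two forms behave differently.

    public class CompoundAssignment {
        public static void main(String[] args) {
            byte b = 1;
            long l = 127;

            // b = b + l;          // does not compile: b + l is promoted to long,
            //                     // and a long cannot be assigned to a byte without a cast

            b = (byte) (b + l);    // compiles once the cast is written explicitly

            b = 1;
            b += l;                // compiles: b += l behaves roughly like b = (byte) (b + l)
            System.out.println(b); // prints -128, because 128 overflows the byte range
        }
    }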

    Read the article

  • script to recursively check for and select dependencies

    - by rp.sullivan
    I have written a script that does this, but it is one of my first scripts ever, so I am sure there is a better way :) Let me know how you would go about doing this. I'm looking for a simple yet efficient way to do it. Here is some important background info (it might be a little confusing, but hopefully by the end it will make sense):
    1) This image shows the structure/location of the relevant dirs and files.
    2) The packages file located at ./config/default/config/packages is a space-delimited file. Field 5 is the "package name", which I will call $a for explanation's sake. Field 4 is the name of the dir containing the $a dir, which I will call $b. Field 1 shows whether the package is selected or not: "X" (capital x) for selected and "O" (capital o as in orange) for not selected. Here is an example of what the packages file might contain:
    ...
    X ---3------ 104.800 database gdbm 1.8.3 / base/library CROSS 0
    O -1---5---- 105.000 base libiconv 1.13.1 / base/tool CROSS 0
    X 01---5---- 105.000 base pkgconfig 0.25 / base/tool CROSS 0
    X -1-3------ 105.000 base texinfo 4.13a / base/tool CROSS DIETLIBC 0
    O -----5---- 105.000 develop duma 2_5_15 / base/development CROSS NOPARALLEL 0
    O -----5---- 105.000 develop electricfence 2_4_13 / base/development CROSS 0
    O -----5---- 105.000 develop gnupth 2.0.7 / extra/development CROSS NOPARALLEL FPIC-QUIRK 0
    ...
    3) For almost every package listed in the packages file there is a corresponding .cache file. The .cache file for package $a would be located at ./package/$b/$a/$a.cache. The .cache files contain a list of dependencies for that particular package. Here is an example of what one of the .cache files might look like. Note that the dependencies are field 2 of lines containing "[DEP]"; these dependencies are all names of packages in the packages file.
    [TIMESTAMP] 1134178701 Sat Dec 10 02:38:21 2005
    [BUILDTIME] 295 (9)
    [SIZE] 11.64 MB, 191 files
    [DEP] 00-dirtree
    [DEP] bash
    [DEP] binutils
    [DEP] bzip2
    [DEP] cf
    [DEP] coreutils
    ...
    So with all that in mind, I'm looking for a shell script that:
    - from within the "main dir", looks at the ./config/default/config/packages file, finds the "selected" packages and reads the corresponding .cache files;
    - then compiles a list of dependencies that excludes the already selected packages;
    - then selects those dependencies (by changing field 1 to X) in the ./config/default/config/packages file, and repeats until all the dependencies are met.
    Note: the script will ultimately end up in the "scripts dir" and be called from the "main dir". If this is not clear, let me know what needs clarification. For those interested, I'm playing around with T2 SDE. If you are into playing around with Linux it might be worth taking a look.

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc.. I also listed a module named “Wind (zh-CN)”, which is created by one of my friend, Jeff Zhao (zh-CN). Now I would like to use a separated post to introduce this module since I feel it brings a new async programming style in not only Node.js but JavaScript world. If you know or heard about the new feature in C# 5.0 called “async and await”, or you learnt F#, you will find the “Wind” brings the similar async programming experience in JavaScript. By using “Wind”, we can write async code that looks like the sync code. The callbacks, async stats and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s firstly back to my second post in this series. As I mentioned in that post, when we wanted to read some records from SQL Server we need to open the database connection, and then execute the query. In Node.js all IO operation are designed as async callback pattern which means when the operation was done, it will invoke a function which was taken from the last parameter. For example the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming if we need to copy some data from this database to another then we need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In the real project the logic would be more complicated. This means our application might be messed up and the business process will be fragged by many callback functions. I would like call this “Dense Callback Phobia”. This might be a challenge how to make code straightforward and easy to read, something like below. 
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 
1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. 
By using Wind there will be almost no callbacks, and the code will be very easy to understand. Currently Wind is still being developed and improved. There might be some problems, but the author, Jeff, would be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff by:
- Email: [email protected]
- Group: https://groups.google.com/d/forum/windjs
- GitHub: https://github.com/JeffreyZhao/wind/issues
Source code can be downloaded here.
Hope this helps, Shaun
All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to get maven gwt 2.0 build working

    - by Pieter Breed
    EDIT: Added some of the output of the mvn -X -e commands at the end My company is developing a GWT application. We've been using maven 2 and GWT 1.7 successfully for quite a while. We recently decided to upgrade to GWT 2.0. We've already updated the eclipse project and we are able to successfully run the application in dev-mode. We are struggling to get the application built using maven though. I'm hoping somebody can tell me what I'm doing wrong here since I'm running out of time on this. The exacty bit of the output that worries me is the 'GWT compilation skipped' message: [INFO] Copying 119 resources [INFO] [compiler:compile {execution: default-compile}] [INFO] Compiling 704 source files to K:\iCura\assessor\target\classes [INFO] [gwt:compile {execution: default}] [INFO] using GWT jars for specified version 2.0.0 [INFO] establishing classpath list (scope = compile) [INFO] com.curasoftware.assessor.Assessor is up to date. GWT compilation skipped [INFO] [jspc:compile {execution: jspc}] [INFO] Built File: \index.jsp I'm pasting the gwt-maven-plugin section below. If you need anything else please ask. <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>gwt-maven-plugin</artifactId> <version>1.2</version> <configuration> <localWorkers>1</localWorkers> <warSourceDirectory>${basedir}/war</warSourceDirectory> <logLevel>ALL</logLevel> <module>${cura.assessor.module}</module> <!-- use style OBF for prod --> <style>OBFUSCATED</style> <extraJvmArgs>-Xmx2048m -Xss1024k</extraJvmArgs> <gwtVersion>${version.gwt}</gwtVersion> <disableCastChecking>true</disableCastChecking> <soyc>false</soyc> </configuration> <executions> <execution> <goals> <!-- plugin goals --> <goal>clean</goal> <goal>compile</goal> </goals> </execution> </executions> </plugin> I executed mvn clean install -X -e and this is some of the output that I get: [DEBUG] Configuring mojo 'org.codehaus.mojo:gwt-maven-plugin:1.2:compile' --> [DEBUG] (f) disableCastChecking = true [DEBUG] (f) disableClassMetadata = false [DEBUG] (f) draftCompile = false [DEBUG] (f) enableAssertions = false [DEBUG] (f) extra = K:\iCura\assessor\target\extra [DEBUG] (f) extraJvmArgs = -Xmx2048m -Xss1024k [DEBUG] (f) force = false [DEBUG] (f) gen = K:\iCura\assessor\target\.generated [DEBUG] (f) generateDirectory = K:\iCura\assessor\target\generated-sources\gwt [DEBUG] (f) gwtVersion = 2.0.0 [DEBUG] (f) inplace = false [DEBUG] (f) localRepository = Repository[local|file://K:/iCura/lib] [DEBUG] (f) localWorkers = 1 [DEBUG] (f) logLevel = ALL [DEBUG] (f) module = com.curasoftware.assessor.Assessor [DEBUG] (f) project = MavenProject: com.curasoftware.assessor:assessor:3.5.0.0 @ K:\iCura\assessor\pom.xml [DEBUG] (f) remoteRepositories = [Repository[gwt-maven|http://gwt-maven.googlecode.com/svn/trunk/mavenrepo/], Repository[main-maven|http://www.ibiblio.org/maven2/], Repository[central|http://repo1.maven.org/maven2]] [DEBUG] (f) skip = false [DEBUG] (f) sourceDirectory = K:\iCura\assessor\src [DEBUG] (f) soyc = false [DEBUG] (f) style = OBFUSCATED [DEBUG] (f) treeLogger = false [DEBUG] (f) validateOnly = false [DEBUG] (f) warSourceDirectory = K:\iCura\assessor\war [DEBUG] (f) webappDirectory = K:\iCura\assessor\target\assessor [DEBUG] -- end configuration -- and then this: [DEBUG] SOYC has been disabled by user [DEBUG] GWT module com.curasoftware.assessor.Assessor found in K:\iCura\assessor\src [INFO] com.curasoftware.assessor.Assessor is up to date. 
GWT compilation skipped [DEBUG] com.curasoftware.assessor:assessor:war:3.5.0.0 (selected for null) [DEBUG] com.curasoftware.dto:dto-gen:jar:3.5.0.0:compile (selected for compile) ... It's finding the correct sourceDirectory. That folders has a 'com' folder within which ultimately is the source of the application organized in folders as per the package structure.

    Read the article

  • Passing a list of files to javac

    - by Robert Menteer
    How can I get the javac task to use an existing fileset? In my build.xml I have created several filesets to be used in multiple places throughout build file. Here is how they have been defined: <fileset dir = "${src}" id = "java.source.all"> <include name = "**/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.examples"> <include name = "**/Examples/**/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.tests"> <include name = "**/Tests/*.java" /> </fileset> <fileset dir = "${src}" id = "java.source.project"> <include name = "**/*.java" /> <exclude name = "**/Examples/**/*.java" /> <exclude name = "**/Tests/**/*.java" /> </fileset> I have also used macrodef to compile the java files so the javac task does not need to be repeated multiple times. The macro looks like this: <macrodef name="compile"> <attribute name="sourceref"/> <sequential> <javac srcdir = "${src}" destdir = "${build}" classpathref = "classpath" includeantruntime = "no" debug = "${debug}"> <filelist dir="." files="@{sourceref}" /> <-- email is about this </javac> </sequential> What I'm trying to do is compile only the classes that are needed for specific targets not all the targets in the source tree. And do so without having to specify the files every time. Here are how the targets are defined: <target name = "compile-examples" depends = "init"> <compile sourceref = "${toString:java.source.examples}" /> </target> <target name = "compile-project" depends = "init"> <compile sourceref = "${toString:java.source.project}" /> </target> <target name = "compile-tests" depends = "init"> <compile sourceref = "${toString:java.source.tests}" /> </target> As you can see each target specifies the java files to be compiled as a simi-colon separated list of absolute file names. The only problem with this is that javac does not support filelist. It also does not support fileset, path or pathset. I've tried using but it treats the list as a single file name. Another thing I tried is sending the reference directly (not using toString) and using but include does not have a ref attribute. SO THE QUESTION IS: How do you get the javac task to use a reference to a fileset that was defined in another part of the build file? I'm not interested in solutions that cause me to have multiple javac tasks. Completely re-writting the macro is acceptable. Changes to the targets are also acceptable provided redundant code between targets is kept to a minimum. p.s. Another problem is that fileset wants a comma separated list. I've only done a brief search for a way to convert semi-colons to commas and haven't found a way to do that. p.p.s. Sorry for the yelling but some people are too quick to post responses that don't address the subject.

    Read the article

  • how to define a field of view for the entire map for shadow?

    - by Mehdi Bugnard
    I recently added "Shadow Mapping" in my XNA games to include shadows. I followed the nice and famous tutorial from "Riemers" : http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series3/Shadow_map.php . This code work nice and I can see my source of light and shadow. But the problem is that my light source does not match the field of view that I created. I want the light covers the entire map of my game. I don't know why , but the light only affect 2-3 cubes of my map. ScreenShot: (the emission of light illuminates only 2-3 blocks and not the full map) Here is my code i create the fieldOfView for LightviewProjection Matrix: Vector3 lightDir = new Vector3(10, 52, 10); lightPos = new Vector3(10, 52, 10); Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(105, 50, 105), new Vector3(0, 1, 0)); Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 1f, 20f, 1000f); lightsViewProjectionMatrix = lightsView * lightsProjection; As you can see , my nearPlane and FarPlane are set to 20f and 100f . So i don't know why the light stop after 2 cubes. it's should be bigger Here is set the value to my custom effect HLSL in the shader file /* SHADOW VALUE */ effectWorld.Parameters["LightDirection"].SetValue(lightDir); effectWorld.Parameters["xLightsWorldViewProjection"].SetValue(Matrix.Identity * .lightsViewProjectionMatrix); effectWorld.Parameters["xWorldViewProjection"].SetValue(Matrix.Identity * arcadia.camera.View * arcadia.camera.Projection); effectWorld.Parameters["xLightPower"].SetValue(1f); effectWorld.Parameters["xAmbient"].SetValue(0.3f); Here is my custom HLSL shader effect file "*.fx" // This sample uses a simple Lambert lighting model. float3 LightDirection = normalize(float3(-1, -1, -1)); float3 DiffuseLight = 1.25; float3 AmbientLight = 0.25; uniform const float3 DiffuseColor = 1; uniform const float Alpha = 1; uniform const float3 EmissiveColor = 0; uniform const float3 SpecularColor = 1; uniform const float SpecularPower = 16; uniform const float3 EyePosition; // FOG attribut uniform const float FogEnabled ; uniform const float FogStart ; uniform const float FogEnd ; uniform const float3 FogColor ; float3 cameraPos : CAMERAPOS; texture Texture; sampler Sampler = sampler_state { Texture = (Texture); magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = mirror; AddressV = mirror; }; texture xShadowMap; sampler ShadowMapSampler = sampler_state { Texture = <xShadowMap>; magfilter = LINEAR; minfilter = LINEAR; mipfilter = LINEAR; AddressU = clamp; AddressV = clamp; }; /* *************** */ /* SHADOW MAP CODE */ /* *************** */ struct SMapVertexToPixel { float4 Position : POSITION; float4 Position2D : TEXCOORD0; }; struct SMapPixelToFrame { float4 Color : COLOR0; }; struct SSceneVertexToPixel { float4 Position : POSITION; float4 Pos2DAsSeenByLight : TEXCOORD0; float2 TexCoords : TEXCOORD1; float3 Normal : TEXCOORD2; float4 Position3D : TEXCOORD3; }; struct SScenePixelToFrame { float4 Color : COLOR0; }; float DotProduct(float3 lightPos, float3 pos3D, float3 normal) { float3 lightDir = normalize(pos3D - lightPos); return dot(-lightDir, normal); } SSceneVertexToPixel ShadowedSceneVertexShader(float4 inPos : POSITION, float2 inTexCoords : TEXCOORD0, float3 inNormal : NORMAL) { SSceneVertexToPixel Output = (SSceneVertexToPixel)0; Output.Position = mul(inPos, xWorldViewProjection); Output.Pos2DAsSeenByLight = mul(inPos, xLightsWorldViewProjection); Output.Normal = normalize(mul(inNormal, (float3x3)World)); Output.Position3D = mul(inPos, 
World); Output.TexCoords = inTexCoords; return Output; } SScenePixelToFrame ShadowedScenePixelShader(SSceneVertexToPixel PSIn) { SScenePixelToFrame Output = (SScenePixelToFrame)0; float2 ProjectedTexCoords; ProjectedTexCoords[0] = PSIn.Pos2DAsSeenByLight.x / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f; ProjectedTexCoords[1] = -PSIn.Pos2DAsSeenByLight.y / PSIn.Pos2DAsSeenByLight.w / 2.0f + 0.5f; float diffuseLightingFactor = 0; if ((saturate(ProjectedTexCoords).x == ProjectedTexCoords.x) && (saturate(ProjectedTexCoords).y == ProjectedTexCoords.y)) { float depthStoredInShadowMap = tex2D(ShadowMapSampler, ProjectedTexCoords).r; float realDistance = PSIn.Pos2DAsSeenByLight.z / PSIn.Pos2DAsSeenByLight.w; if ((realDistance - 1.0f / 100.0f) <= depthStoredInShadowMap) { diffuseLightingFactor = DotProduct(xLightPos, PSIn.Position3D, PSIn.Normal); diffuseLightingFactor = saturate(diffuseLightingFactor); diffuseLightingFactor *= xLightPower; } } float4 baseColor = tex2D(Sampler, PSIn.TexCoords); Output.Color = baseColor*(diffuseLightingFactor + xAmbient); return Output; } SMapVertexToPixel ShadowMapVertexShader(float4 inPos : POSITION) { SMapVertexToPixel Output = (SMapVertexToPixel)0; Output.Position = mul(inPos, xLightsWorldViewProjection); Output.Position2D = Output.Position; return Output; } SMapPixelToFrame ShadowMapPixelShader(SMapVertexToPixel PSIn) { SMapPixelToFrame Output = (SMapPixelToFrame)0; Output.Color = PSIn.Position2D.z / PSIn.Position2D.w; return Output; } /* ******************* */ /* END SHADOW MAP CODE */ /* ******************* */ / For rendering without instancing. technique ShadowMap { pass Pass0 { VertexShader = compile vs_2_0 ShadowMapVertexShader(); PixelShader = compile ps_2_0 ShadowMapPixelShader(); } } technique ShadowedScene { /* pass Pass0 { VertexShader = compile vs_2_0 VSBasicTx(); PixelShader = compile ps_2_0 PSBasicTx(); } */ pass Pass1 { VertexShader = compile vs_2_0 ShadowedSceneVertexShader(); PixelShader = compile ps_2_0 ShadowedScenePixelShader(); } } technique SimpleFog { pass Pass0 { VertexShader = compile vs_2_0 VSBasicTx(); PixelShader = compile ps_2_0 PSBasicTx(); } } I edited my fx file , for show you only information and functions about the shadow ;-)

    Read the article

  • Flex socket crossdomain

    - by Yonatan Betzer
    I am trying to connect to a socket server from Flex. I read in Adobe's documentation that the client automatically sends a "request-policy-file" XML element to the socket before allowing it to be opened, and should receive a policy file in response. The client sends the XML element as expected; my server receives it (on the port I want to use, port 6104) and replies on the same port with:
    <?xml version="1.0"?>
    <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
    <cross-domain-policy>
    <site-control permitted-cross-domain-policies="all"/>
    <allow-access-from domain="*" to-ports="*"/>
    </cross-domain-policy>
    To the best of my knowledge, this should be the most permissive policy available on a socket. The Flash Player logs indicate a timeout looking for the socket policy file, although I know my socket returned the response immediately. What should I do?

    Read the article

  • Regex and unicode

    - by dbr
    I have a script that parses the filenames of TV episodes (show.name.s01e02.avi for example), grabs the episode name (from the www.thetvdb.com API) and automatically renames them into something nicer (Show Name - [01x02].avi) The script works fine, that is until you try and use it on files that have Unicode show-names (something I never really thought about, since all the files I have are English, so mostly pretty-much all fall within [a-zA-Z0-9'\-]) How can I allow the regular expressions to match accented characters and the likes? Currently the regex's config section looks like.. config['valid_filename_chars'] = """0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@£$%^&*()_+=-[]{}"'.,<>`~? """ config['valid_filename_chars_regex'] = re.escape(config['valid_filename_chars']) config['name_parse'] = [ # foo_[s01]_[e01] re.compile('''^([%s]+?)[ \._\-]\[[Ss]([0-9]+?)\]_\[[Ee]([0-9]+?)\]?[^\\/]*$'''% (config['valid_filename_chars_regex'])), # foo.1x09* re.compile('''^([%s]+?)[ \._\-]\[?([0-9]+)x([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.s01.e01, foo.s01_e01 re.compile('''^([%s]+?)[ \._\-][Ss]([0-9]+)[\.\- ]?[Ee]([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.103* re.compile('''^([%s]+)[ \._\-]([0-9]{1})([0-9]{2})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.0103* re.compile('''^([%s]+)[ \._\-]([0-9]{2})([0-9]{2,3})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), ]

    Read the article

  • Compiling a class from Java code using a Process

    - by Noona
    I have this piece of code that compiles a class called tspClassName. When I compile using this code:
    Process compileProc = null;
    try {
        compileProc = Runtime.getRuntime().exec("javac -classpath ." + File.separator + "src" + File.separator + File.separator + "generated." + tspClassName + ".java -d ." + File.separator + "bin");
        // catch exception
        if (compileProc.exitValue() != 0) {
            System.out.println("Compile exit status: " + compileProc.exitValue());
            System.err.println("Compile error:" + compileProc.getErrorStream());
    it outputs this: "Compile exit status: 2 Compile error:java.io.FileInputStream@17182c1". The class tspClassName.java compiles without errors otherwise, so I am guessing it has to do with the path. In my Eclipse project, tspClassName.java resides in the package homework4.generated inside src. Is there something wrong with the path that I use in the code? Thanks
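    For what it's worth, here is a hedged sketch (hypothetical file names, not the poster's code) of how the real javac message can be surfaced: getErrorStream() returns an InputStream, so it has to be read rather than concatenated into a string, and exitValue() is only safe after the process has finished.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class CompileWithJavac {
        public static void main(String[] args) throws Exception {
            // Hypothetical source path; substitute the real file location.
            String source = "src/generated/Example.java";
            Process proc = Runtime.getRuntime()
                    .exec(new String[] { "javac", "-d", "bin", source });

            // Read the compiler's stderr so the actual javac message is visible,
            // instead of printing the stream object's toString().
            try (BufferedReader err = new BufferedReader(
                    new InputStreamReader(proc.getErrorStream()))) {
                String line;
                while ((line = err.readLine()) != null) {
                    System.err.println(line);
                }
            }

            // Wait for javac to finish before asking for the exit value.
            int exit = proc.waitFor();
            System.out.println("Compile exit status: " + exit);
        }
    }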

    Read the article

  • Security Policy not working, as3

    - by VideoDnd
    How do I get my security policy working? My parent SWF parses an XML doc and loads 2 children. It throws a 2148 security error, and only works in the Flash IDE.
    PARENT SWF
    flash.system.Security.loadPolicyFile("crossdomain.xml");
    I've referenced my security file from my SWF. I also published my parent SWF as 'network only' and put the crossdomain.xml and everything else in the same folder. I need to click on the animations and have them play from a local computer at a kiosk. Any suggestions?
    POLICY FILE
    <?xml version="1.0"?>
    <!DOCTYPE cross-domain-policy SYSTEM "/xml/dtds/cross-domain-policy.dtd">
    <cross-domain-policy>
    <site-control permitted-cross-domain-policies="master-only"/>
    <allow-access-from domain="*" to-ports="*" secure="false" />
    </cross-domain-policy>

    Read the article

  • SMB2 traffic crashes network?

    - by Phil Cross
    We've been having significant network slowdown issues over the past few weeks, primarily on Friday mornings. We run Windows 7 client machines with Windows Server 2008 R2 servers. What generally happens is that the network starts to slow down massively at 08:55 and resumes normal speeds at around 09:20. This affects everything on the network, from logging on and resetting passwords to opening programs and files. On my client machine, physical memory usage remains at around 40% (normal) and CPU usage hovers around 0-10% (idle). The servers show massive memory usage spikes that remain quite intense during the times mentioned above. I have taken several Wireshark captures, both during the slowdown and when the network operates fine. One of the main things I noticed is the increase in SMB2 entries in the Wireshark log during the slowdown.
    Record  Time  Source  Destination  Protocol  Length  Info
    382  3.976460000  10.47.35.11  10.47.32.3  SMB2  362  Create Request File: pcross\My Documents
    413  4.525047000  10.47.35.11  10.47.32.3  SMB2  146  Close Request File: pcross\My Documents
    441  5.235927000  10.47.32.3  10.47.35.11  SMB2  298  Create Response File: pcross\My Documents\Downloads
    442  5.236199000  10.47.35.11  10.47.32.3  SMB2  260  Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *; Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *
    573  6.327634000  10.47.35.11  10.47.32.3  SMB2  146  Close Request File: pcross\My Documents\Downloads
    703  7.664186000  10.47.35.11  10.47.32.3  SMB2  394  Create Request File: pcross\My Documents\Downloads\WestlandsProspectus\P24 __ P21.pdf
    These are some of the SMB2 records, from a list of a couple of hundred, which originate from my computer with the fileserver as the destination. One interesting thing to note is that the last entry in the examples above is for a PDF file. That file was not open anywhere on my computer, or on anyone else's. No folders containing the file were open either. When I took another capture while the network was running fine, there were hardly any SMB2 entries, and the ones that were displayed were mainly from Wireshark. We currently have around 800 computers, 90 Macs and 200 laptops and netbooks. Our concern is that if this traffic is happening on my computer, it may be happening on other computers too, and if so, those computers could be adding to the slow network issues. Again, this only happens during certain times. We're pretty sure it's not our antivirus. Is there anything we can do to narrow down what is initiating this SMB traffic during those particular times? Any extra advice or links to resources would be appreciated.

    Read the article

  • Is there a 64-bit Windows 7 driver for a Logitech Wingman Formula Force wheel?

    - by Bob Cross
    I've used my Logitech Wingman Formula Force wheel for more years than I can remember (I definitely had it back when I was playing Viper Racing, though). Sadly, Logitech does not appear to support the wheel in any capacity, even as an analog input device. At this point, I'm looking for any sort of driver that will take input from the device. I don't care at all about the force feedback but, if it were available, I'd be happy to take it. The target operating system is Windows 7 64-bit. EDIT: Just in case I'm not clear above: I know where the standard driver download sites are and have tried them out. The problem is that Logitech has officially end-of-lifed this wheel so their latest software specifically does not support it. Sadly, their older software that does support it is 32-bit only so I'm out of luck on that front.

    Read the article

  • Building a Social Networking website, should I have separate servers for certain parts of the site? [closed]

    - by Dylan Cross
    I have been working on building a social networking website, I'm pretty committed to this and I think I have something that could work out. I hope to be launching it January 1st, and so I have a question for server setup and such. Facebook has separate domain names/servers for their photos (and I don't know what else), so I would assume that by doing this it would help spread the server loads out. So I am wondering if it would make a very big difference in speed if I had my main server for basically everything, but had another server and such that the photos would be stored on and access them the same way that facebook does.

    Read the article

  • How can I recover a Fedora 12 installation that is showing signs of disk errors?

    - by Bob Cross
    I am currently overseas (i.e., very far from my normal library of tools) and my primary machine that would normally act as the data server in the performance test that we're trying to run is failing to boot to Fedora 12 properly. This is a machine that, as of yesterday, was booting fine. However, this morning, very strange portions of the boot process were complaining with messages such as "unexpected 0x0 in rpcbind" and "bad file descriptor" (I don't have the error in front of me - scavenged a windows installation to get onto serverfault). Eventually, the boot hung for a long time at the NFS service and then brought up what looked like the KDE login screen but neither the mouse nor keyboard functioned. In olden days, I would try to get to a point where I could manage to run fsck and pray that the bad sectors would come back into alignment just long enough for me to scrape the critical data off of the machine. However, now that we live in the future, it seems like our options in situations like this should be a little more varied. Is there a way to recover a Fedora 12 installation with bad disk sectors that won't boot properly? For completeness, I am comfortable working with bootable recovery distros-on-CD and such but I don't know which one is likely to work best with modern Fedora. In the absence of guidance, I'm frantically torrenting the Fedora 12 Live CD and DVD, hoping to try rescue mode before tomorrow morning.

    Read the article

  • How do I install yum on Redhat Enterprise 4?

    - by Bob Cross
    For historical reasons, one of the machines that I manage has a Redhat Enterprise 4 boot disk (among others). Every now and then, we have to boot into RHEL4 to bring up some of the legacy software that we support and connect to. Since it's a fringe system, the Redhat support has long since lapsed and I can't convince myself that it would be worth paying just to get RPMs that I can go and get for myself. That said, the default RHEL tools are heavily biased against letting you do exactly that. I would like to install yum and use that as my package discovery and installation. So, is there an installation guide to integrating yum with an older RHEL 4 system?

    Read the article

  • Is there a direct URL to download Java JDK updates?

    - by Bob Cross
    I have a whole set of machines that are on the other side of a firewall configured to prevent all Javascript from functioning. All of them (Linux 32 and 64 bit configurations) must be updated to Java 6 update 20. This is a problem given Sun/Oracle's URL redirector and download manager: they simply don't appear or don't work. Is there a URL to download the JDK updates and bypass the redirect? Obviously, a yum configuration that would allow for automatic updates would be optimal but I'd be happy to just have the rpm file.

    Read the article

  • How do I automatically download files to a Kindle 2 whenever its USB connection is plugged into a Windows 7 machine?

    - by Bob Cross
    The inspiration of this question is the Gadget Lab article How To Make Your Kindle Into an Automatic Instapaper. In that article, they describe an implementation of an Automator workflow that: Detects the connection of a portable disk (which the workflow assumes is the Kindle 2). Downloads the Instapaper mobi file containing the text version of the articles marked for "Read Later" to the computer. Downloads the mobi file from the computer to the Kindle 2. All of this is straightforward in terms of functionality but it is not immediately obvious how to do step 1 on a Windows box. Is the Task Scheduler the first step? If so, which event should be the trigger for the rest of the task?

    Read the article

  • Computers on network crashing

    - by Phil Cross
    We have recently upgraded our network to Windows 7 clients with Windows Server 2008 servers. The upgrade was completed by the end of September and until now has been fine (apart from minor bugs). Recently (within the last 2 weeks) we've noticed all computers on the network (around 1000) start to slow down to the point where they're unusable. It starts at about 08:45 and finishes at 09:15. Because of this, we think something may be broadcasting across the network. This happens every day, between these times. I can't use my computer at all at the slowdown peak, and looking at Task Manager's performance graphs, physical memory is hovering around 35% and CPU usage is at 0-10% (idle), yet the machine still grinds to a halt. I've looked at the DHCP server's log and can't see anything that stands out. The only change we made prior to the slowdown was installing Adobe CS6 on some computers; however, the slowdown affects computers without CS6. We have 2 physical machines, each with around 5-7 virtual machines running on them with ample memory. Does anyone have any suggestions as to what we can do to narrow down what's causing the crashes? Any help, suggestions or advice would be appreciated.

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So on the next run, you will save the time it takes to parse the tree. Do not discount that, as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached.) An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery<T>: string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); To call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations... 
Includes Let's say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy and so on. Basically, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance).IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The Include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (which accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentioned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany: it is not part of the functions that it recognizes, it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Include() only takes a single one. What to do? You could decide that you will never ever need more than, say, 20 Includes, and pass each one as a separate string in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery<T> with CompiledQuery? Then set the includes on that? 
Well, that is what I would have thought as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault(); } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. They are great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used, because the syntax is checked at compile time, but due to this limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery. The Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 && dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other function downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and force it to be re-parsed. So, the rule of thumb is to return a List<T> of objects instead. 
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql. When you load a page with pre-compiled queries the first time, you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or whether you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = (from dog in YourContext.DogSet where dog.ID == id select dog).FirstOrDefault(); } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == false || dog.Age == myParams.age) && (myParams.checkName == false || dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benefits of a pre-compiled query. 
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dog> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name = @Name "; if( age > 0 ) query = query + " and dog.Age = @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal real-world search. - No one likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results protected List<Dog> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City = @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ).ToList(); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ).ToList(); return dogs; } It is particularly useful when you start by displaying all the data and then allow for filtering. Problems: - Could lead to serious data transfer if you are not careful about your subset. - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name. So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries. - Use fully-defined pre-compiled Linq queries when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db). 
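The IncludeMany(string) helper used in the examples above is referenced but never shown in this excerpt. As a rough sketch of what such an extension method could look like — the name and the comma-separated convention come from the post, the implementation details are my assumption — it can simply split the string and chain the built-in ObjectQuery<T>.Include(path) calls:

using System;
using System.Data.Objects;

public static class ObjectQueryExtensions
{
    // Hypothetical sketch: split a comma-separated list of association paths
    // and chain the built-in Include(path) call for each one.
    public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string paths)
    {
        if (String.IsNullOrEmpty(paths))
            return query;

        foreach (string path in paths.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
        {
            // Include() returns a new ObjectQuery<T>, so keep re-assigning.
            query = query.Include(path.Trim());
        }
        return query;
    }
}

With something like this in place, a call such as oQuery.IncludeMany("Owner,FavoriteFood") expands into chained Include() calls, which is also why the EF query compiler cannot see through it inside a CompiledQuery.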
Singleton access The best way to deal with your context and entities across all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { YourEntities entity = new YourEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here. For example, let's assume that the Dog with ID == 2 has an owner whose ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select o).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Person owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issues, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and that will cause exceptions to be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them, you risk having the framework try to Attach() a NoTracking object to the context where another copy of the same object exists with tracking on. Basically, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail". 
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happen. Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, different pages will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object: static public Dog Get(int id) { return GetDog(entity,"");} static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +
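The example above stops mid-query in this excerpt. As a hedged guess at where it was heading — based on the EntitySQL plus IncludeMany approach described earlier, not the author's actual code — a parametrized Get along these lines could end up looking roughly like this:

// Hypothetical completion: a plan-cacheable EntitySQL query plus a
// caller-supplied, comma-separated list of associations to load.
static public Dog Get(int id) { return Get(id, ""); }

static public Dog Get(int id, string includePath)
{
    string query = "select value o " +
                   "from YourEntities.DogSet as o " +
                   "where o.ID = @ID";

    ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext.Instance)
        .IncludeMany(includePath); // see the IncludeMany sketch above
    oQuery.Parameters.Add(new ObjectParameter("ID", id));
    oQuery.EnablePlanCaching = true;

    return oQuery.FirstOrDefault();
}

Each page can then pass exactly the associations it needs ("Owner", "Owner,FavoriteToy", ...) while still issuing a single, plan-cached database call.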

    Read the article

  • Project compilation requires a class that is not used anywhere

    - by Susei
    When I build with ant my project that uses libgdx, I get a strange error. It says that a class com.google.gwt.dom.client.ImageElement is not found, but it isn't used at all in the code. How can I find what makes this class necessary? Even searching over the whole project doesn't give any results. It says that error is at PixmapTextureAtlas.java:16 (class source), but there is no code that uses that ImageElement class. Adding the library containing com.google.gwt.dom.client.ImageElement class helps, of course, but I'd like to figure out why this class in needed. Here is the place in ant log that tells of the actual error: Compiling 3 source files to /home/suseika/Projects/tendiwa/client/bin /home/suseika/Projects/tendiwa/client/src/org/tendiwa/client/PixmapTextureAtlas.java:16: error: cannot access ImageElement class file for com.google.gwt.dom.client.ImageElement not found Here is the whole ant log: /usr/lib/jvm/java-7-oracle/bin/java -Xmx128m -Xss2m -Dant.home=/opt/intellijidea/lib/ant -Dant.library.dir=/opt/intellijidea/lib/ant/lib -Dfile.encoding=UTF-8 -classpath /opt/intellijidea/lib/ant/lib/ant-apache-regexp.jar:/opt/intellijidea/lib/ant/lib/ant-swing.jar:/opt/intellijidea/lib/ant/lib/ant-apache-xalan2.jar:/opt/intellijidea/lib/ant/lib/ant-jdepend.jar:/opt/intellijidea/lib/ant/lib/ant-apache-resolver.jar:/opt/intellijidea/lib/ant/lib/ant-jsch.jar:/opt/intellijidea/lib/ant/lib/ant.jar:/opt/intellijidea/lib/ant/lib/ant-testutil.jar:/opt/intellijidea/lib/ant/lib/ant-launcher.jar:/opt/intellijidea/lib/ant/lib/ant-apache-bsf.jar:/opt/intellijidea/lib/ant/lib/ant-commons-logging.jar:/opt/intellijidea/lib/ant/lib/ant-netrexx.jar:/opt/intellijidea/lib/ant/lib/ant-junit.jar:/opt/intellijidea/lib/ant/lib/ant-commons-net.jar:/opt/intellijidea/lib/ant/lib/ant-apache-bcel.jar:/opt/intellijidea/lib/ant/lib/ant-antlr.jar:/opt/intellijidea/lib/ant/lib/ant-apache-log4j.jar:/opt/intellijidea/lib/ant/lib/ant-jai.jar:/opt/intellijidea/lib/ant/lib/ant-apache-oro.jar:/opt/intellijidea/lib/ant/lib/ant-jmf.jar:/opt/intellijidea/lib/ant/lib/ant-javamail.jar:/usr/lib/jvm/java-7-oracle/lib/tools.jar:/opt/intellijidea/lib/idea_rt.jar com.intellij.rt.ant.execution.AntMain2 -logger com.intellij.rt.ant.execution.IdeaAntLogger2 -inputhandler com.intellij.rt.ant.execution.IdeaInputHandler -buildfile /home/suseika/Projects/tendiwa/client/build.xml jar build.xml property path description compile ant property property property description compile mkdir javac jar ant property description _core_src_available available ontology antcall property description _core_src_available available _build_core ant property property compile echo /home/suseika/Projects/tendiwa/client mkdir javac jar jar Building jar: /home/suseika/Projects/tendiwa/MainModule.jar description tempfile mkdir Created dir: /tmp/tendiwa373148820 unjar Expanding: /home/suseika/Projects/tendiwa/MainModule.jar into /tmp/tendiwa373148820 Expanding: /home/suseika/Projects/tendiwa/tendiwa-backend.jar into /tmp/tendiwa373148820 Expanding: /home/suseika/Projects/tendiwa/tendiwa-ontology.jar into /tmp/tendiwa373148820 copy Copying 1 file to /tmp/tendiwa373148820 java Created item short_sword Created item short_bow Created item bucket Created item boot Created item steel_morningstar Created item rifle_ammo Created item handAxe Created item iron_armor Created item steel_mace Created item jacket Created item fedora Created item wooden_arrow Saving sources to /tmp/tendiwa373148820/ontology/src tendiwa/resources/SoundTypes.java tendiwa/resources/CharacterTypes.java 
tendiwa/resources/ObjectTypes.java tendiwa/resources/FloorTypes.java tendiwa/resources/ItemTypes.java tendiwa/resources/MaterialTypes.java mkdir mkdir mkdir Created dir: /tmp/tendiwa373148820/ontology/bin javac jar Building jar: /home/suseika/Projects/tendiwa/tendiwa-ontology.jar echo Resources source code generated ant property property compile echo /home/suseika/Projects/tendiwa/client mkdir javac jar jar jar Building jar: /home/suseika/Projects/tendiwa/MainModule.jar mkdir javac /home/suseika/Projects/tendiwa/client/build.xml:25: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1150) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:912) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399) at org.apache.tools.ant.Project.executeTarget(Project.java:1368) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1251) at org.apache.tools.ant.Main.runBuild(Main.java:809) at org.apache.tools.ant.Main.startAnt(Main.java:217) at org.apache.tools.ant.Main.start(Main.java:180) at org.apache.tools.ant.Main.main(Main.java:268) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) /home/suseika/Projects/tendiwa/client/build.xml (25:46)'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds Compiling 3 source files to /home/suseika/Projects/tendiwa/client/bin /home/suseika/Projects/tendiwa/client/src/org/tendiwa/client/PixmapTextureAtlas.java:16: error: cannot access ImageElement class file for com.google.gwt.dom.client.ImageElement not found 1 error /home/suseika/Projects/tendiwa/client/build.xml:25: Compile failed; see the compiler error output for details. 
at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1150) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:912) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399) at org.apache.tools.ant.Project.executeTarget(Project.java:1368) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1251) at org.apache.tools.ant.Main.runBuild(Main.java:809) at org.apache.tools.ant.Main.startAnt(Main.java:217) at org.apache.tools.ant.Main.start(Main.java:180) at org.apache.tools.ant.Main.main(Main.java:268) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) /home/suseika/Projects/tendiwa/client/build.xml:25: Compile failed; see the compiler error output for details. at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1150) at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:912) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399) at org.apache.tools.ant.Project.executeTarget(Project.java:1368) at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) at org.apache.tools.ant.Project.executeTargets(Project.java:1251) at org.apache.tools.ant.Main.runBuild(Main.java:809) at org.apache.tools.ant.Main.startAnt(Main.java:217) at org.apache.tools.ant.Main.start(Main.java:180) at org.apache.tools.ant.Main.main(Main.java:268) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30) Ant build completed with 3 errors one warning in 4s at 10/30/13 3:09 AM Here is a part of ant file where this error appears: <path id="tendiwa.jars"> <fileset dir="../libs"> <include name="**/*.jar"/> </fileset> <pathelement path="../tendiwa-backend.jar"/> <pathelement path="../tendiwa-ontology.jar"/> <!--<fileset dir="/usr/share/java" includes="gwt*.jar"/>--> </path> <target name="compile"> <ant dir="../MainModule" target="jar"/> <mkdir 
dir="bin"/> <javac destdir="bin" failonerror="true"> <classpath> <path refid="tendiwa.jars"/> <!--temporary--> <pathelement path="../tendiwa-ontology.jar"/> <!--temporary--> <pathelement path="../MainModule.jar"/> <fileset dir="../libs" includes="**/*.jar"/> </classpath> <src> <pathelement path="Desktop/src"/> <pathelement path="src"/> </src> </javac> </target>

    Read the article

  • Python regex look-behind requires fixed-width pattern

    - by invictus
Hi. When trying to extract the title of an HTML page I have always used the following regex: (?<=<title.*>)([\s\S]*)(?=</title>) Which will extract everything between the tags in a document and ignore the tags themselves. However, when trying to use this regex in Python it raises the following Exception: Traceback (most recent call last): File "test.py", line 21, in <module> pattern = re.compile('(?<=<title.*>)([\s\S]*)(?=</title>)') File "C:\Python31\lib\re.py", line 205, in compile return _compile(pattern, flags) File "C:\Python31\lib\re.py", line 273, in _compile p = sre_compile.compile(pattern, flags) File "C:\Python31\lib\sre_compile.py", line 495, in compile code = _code(p, flags) File "C:\Python31\lib\sre_compile.py", line 480, in _code _compile(code, p.data, flags) File "C:\Python31\lib\sre_compile.py", line 115, in _compile raise error("look-behind requires fixed-width pattern") sre_constants.error: look-behind requires fixed-width pattern The code I am using is: pattern = re.compile('(?<=<title.*>)([\s\S]*)(?=</title>)') m = pattern.search(f) If I do some minimal adjustments it works: pattern = re.compile('(?<=<title>)([\s\S]*)(?=</title>)') m = pattern.search(f) This will, however, not take into account potential HTML titles that for some reason have attributes or similar. Anyone know a good workaround for this issue? Any tips are appreciated.

    Read the article

  • TFS 2010 Build gives WorkItemStore error when Create Work Item on Failure is enabled

    - by Derek Morrison
    I'm using TFS 2010 Build. I have a build definition that uses the DefaultTemplate.xaml template that's stock in TFS 2010, and the Create Work Item on Failure property is set to True in the build definition. I deliberately made a change in my project that breaks the build. When the build runs, I see the compilation error reflected in the TFS Build log within Visual Studio, but I get the error "Value cannot be null. Parameter name: WorkItemStore" when TFS Build next tries to generate a Work Item for the broken build. I tracked down the activity in DefaultTemplate.xaml (see the rather lengthy path to it below) where the Work Item is created for a broken build, and I see it uses the Microsoft.TeamFoundation.Build.Workflow.Activities.OpenWorkItem class to create the Work Item. The appropriate values seemed to be filled out in the Properties window for the Create Work Item activity, so I don't see where I can pass WorkItemStore to it and I don't even know appropriate values for this setting. Path to the Create Work Item activity: Process Sequence Run On Agent Try Compile, Test, and Associate Changesets and Work Items Sequence Compile, Test, and Associate Changesets and Work Items Try Compile and Test Compile and Test For Each Configuration in BuildSettings.PlatformConfigurations Compile and Test for Configuration If BuildSettings.HasProjectsToBuild For Each Project in BuildSettings.ProjectsToBuild Try to Compile the Project Handle Exception If CreateWorkItem Create Work Item for non-Shelveset Builds Create Work Item

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >