Search Results

Search found 3836 results on 154 pages for 'argument'.


  • Using rowDiffs() to calculate difference in values in matrix

    - by user1723765
    I'm using the rowDiffs() command to calculate the step by step difference in 116 rows in a matrix. I get the following error: Error in r[i1] - r[-length(r):-(length(r) - lag + 1L)] : non-numeric argument to binary operator I have no idea why this is happening. I could take the diff() separately for each row and it would work. Any ideas? Here's the data: https://dl.dropbox.com/u/22681355/data.csv Code: a=rowDiffs(data)
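
    A minimal sketch of the most common cause, assuming the CSV was loaded with read.csv() (which returns a data frame, possibly with character or factor columns) and that rowDiffs() comes from the matrixStats package: coerce the data to a numeric matrix before differencing.

      library(matrixStats)              # rowDiffs() lives here (assumption)

      data <- read.csv("data.csv")      # read.csv() returns a data.frame, not a matrix
      str(data)                         # check for character/factor columns first
      m    <- data.matrix(data)         # coerce every column to numeric
      a    <- rowDiffs(m)               # lag-1 differences within each row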


  • will be assumed to return id

    - by Andrew_E
    I imported the oourafft.h and oourafft.m files, but I get a strange warning when initializing Ooura: OouraFFT * myFFT = [OouraFFT initForSignalsOfLength:1024 numberOfWindows:10]; The compiler warning reads: "OouraFFT may not respond to +initForSignalsOfLength:numberOfWindows. Messages without matching method signature will be assumed to return 'id' and accept argument." I think it is some kind of problem with how the .h file is imported.


  • What kind of data type is this?

    - by mystify
    In a class header I have seen something like this: enum { kAudioSessionProperty_PreferredHardwareSampleRate = 'hwsr', // Float64 kAudioSessionProperty_PreferredHardwareIOBufferDuration = 'iobd' // Float32 }; Now I wonder what data type kAudioSessionProperty_PreferredHardwareSampleRate actually is. This looks like plain old C, but in Objective-C I would write @"hwsr" if I wanted to make it a string. I want to pass such a "constant" or "enum thing" as an argument to a method.
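
    A minimal C sketch (hypothetical constant name) of what is going on: 'hwsr' is a multi-character character constant, so the enum member is an integer that packs the four ASCII bytes, not a string. In the AudioSession API such constants are typically typed as UInt32 property IDs, so you pass the enum value itself to the method, not @"hwsr".

      #include <stdio.h>

      /* Multi-character constant: an int whose bytes are 'h','w','s','r'.
         The exact packing is implementation-defined, but Apple's compilers
         put the first character in the most significant byte. */
      enum {
          kSampleRateProperty = 'hwsr'   /* hypothetical name, for illustration */
      };

      int main(void) {
          printf("%d\n",     kSampleRateProperty);   /* the packed integer value */
          printf("0x%08X\n", kSampleRateProperty);   /* typically 0x68777372     */
          return 0;
      }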


  • mysql_query() returns true, but mysql_num_rows() and mysql_fetch_array() give "not a valid result resource"

    - by zlance4012
    Here is the code in question:

    -----From index.php-----
    require_once('includes/DbConnector.php');
    // Create an object (instance) of the DbConnector
    $connector = new DbConnector();
    // Execute the query to retrieve articles
    $query1 = "SELECT id, title FROM articles ORDER BY id DESC LIMIT 0,5";
    $result = $connector->query($query1);
    echo "vardump1:"; var_dump($result); echo "\n";
    /(!line 17!)/ echo "Number of rows in the result of the query:".mysql_num_rows($result)."\n";
    // Get an array containing the results.
    // Loop for each item in that array
    while ($row = $connector->fetchArray($result)){
    echo ' ';
    echo $row['title'];
    echo ' ';
    -----end index.php-----

    -----included DbConnector.php-----
    $settings = SystemComponent::getSettings();
    // Get the main settings from the array we just loaded
    $host = $settings['dbhost'];
    $db = $settings['dbname'];
    $user = $settings['dbusername'];
    $pass = $settings['dbpassword'];
    // Connect to the database
    $this->link = mysql_connect($host, $user, $pass);
    mysql_select_db($db);
    register_shutdown_function(array(&$this, 'close'));
    } //end constructor
    //* Function: query, Purpose: Execute a database query *
    function query($query) {
    echo "Query Statement: ".$query."\n";
    $this->theQuery = $query;
    return mysql_query($query, $this->link) or die(mysql_error());
    }
    //* Function: fetchArray, Purpose: Get array of query results *
    function fetchArray($result) {
    echo "<|"; var_dump($result); echo "| \n";
    /(!line 50!)/$res= mysql_fetch_array($result) or die(mysql_error());
    echo $res['id']."-".$res['title']."-".$res['imagelink']."-".$res['text'];
    return $res;
    }
    -----end DbConnector.php-----

    -----Output-----
    Query Statement: SELECT id, title FROM articles ORDER BY id DESC LIMIT 0,5
    vardump1:bool(true)
    PHP Error Message
    Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in /path to/index.php on line 17
    Number of rows in the result of the query:
    <|bool(true) |
    PHP Error Message
    Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /path to/DbConnector.php on line 50
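
    A minimal sketch of one likely culprit (a hypothetical rewrite, assuming standard PHP operator precedence): in "return mysql_query($query, $this->link) or die(mysql_error());" the whole "or" expression is what gets returned, so query() hands back a boolean (true) instead of the result resource that mysql_num_rows() and mysql_fetch_array() expect. Capturing the resource first avoids that.

      // Inside DbConnector (sketch): return the resource itself, not "query() or die()".
      function query($query)
      {
          $this->theQuery = $query;
          $result = mysql_query($query, $this->link);   // resource for SELECT, or false on failure
          if ($result === false) {
              die(mysql_error($this->link));
          }
          return $result;                               // var_dump() should now show resource(...)
      }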


  • regarding object recycling

    - by ajaycv
    I have a question: is anything wrong with the code below? ArrayList tempList2 = new ArrayList(); tempList2 = getXYZ(tempList1, tempList2); //method getXYZ ArrayList getXYZ(ArrayList tempList1, ArrayList tempList2) { //does some logic and adds objects into tempList2 return tempList2; } The code executes, but it seems that by passing tempList2 as an argument to getXYZ, the object is being recycled. My question is: is recycling the tempList2 ArrayList object like this correct? (See the sketch below.)
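
    A minimal, self-contained sketch (hypothetical names) of what the snippet above does: Java passes object references by value, so the parameter inside getXYZ and tempList2 outside refer to the same ArrayList. Returning it and assigning the result back is harmless; it is the same object, nothing is copied.

      import java.util.ArrayList;
      import java.util.Arrays;
      import java.util.List;

      public class RecyclingDemo {

          // Fills 'target' from 'source' and returns the very same list object.
          static List<String> getXYZ(List<String> source, List<String> target) {
              for (String s : source) {        // stands in for "does some logic"
                  target.add(s.toUpperCase());
              }
              return target;
          }

          public static void main(String[] args) {
              List<String> tempList1 = new ArrayList<>(Arrays.asList("a", "b"));
              List<String> tempList2 = new ArrayList<>();

              List<String> returned = getXYZ(tempList1, tempList2);

              System.out.println(returned == tempList2);  // true: one and the same object
              System.out.println(tempList2);              // [A, B] - filled via the parameter
          }
      }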


  • Look up or insert new element to string list in Haskell

    - by nightscream
    I want a function that takes a String and a list as arguments and checks whether that element is already in the list: if it is, it returns the same list; if it isn't, it adds the element and returns the new list. I'm a beginner with Haskell, so here is what I have tried with no success: check :: String -> [String] -> [String] check x [] = [] ++ [x] check x (y:xs) | x == y = (y:xs) | otherwise = check x xs Can someone point me the way? Thanks.
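
    A minimal sketch of a working version. The attempt above loses elements because the otherwise branch returns check x xs without putting y back on the front; either keep y when recursing, or use elem and append.

      check :: String -> [String] -> [String]
      check x []     = [x]
      check x (y:ys)
        | x == y     = y : ys          -- already present: return the list unchanged
        | otherwise  = y : check x ys  -- keep y and keep searching the tail

      -- Equivalent one-liner using elem:
      check' :: String -> [String] -> [String]
      check' x ys = if x `elem` ys then ys else ys ++ [x]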


  • How can I view the source code for a particular `predict` function?

    - by merlin2011
    Based on the documentation, predict is a polymorphic function in R and a different function is actually called depending on what is passed as the first argument. However, the documentation does not give any information about the names of the functions that predict actually invokes for any particular class. Normally, one could type the name of a function to get its source, but this does not work with predict. If I want to view the source code for the predict function when invoked on objects of the type glmnet, what is the easiest way?
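
    A minimal sketch of the usual approach, assuming the glmnet package is installed: predict is an S3 generic, so dispatch goes to a method named predict.<class>. Methods are often not exported, so list them and then pull up the source with getAnywhere() or the triple-colon operator.

      library(glmnet)

      methods(predict)                 # all predict.* methods currently visible
      methods(class = "glmnet")        # methods defined for objects of class "glmnet"
      getAnywhere("predict.glmnet")    # prints the source even if the method is unexported
      glmnet:::predict.glmnet          # same thing, via the ::: operator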


  • How to disable proxy requests once a server has been added to spammers "open proxy" list?

    - by Matt
    Hello all, I've just started in a new company, and have been going over the setup of their Apache webserver conf files... only to find that they've had their apache servers set up as open proxies available to all the world for the last two months. I've already set ProxyRequests Off in the httpd.conf file and restarted the web server, but the access log file is still growing at a horrendous rate (about a gig a day). I noticed that another question was posted on here about this (http://serverfault.com/questions/63715/apache-hit-with-proxy-request), but their access log was supposedly returning 404 errors, while mine appears to be returning 403 and 404 codes... Is this correct? Here are a few lines out of my access log: 87.118.118.124 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.c5interlude.ru/torrent/viewtopic.php?p=2501 HTTP/1.0" 404 219 "http://www.c5interlude.ru/torrent/viewtopic.php?p=2501" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)" 117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=300x250&section=790074 HTTP/1.0" 404 200 "http://www.newbiegamer.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Alexa Toolbar)" 122.224.55.222 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1" 403 214 "http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar" "Mozilla/4.0" 58.55.21.40 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.cpx24.com/ad1.js HTTP/1.0" 404 204 "http://thebighits.com/?id=aibux" "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)" 122.226.223.188 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.reduxmedia.com/st?ad_type=iframe&ad_size=160x600&section=798636 HTTP/1.0" 404 200 "http://www.gvvu.com" "Mozilla/4.0 (compatible; MSIE 5.5; AOL 6.0; Windows 98; Win 9x 4.90)" 84.51.109.31 - - [16/Mar/2010:10:56:36 -0400] "GET http://www.kslp.ru/forum/index.php HTTP/1.0" 404 213 "http://www.kslp.ru/forum/index.php" "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 6.0 ; .NET CLR 2.0.50215; SL Commerce Client v1.0; Tablet PC 2.0" 122.224.48.49 - - [16/Mar/2010:10:56:36 -0400] "GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1" 403 214 "http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe" "Mozilla/4.0" 117.41.184.27 - - [16/Mar/2010:10:56:36 -0400] "GET http://ad.xtendmedia.com/st?ad_type=iframe&ad_size=728x90&section=657624 HTTP/1.0" 404 200 "http://www.raiseanimals.com" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98; Alexa Toolbar)" And my corresponding error log entries: [Tue Mar 16 10:56:36 2010] [error] [client 87.118.118.124] File does not exist: C:/public_html/torrent, referer: http://www.c5interlude.ru/torrent/viewtopic.php?p=2501 [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.newbiegamer.com [Tue Mar 16 10:56:36 2010] [error] [client 122.224.55.222] (22)Invalid argument: Cannot map GET http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar HTTP/1.1 to file, referer: http://www.188woool.net/\xb4\xf3\xd4\xcb\xb4\xab\xca\xc0.rar [Tue Mar 16 10:56:36 2010] [error] [client 58.55.21.40] File does not exist: C:/public_html/ad1.js, referer: http://thebighits.com/?id=aibux [Tue Mar 16 10:56:36 2010] [error] [client 122.226.223.188] File does not exist: C:/public_html/st, referer: http://www.gvvu.com [Tue Mar 16 10:56:36 2010] [error] [client 84.51.109.31] File does not exist: C:/public_html/forum, referer: 
http://www.kslp.ru/forum/index.php [Tue Mar 16 10:56:36 2010] [error] [client 122.224.48.49] (22)Invalid argument: Cannot map GET http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe HTTP/1.1 to file, referer: http://www1.vip218.com/\xb2\xca\xba\xe7\xb4\xab\xca\xc0.exe [Tue Mar 16 10:56:36 2010] [error] [client 117.41.184.27] File does not exist: C:/public_html/st, referer: http://www.raiseanimals.com Does this in fact look like the server is blocking them correctly, and is there anything else that I could do better to cut down on my access log size? (perhaps block these requests from the server completely?) Thanks! Matt
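
    A minimal httpd.conf sketch (Apache 2.2-era directives; adjust for your version) of a belt-and-braces setup: keep ProxyRequests Off, explicitly deny the proxy handler, and answer proxy-style request lines with a plain 403 before they hit the filesystem. Note that these requests will still appear in the access log unless the log itself is filtered as well.

      # Forward proxying stays off (mod_proxy)
      ProxyRequests Off

      # Even if proxying gets re-enabled somewhere, refuse all clients
      <Proxy *>
          Order deny,allow
          Deny from all
      </Proxy>

      # Reject request lines that carry an absolute http:// URL
      # (typical of proxy probes) with a 403 (mod_rewrite)
      RewriteEngine On
      RewriteCond %{THE_REQUEST} ^[A-Z]+\s+http:// [NC]
      RewriteRule .* - [F]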


  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Sites (a.k.a. WAWS) as well as Windows Azure Cloud Services (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, so in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark the files as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Sites and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it is simple, easy to learn and deploy, and uses a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, while "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***   In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other application need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that uses the common library must reference all the necessary assemblies that they need, and if we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise it won’t compile. To improve on this, we use a tool from Microsoft Research called ILMerge (Download from here). It can be used to merge several assemblies into one assembly that contains all types. If you haven’t used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this: "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrar2.dll ClassLibrary3.dll This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc..) from ClassLibrary1.dll, using the /attr switch. For more info on ILMerge command line tool, see the above link. This approach works, but requires a little bit too much knowledge for the developers creating builds, therefor I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to setup a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.   Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly. /// <summary> /// Activity for merging a list of assembies into one, using ILMerge /// </summary> public sealed class ILMergeActivity : BaseCodeActivity { /// <summary> /// A list of file paths to the assemblies that should be merged /// </summary> [RequiredArgument] public InArgument<IEnumerable<string>> InputAssemblies { get; set; } /// <summary> /// Full path to the generated assembly /// </summary> [RequiredArgument] public InArgument<string> OutputFile { get; set; } /// <summary> /// Which input assembly that the attibutes for the generated assembly should be copied from. /// Optional. If not specified, the first input assembly will be used /// </summary> public InArgument<string> AttributeFile { get; set; } /// <summary> /// Kind of assembly to generate, dll or exe /// </summary> public InArgument<TargetKindEnum> TargetKind { get; set; } // If your activity returns a value, derive from CodeActivity<TResult> // and return the value from the Execute method. 
protected override void Execute(CodeActivityContext context) { string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " ")); TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context)); ILMerge m = new ILMerge(); m.SetInputAssemblies(InputAssemblies.Get(context).ToArray()); m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe; m.OutputFile = OutputFile.Get(context); m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context)) ? AttributeFile.Get(context) : InputAssemblies.Get(context).First(); m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0,2), RuntimeEnvironment.GetRuntimeDirectory()); m.Merge(); TrackMessage(context, "Generated " + m.OutputFile); } } [Browsable(true)] public enum TargetKindEnum { Dll, Exe } NB: The activity inherits from a BaseCodeActivity class which is an internal helper class which contains some methods and properties useful for moste custom activities. In this case, it uses the TrackeMessage method for writing to the build log. You either need to remove the TrackMessage method calls, or implement this yourself (which is not very hard… ) The custom activity has the following input arguments: InputAssemblies A list with the (full) paths to the assemblies to merge OutputFile The name of the resulting merged assembly AttributeFile Which assembly to use as the template for the attribute of the merged assembly. This argument is optional and if left blank, the first assembly in the input list is used TargetKind Decides what type of assembly to create, can be either a dll or an exe Of course, there are more switches to the ILMerge.exe, and these can be exposed as input arguments as well if you need it. To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has the following custom process parameters:   The Assemblies To Merge argument is passed into a FindMatchingFiles activity to located all assemblies that are located in the BinariesDirectory folder after the compilation has been performed by Team Build. Here is the complete sequence of activities that performs the merge operation. It is located at the end of the Try, Compile, Test and Associate… sequence: It splits the AssembliesToMerge parameter and appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges two assemblies into a new one:     And the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies:


  • Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism For the latest code go to http://rapidioc.codeplex.com/ Before getting too involved in generating the proxy, I thought it would be worth while going through the intended design, this is important as the next step is to start creating the constructors for the proxy. Each proxy derives from a specified type The proxy has a corresponding constructor for each of the base type constructors The proxy has overrides for all methods and properties marked as Virtual on the base type For each overridden method, there is also a private method whose sole job is to call the base method. For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method. The following class diagram shows the main classes and interfaces involved in the interception process. I’ll go through each of them to explain their place in the overall proxy.   IProxy Interface The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy interface to be cast as an IProxy and then simply add Interceptors by calling it’s AddInterceptor method. This is done internally within the proxy building process so the consumer of the API doesn’t need knowledge of this. IInterceptor Interface The IInterceptor interface has one method: Handle. The handle method accepts a IMethodInvocation parameter which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section MethodInvocation. IMethodInvocation Interface & MethodInvocation class The MethodInvocation will contain one main method and multiple helper properties. Continue Method The method Continue() has two functions hidden away from the consumer. When Continue is called, if there are multiple Interceptors, the next Interceptors Handle method is called. If all Interceptors Handle methods have been called, the Continue method then calls the base class method. Properties The MethodInvocation will contain multiple helper properties including at least the following: Method Name (Read Only) Method Arguments (Read and Write) Method Argument Types (Read Only) Method Result (Read and Write) – this property remains null if the method return type is void Target Object (Read Only) Return Type (Read Only) DefaultInterceptor class The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code: DefaultInterceptor namespace Rapid.DynamicProxy.Interception {     /// <summary>     /// Default interceptor for the proxy.     /// </summary>     /// <typeparam name="TBase">The base type.</typeparam>     public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class     {         /// <summary>         /// Handles the specified method invocation.         /// </summary>         /// <param name="methodInvocation">The method invocation.</param>         public void Handle(IMethodInvocation<TBase> methodInvocation)         {             methodInvocation.Continue();         }     } } This is automatically created in the proxy and is the first interceptor that each method override calls. It’s sole function is to ensure that if no interceptors have been added, the base method is still called. 
Custom Interceptor Example A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Coninue() method. This means that any overriden methods within the user class will run within a transaction scope. MyInterceptor public class MyInterceptor : IInterceptor<User<int, IRepository>> {     public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation)     {         if (methodInvocation.Name == "set_FirstName")         {             Logger.Log("First name seting to: " + methodInvocation.Arguments[0]);         }         using (TransactionScope scope = new TransactionScope())         {             methodInvocation.Continue();         }         if (methodInvocation.Name == "set_FirstName")         {             Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]);         }     } } Overridden Method Example To show a taster of what the overridden methods on the proxy would look like, the setter method for the property FirstName used in the above example would look something similar to the following (this is not real code but will look similar): set_FirstName public override void set_FirstName(string value) {     set_FirstNameBaseMethodDelegate callBase =         new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod);     object[] arguments = new object[] { value };     IMethodInvocation<User<IRepository>> methodInvocation =         new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors);          this.Interceptors[0].Handle(methodInvocation); } As you can see, a delegate instance is created which calls to a private method on the class, the private method calls the base method and would look like the following: calls base setter private void set_FirstNameProxyGetBaseMethod(string value) {     base.set_FirstName(value); } The delegate is invoked when methodInvocation.Continue() is called within an interceptor. The set_FirstName parameters are loaded into an object array. The current instance, delegate, method name and method arguments are passed into the methodInvocation constructor (there will be more data not illustrated here passed in when created including method info, return types, argument types etc.) The DefaultInterceptor’s Handle method is called with the methodInvocation instance as it’s parameter. Obviously methods can have return values, ref and out parameters etc. in these cases the generated method override body will be slightly different from above. I’ll go into more detail on these aspects as we build them. Conclusion I hope this has been useful, I can’t guarantee that the proxy will look exactly like the above, but at the moment, this is pretty much what I intend to do. Always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what’s going on. Cheers, Sean.
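
    A minimal, hypothetical sketch (not the actual Rapid.DynamicProxy source) of the Continue() behaviour described above: each call either hands control to the next interceptor in the chain or, once the chain is exhausted, invokes the delegate that calls the base method.

      using System;
      using System.Collections.Generic;

      public interface IInterceptor
      {
          void Handle(MethodInvocation invocation);
      }

      public class MethodInvocation
      {
          private readonly IList<IInterceptor> interceptors;
          private readonly Func<object[], object> callBase;  // wraps the private base-calling method
          private int next = 1;  // index 0 is the DefaultInterceptor, invoked by the proxy itself

          public MethodInvocation(IList<IInterceptor> interceptors,
                                  Func<object[], object> callBase,
                                  object[] arguments)
          {
              this.interceptors = interceptors;
              this.callBase = callBase;
              Arguments = arguments;
          }

          public object[] Arguments { get; private set; }
          public object Result { get; set; }

          public void Continue()
          {
              if (next < interceptors.Count)
                  interceptors[next++].Handle(this);  // run the next custom interceptor
              else
                  Result = callBase(Arguments);       // chain exhausted: call the base method
          }
      }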


  • SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Introduction – Day 1 of 31

    - by pinaldave
    List of all the Interview Questions and Answers Series blogs Posts covering interview questions and answers always make for interesting reading.  Some people like the subject for their helpful hints and thought provoking subject, and others dislike these posts because they feel it is nothing more than cheating.  I’d like to discuss the pros and cons of a Question and Answer format here. Interview Questions and Answers are Helpful Just like blog posts, books, and articles, interview Question and Answer discussions are learning material.  The popular Dummy’s books or Idiots Guides are not only for “dummies,” but can help everyone relearn the fundamentals.  Question and Answer discussions can serve the same purpose.  You could call this SQL Server Fundamentals or SQL Server 101. I have administrated hundreds of interviews during my career and I have noticed that sometimes an interviewee with several years of experience lacks an understanding of the fundamentals.  These individuals have been in the industry for so long, usually working on a very specific project, that the ABCs of the business have slipped their mind. Or, when a college graduate is looking to get into the industry, he is not expected to have experience since he is just graduated. However, the new grad is expected to have an understanding of fundamentals and theory.  Sometimes after the stress of final exams and graduation, it can be difficult to remember the correct answers to interview questions, though. An interview Question and Answer discussion can be very helpful to both these individuals.  It is simply a way to go back over the building blocks of a topic.  Many times a simple review like this will help “jog” your memory, and all those previously-memorized facts will come flooding back to you.  It is not a way to re-learn a topic, but a way to remind yourself of what you already know. A Question and Answer discussion can also be a way to go over old topics in a more interesting manner.  Especially if you have been working in the industry, or taking lots of classes on the topic, everything you read can sound like a repeat of what you already know.  Going over a topic in a new format can make the material seem fresh and interesting.  And an interested mind will be more engaged and remember more in the end. Interview Questions and Answers are Harmful A common argument against a Question and Answer discussion is that it will give someone a “cheat sheet.” A new guy with relatively little experience can read the interview questions and answers, and then memorize them. When an interviewer asks him the same questions, he will repeat the answers and get the job. Honestly, is he good hire because he memorized the interview questions? Wouldn’t it be better for the interviewer to hire someone with actual experience?  The answer is not as easy as it seems – there are many different factors to be considered. If the interviewer is asking fundamentals-related questions only, he gets the answers he wants to hear, and then hires this first candidate – there is a good chance that he is hiring based on personality rather than experience.  If the interviewer is smart he will ask deeper questions, have more than one person on the interview team, and interview a variety of candidates.  If one interviewee happens to memorize some answers, it usually doesn’t mean he will automatically get the job at the expense of more qualified candidates. 
Another argument against interview Question and Answers is that it will give candidates a false sense of confidence, and that they will appear more qualified than they are. Well, if that is true, it will not last after the first interview when the candidate is asked difficult questions and he cannot find the answers in the list of interview Questions and Answers.  Besides, confidence is one of the best things to walk into an interview with! In today’s competitive job market, there are often hundreds of candidates applying for the same position.  With so many applicants to choose from, interviewers must make decisions about who to call back and who to hire based on their gut feeling.  One drawback to reading an interview Question and Answer article is that you might sound very boring in your interview – saying the same thing as every single candidate, and parroting answers that sound like someone else wrote them for you – because they did.  However, it is definitely better to go to an interview prepared, just make sure that you give a lot of thought to your answers to make them sound like your own voice.  Remember that you will be hired based on your skills as well as your personality, so don’t think that having all the right answers will make get you hired.  A good interviewee will be prepared, confident, and know how to stand out. My Opinion A list of interview Questions and Answers is really helpful as a refresher or for beginners. To really ace an interview, one needs to have real-world, hands-on experience with SQL Server as well. Interview questions just serve as a starter or easy read for experienced professionals. When I have to learn new technology, I often search online for interview questions and get an idea about the breadth and depth of the technology. Next Action I am going to write about interview Questions and Answers for next 30 days. I have previously written a series of interview questions and answers; now I have re-written them keeping the latest version of SQL Server and current industry progress in mind. If you have faced interesting interview questions or situations, please write to me and I will publish them as a guest post. If you want me to add few more details, leave a comment and I will make sure that I do my best to accommodate. Tomorrow we will start the interview Questions and Answers series, with a few interesting stories, best practices and guest posts. We will have a prize give-away and other awards when the series ends. List of all the Interview Questions and Answers Series blogs Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • How to launch LOV and Date dialogs using the keyboard

    - by frank.nimphius
    Using the ADF Faces JavaScript API, developers can listen for user keyboard input in input components to filter or respond to specific characters or key combinations. The JavaScript shown below can be used with an af:clientListener tag on af:inputListOfValues or af:inputDate. At runtime, the JavaScript code determines the component type it is executed on and either opens the LOV dialog or the input date popup.

      <af:resource type="javascript">
        /**
         * function to launch dialog if cursor is in LOV or
         * input date field
         * @param evt argument to capture the AdfUIInputEvent object
         */
        function launchPopUpUsingF8(evt) {
          var component = evt.getSource();
          if (evt.getKeyCode() == AdfKeyStroke.F8_KEY) {
            //check for input LOV component
            if (component.getTypeName() == 'AdfRichInputListOfValues') {
              AdfLaunchPopupEvent.queue(component, true);
              //event is handled on the client. Server does not need
              //to be notified
              evt.cancel();
            }
            //check for input Date component
            else if (component.getTypeName() == 'AdfRichInputDate') {
              //the inputDate af:popup component ID always is ::pop
              var popupClientId = component.getAbsoluteLocator() + '::pop';
              var popup = component.findComponent(popupClientId);
              var hints = {align : AdfRichPopup.ALIGN_END_AFTER,
                           alignId : component.getAbsoluteLocator()};
              popup.show(hints);
              //event is handled on the client. Server does not need
              //to be notified
              evt.cancel();
            }
          }
        }
      </af:resource>

    The af:clientListener that calls the JavaScript is added as shown below.

      <af:inputDate label="Label 1" id="id1">
         <af:clientListener method="launchPopUpUsingF8" type="keyDown"/>
      </af:inputDate>

    As you may have noticed, the call to open the popup is different for the af:inputListOfValues and the af:inputDate. For the list of values component, an ADF Faces AdfLaunchPopupEvent is queued with the LOV component passed as an argument. Launching the input date popup is a bit more complicated and requires you to look up the implicit popup dialog and open it manually. Because the popup is opened manually using the show() method on the af:popup component, the alignment of the dialog also needs to be handled manually. For this, the popup component specifies alignment hints; the ALIGN_END_AFTER hint aligns the dialog at the end of and below the date component. The alignId hint specifies the component the dialog is positioned relative to, which of course should be the input date field.
The ADF Faces JavaScript API and how to use it is further explained in the Using JavaScript in ADF Faces Rich Client Applications whitepaper available from the Oracle Technology Network (OTN) http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf An ADF Insider recording about JavaScript in ADF Faces can be watched from here http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-insider-javascript/adf-insider-javascript.html


  • Gnome Do not Launching

    - by PyRulez
    When I try running gnome do, I get this. chris@Chris-Ubuntu-Laptop:~$ gnome-do pgrep: invalid user name: -u and it is not writable Trying sudo: chris@Chris-Ubuntu-Laptop:~$ sudo gnome-do [NetworkService] Could not initialize Network Manager dbus: Unable to open the session message bus. [Error 17:54:30.122] [SystemService] Could not initialize dbus: Unable to open the session message bus. (Do:2401): Wnck-CRITICAL **: wnck_set_client_type got called multiple times. (Do:2401): libdo-WARNING **: Binding '<Super>space' failed! [Error 17:54:30.649] [AbstractKeyBindingService] Key "" is already mapped. Tomboy.NotesItemSource "Tomboy Notes" encountered an error in UpdateItems: System.TypeInitializationException: An exception was thrown by the type initializer for Tomboy.TomboyDBus ---> System.Exception: Unable to open the session message bus. ---> System.ArgumentNullException: Argument cannot be null. Parameter name: address at NDesk.DBus.Bus.Open (System.String address) [0x00000] in <filename unknown>:0 at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 at Tomboy.TomboyDBus..cctor () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at Tomboy.NotesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Firefox.PlacesItemSource "Firefox Places" encountered an error in UpdateItems: System.InvalidCastException: Cannot cast from source type to destination type. at Mono.Data.Sqlite.SqliteDataReader.VerifyType (Int32 i, DbType typ) [0x00000] in <filename unknown>:0 at Mono.Data.Sqlite.SqliteDataReader.GetString (Int32 i) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource+<LoadPlaceItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem].AddEnumerable (IEnumerable`1 enumerable) [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem]..ctor (IEnumerable`1 collection) [0x00000] in <filename unknown>:0 at System.Linq.Enumerable.ToArray[PlaceItem] (IEnumerable`1 source) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Do.Universe.Linux.GNOMESpecialLocationsItemSource "GNOME Special Locations" encountered an error in UpdateItems: System.IO.FileNotFoundException: Could not find file "/root/.gtk-bookmarks". 
File name: '/root/.gtk-bookmarks' at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, Boolean anonymous, FileOptions options) [0x00000] in <filename unknown>:0 at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) at System.IO.File.OpenRead (System.String path) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path, System.Text.Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.StreamReader:.ctor (string) at Do.Universe.Linux.GNOMESpecialLocationsItemSource+<ReadBookmarkItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at Do.Universe.Linux.GNOMESpecialLocationsItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . ^[^\Full thread dump: "<unnamed thread>" tid=0x0xb7570700 this=0x0x56f18 thread handle 0x403 state : not waiting owns () at (wrapper managed-to-native) Mono.Unix.Native.Syscall.read (int,intptr,ulong) <0xffffffff> at Mono.Unix.Native.Syscall.read (int,void*,ulong) <0x00023> at Mono.Unix.UnixStream.Read (byte[],int,int) <0x0008b> at NDesk.DBus.Connection.ReadMessage () <0x0003c> at NDesk.DBus.Connection.Iterate () <0x0001b> at NDesk.DBus.BusG/<Init>c__AnonStorey0.<>m__0 (intptr,NDesk.GLib.IOCondition,intptr) <0x00033> at (wrapper native-to-managed) NDesk.DBus.BusG/<Init>c__AnonStorey0.<>m__0 (intptr,NDesk.GLib.IOCondition,intptr) <0xffffffff> at (wrapper managed-to-native) Gtk.Clipboard.gtk_clipboard_wait_is_text_available (intptr) <0xffffffff> at Gtk.Clipboard.WaitIsTextAvailable () <0x00017> at Do.Universe.SelectedTextItem.UpdateSelection (object,System.EventArgs) <0x00027> at Do.Platform.AbstractApplicationService.OnSummoned () <0x00025> at Do.Platform.ApplicationService.<ApplicationService>m__31 (object,System.EventArgs) <0x00013> at Do.Core.Controller.OnSummoned () <0x00025> at Do.Core.Controller.Summon () <0x00027> at Do.Do.Main (string[]) <0x001eb> at (wrapper runtime-invoke) <Module>.runtime_invoke_void_object (object,intptr,intptr,intptr) <0xffffffff> "<unnamed thread>" tid=0x0xb2c81b40 this=0x0x194150 thread handle 0x412 state : interrupted state owns () at (wrapper managed-to-native) System.IO.InotifyWatcher.ReadFromFD (intptr,byte[],intptr) <0xffffffff> at System.IO.InotifyWatcher.Monitor () <0x0005f> at System.Threading.Thread.StartInternal () <0x00057> at (wrapper runtime-invoke) object.runtime_invoke_void__this__ (object,intptr,intptr,intptr) <0xffffffff> "Universe Update Dispatcher" tid=0x0xb29ffb40 this=0x0x569d8 thread handle 0x41b state : interrupted state owns () at (wrapper managed-to-native) System.Threading.WaitHandle.WaitOne_internal (System.Threading.WaitHandle,intptr,int,bool) <0xffffffff> at System.Threading.WaitHandle.WaitOne (System.TimeSpan,bool) <0x00133> at System.Threading.WaitHandle.WaitOne (System.TimeSpan) <0x00022> at Do.Core.UniverseManager.UniverseUpdateLoop () <0x0007a> at System.Threading.Thread.StartInternal () <0x00057> at (wrapper runtime-invoke) object.runtime_invoke_void__this__ (object,intptr,intptr,intptr) <0xffffffff> 
Tomboy.NotesItemSource "Tomboy Notes" encountered an error in UpdateItems: System.TypeInitializationException: An exception was thrown by the type initializer for Tomboy.TomboyDBus ---> System.Exception: Unable to open the session message bus. ---> System.ArgumentNullException: Argument cannot be null. Parameter name: address at NDesk.DBus.Bus.Open (System.String address) [0x00000] in <filename unknown>:0 at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at NDesk.DBus.Bus.get_Session () [0x00000] in <filename unknown>:0 at Tomboy.TomboyDBus..cctor () [0x00000] in <filename unknown>:0 --- End of inner exception stack trace --- at Tomboy.NotesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Firefox.PlacesItemSource "Firefox Places" encountered an error in UpdateItems: System.InvalidCastException: Cannot cast from source type to destination type. at Mono.Data.Sqlite.SqliteDataReader.VerifyType (Int32 i, DbType typ) [0x00000] in <filename unknown>:0 at Mono.Data.Sqlite.SqliteDataReader.GetString (Int32 i) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource+<LoadPlaceItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem].AddEnumerable (IEnumerable`1 enumerable) [0x00000] in <filename unknown>:0 at System.Collections.Generic.List`1[Firefox.PlaceItem]..ctor (IEnumerable`1 collection) [0x00000] in <filename unknown>:0 at System.Linq.Enumerable.ToArray[PlaceItem] (IEnumerable`1 source) [0x00000] in <filename unknown>:0 at Firefox.PlacesItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . Do.Universe.Linux.GNOMESpecialLocationsItemSource "GNOME Special Locations" encountered an error in UpdateItems: System.IO.FileNotFoundException: Could not find file "/root/.gtk-bookmarks". File name: '/root/.gtk-bookmarks' at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, Boolean anonymous, FileOptions options) [0x00000] in <filename unknown>:0 at System.IO.FileStream..ctor (System.String path, FileMode mode, FileAccess access, FileShare share) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.FileStream:.ctor (string,System.IO.FileMode,System.IO.FileAccess,System.IO.FileShare) at System.IO.File.OpenRead (System.String path) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path, System.Text.Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize) [0x00000] in <filename unknown>:0 at System.IO.StreamReader..ctor (System.String path) [0x00000] in <filename unknown>:0 at (wrapper remoting-invoke-with-check) System.IO.StreamReader:.ctor (string) at Do.Universe.Linux.GNOMESpecialLocationsItemSource+<ReadBookmarkItems>c__Iterator3.MoveNext () [0x00000] in <filename unknown>:0 at Do.Universe.Linux.GNOMESpecialLocationsItemSource.UpdateItems () [0x00000] in <filename unknown>:0 at Do.Universe.Safe.SafeItemSource.UpdateItems () [0x00000] in <filename unknown>:0 . It stops when I try my key combination, ctrl-alt-. It does not pop up though.

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing new options for general backup and restore in the first part of this article, this second part deals with the also rather new feature of Transaction Logging and Replay, which was released in version 11.1 and enhances the existing restore options. Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1). Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging can fill that gap and provides you with an option to capture these post-backup transactions for later replay; typical transactions that can be logged include data loads and calculations performed after the backup. In order to activate it, corresponding statements can be added to the Essbase.cfg file, using the TRANSACTIONLOGLOCATION command. The complete syntax reads: TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE where appname and dbname are optional parameters giving you the chance, in combination with the ENABLE or DISABLE keyword, to set Transaction Logging for certain applications or databases or to exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If appname and dbname are not defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist or needs to be created before log information can be written to it. NATIVE is a reserved keyword that shouldn't be changed. The following example shows how to first enable logging on a more general level for all databases in the application Sample, followed by a disabling statement on a more granular level for only the Basic database in application Sample, hence excluding it from being logged. TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings. A replay of logged transactions after restoring a database, if required, can be done only by administrators. The following options are available: In Administration Services, select Replay Transactions on the right-click menu on the database. Here you can choose to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Alternatively, you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database. These sequence IDs (0, 1, 2, and so on) are assigned to each logged transaction, indicating the order in which the transaction was performed. This helps to ensure the integrity of the restored data after a replay, as the replay of transactions is enforced in the same order in which they were originally performed. So, for example, a calculation originally run after a data load cannot be replayed before the data load itself has been replayed. After a transaction is replayed, you can replay only transactions with a greater sequence ID.
    For example, replaying the transaction with sequence ID 4 includes all preceding transactions, while afterwards you can only replay transactions with a sequence ID of 5 or greater. Tip: After restoring a database from a backup you should always completely replay all logged transactions that were executed after the backup before executing new transactions. Not only does the transaction information itself need to be logged and stored in a specified directory as described above; during transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory: ARBORPATH/app/appname/dbname/Replay These files are then used during the replay of a logged transaction. By default Essbase archives only data load and rules files for client data loads, but in order to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is: TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION] The [appname [dbname]] argument works the same way as for TRANSACTIONLOGLOCATION, while the OPTION argument controls which file copies are archived; according to the product documentation the valid values are CLIENT (the default), SERVER, SERVER_CLIENT and NONE, and the setting should be chosen based on the location from which transactions usually take place (a hedged example configuration line is shown at the end of this article). Selecting the NONE option prevents Essbase from saving the respective files and the data load cannot be replayed. In this case you must first manually load the data before you can replay the transactions. Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded. You can find more detailed information in the following documents: Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1), Oracle Essbase Online Documentation (rel. 11.1.2.1), Enterprise Performance Management System Documentation (including previous releases), or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected]. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis. Disclaimer: All methods and features mentioned in this article must be considered and tested carefully in relation to your environment, processes and requirements.
As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.
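    Picking up the TRANSACTIONLOGDATALOADARCHIVE syntax referenced above, the single configuration line below is a hedged illustration only (the application and database names are the sample ones used earlier; verify the option values against the documentation for your release). It archives data load artifacts for both server-side and client data loads for the Basic database in the Sample application:

TRANSACTIONLOGDATALOADARCHIVE Sample Basic SERVER_CLIENT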

    Read the article

  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
    Where is WebCL? The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML 5 compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some prototype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail. Intro to RiverTrail Intel Labs JavaScript RiverTrail provides GPU accelerated SIMD data-parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared data synchronization or scheduling. It has been proposed as a draft specification to ECMA (known as an ECMA strawman). RiverTrail runs in all popular browsers (except I.E. of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive For a video overview, see http://www.youtube.com/watch?v=jueg6zB5XaM . ParallelArray The ParallelArray type is the central component of this API & is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size – e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable & fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions & even the canvas.

// Create an empty ParallelArray
var pa = new ParallelArray(); // pa0 = <>

// Create a ParallelArray out of a nested JS array.
// Note that the inner arrays are also ParallelArrays
var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4,5>>

// Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor
var pa = new ParallelArray([3, 2], function(iv){return iv[0] * iv[1];}); // pa7 = <<0,0>, <0,1>, <0,2>>

// Create a ParallelArray from canvas. This creates a PA with shape [w, h, 4]
var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray

ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter, which return a new ParallelArray. Other functions are scalar - reduce returns a scalar value & get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data parallelization optimization (avoid global var manipulation, recursion). For reduce & scan, order is not guaranteed - the onus is on the dev to provide an elemental function that is commutative and associative so that scan will be deterministic – e.g. Sum is associative, but Avg is not. map Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape preserving & index free - it cannot inspect neighboring values.
// Adding one to each element.
var source = new ParallelArray([1,2,3,4,5]);
var plusOne = source.map(function inc(v) { return v+1; }); // <2,3,4,5,6>

combine Combine is similar to map, except that an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine can choose how deep to traverse - it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array & the current index within it - an element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive.

var source = new ParallelArray([1,2,3,4,5]);
var plusOne = source.combine(function inc(i) { return this.get(i)+1; });

reduce Reduces the elements of an array to a single scalar result – e.g. Sum.

// Calculate the sum of the elements
var source = new ParallelArray([1,2,3,4,5]);
var sum = source.reduce(function plus(a,b) { return a+b; });

scan Like reduce, but stores the intermediate results – it returns a ParallelArray whose ith element is the result of using the elemental function to reduce the elements between 0 and i in the original ParallelArray.

// do a partial sum
var source = new ParallelArray([1,2,3,4,5]);
var psum = source.scan(function plus(a,b) { return a+b; }); // <1, 3, 6, 10, 15>

scatter A reordering function - you specify, for a certain source index, where the value should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result:

var source = new ParallelArray([1,2,3,4,5]);
var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1>
// if there is a conflict use the max. use 33 as a default value.
var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) {return a>b?a:b; }); // <2, 33, 5, 3, 4>

filter
// filter out values that are not even
var source = new ParallelArray([1,2,3,4,5]);
var even = source.filter(function even(iv) { return (this.get(iv) % 2) == 0; }); // <2,4>

Flatten Used to collapse the outer dimensions of an array into a single dimension.

pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>>
pa.flatten(); // <1,2,3,4>

Partition Used to restore the original shape of the array.

var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4>
pa.partition(2); // <<1,2>,<3,4>>

Get Returns the value found at the indices, or undefined if no such value exists.

var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24])
pa.get([1,1]); // 11
pa.get([1]); // <10,11,12,13,14>
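Because each of these methods returns a new ParallelArray, the calls can also be chained. The short sketch below is not from the original article; it simply combines the map and reduce calls shown above to compute a sum of squares, assuming the semantics described in this post.

var data = new ParallelArray([1,2,3,4]);
var sumOfSquares = data.map(function sq(v) { return v*v; })
                       .reduce(function plus(a,b) { return a+b; });
// sumOfSquares = 30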

    Read the article

  • Exception Handling And Other Contentious Political Topics

    - by Justin Jones
    So about three years ago, around the time of my last blog post, I promised a friend I would write this post. Keeping promises is a good thing, and this is my first step towards easing back into regular blogging. I fully expect him to return from Pennsylvania to buy me a beer over this. However, it’s been an… ahem… eventful three years or so, and blogging, unfortunately, got pushed to the back burner on my priority list, along with a few other career minded activities. Now that the personal drama of the past three years is more or less resolved, it’s time to put a few things back on the front burner. What I consider to be proper exception handling practices is relatively well known these days. There are plenty of blog posts out there already on this topic which more or less echo my opinions on this topic. I’ll try to include a few links at the bottom of the post. Several years ago I had an argument with a co-worker who posited that exceptions should be caught at every level and logged. This might seem like sanity on the surface, but the resulting error log looked something like this: Error: System.SomeException Followed by small stack trace. Error: System.SomeException Followed by slightly bigger stack trace. Error: System.SomeException Followed by slightly bigger stack trace. Error: System.SomeException Followed by slightly bigger stack trace. Error: System.SomeException Followed by slightly bigger stack trace. Error: System.SomeException Followed by slightly bigger stack trace. Error: System.SomeException Followed by slightly bigger stack trace. Error: System.SomeException Followed by slightly bigger stack trace.   These were all the same exception. The problem with this approach is that the error log, if you run any kind of analytics on in, becomes skewed depending on how far up the stack trace your exception was thrown. To mitigate this problem, we came up with the concept of the “PreLoggedException”. Basically, we would log the exception at the very top level and subsequently throw the exception back up the stack encapsulated in this pre-logged type, which our logging system knew to ignore. Now the error log looked like this: Error: System.SomeException Followed by small stack trace. Much cleaner, right? Well, there’s still a problem. When your exception happens in production and you go about trying to figure out what happened, you’ve lost more or less all context for where and how this exception was thrown, because all you really know is what method it was thrown in, but really nothing about who was calling the method or why. What gives you this clue is the entire stack trace, which we’re losing here. I believe that was further mitigated by having the logging system pull a system stack trace and add it to the log entry, but what you’re actually getting is the stack for how you got to the logging code. You’re still losing context about the actual error. Not to mention you’re executing a whole slew of catch blocks which are sloooooooowwwww……… In other words, we started with a bad idea and kept band-aiding it until it didn’t suck quite so bad. When I argued for not catching exceptions at every level but rather catching them following a certain set of rules, my co-worker warned me “do yourself a favor, never express that view in any future interviews.” I suppose this is my ultimate dismissal of that advice, but I’m not too worried. My approach for exception handling follows three basic rules: Only catch an exception if 1. You can do something about it. 2. 
You can add useful information to it. 3. You're at an application boundary. Here's what that means: 1. Only catch an exception if you can do something about it. We'll start with a trivial example of a login system that uses a file. Please, never actually do this in production code, it's just a concocted example. So if our code goes to open a file and the file isn't there, we get a FileNotFound exception. If the calling code doesn't know what to do with this, it should bubble up. However, if we know how to create the file from scratch, we can create the file and continue on our merry way. When you run into situations like this though, what should really run through your head is "How can I avoid handling an exception at all?" In this case, it's a trivial matter to simply check for the existence of the file before trying to open it. If we detect that the file isn't there, we can accomplish the same thing without having to handle it in a catch block. 2. Only catch an exception if you can add useful information to it. Continuing with the poorly thought out file based login system we contrived in part 1, if the code calls a Login(…) method and the FileNotFound exception is thrown higher up the stack, the code that calls Login must account for a FileNotFound exception. This is kind of counterintuitive because the calling code should not need to know the internals of the Login method, and the data file is an implementation detail. What makes more sense, assuming that we didn't implement any of the good advice from step 1, is for Login to catch the FileNotFound exception and wrap it in a new exception. For argument's sake we'll say LoginSystemFailureException. (Sorry, couldn't think of anything better at the moment.) This gives us two stack traces, preserving the original stack trace in the inner exception, and is also much more informative to the calling code (a short illustrative sketch follows at the end of this post). 3. Only catch an exception if you're at an application boundary. At some point we have to catch all the exceptions, even the ones we don't know what to do with. WinForms, ASP.Net, and most other UI technologies have some kind of built in mechanism for catching unhandled exceptions without fatally terminating the application. It's still a good idea to somehow gracefully exit the application in this case if possible though, because you can no longer be sure what state your application is in, but nothing annoys a user more than an application just exploding. These unhandled exceptions need to be logged, and this is a good place to catch them. Ideally you never want this option to be exercised, but code as though it will be. When you log these exceptions, give them a "Fatal" status (e.g. Log4Net) and make sure these bugs get handled in your next release. That's it in a nutshell. If you do it right, each exception will only get logged once and with the largest stack trace possible, which will make those 2am emergency severity 1 debugging sessions much shorter and less frustrating. Here are a few people who also have interesting things to say on this topic:  http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx http://www.codeproject.com/Articles/9538/Exception-Handling-Best-Practices-in-NET I know there's more but I can't find them at the moment.
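To make rules 1 and 2 concrete, here is a minimal C# sketch of the contrived file-based login example. It is an illustration only, not the author's original code: LoginSystemFailureException, the file path and the lookup logic are all assumptions.

using System;
using System.IO;

// Hypothetical wrapper exception used to add context (rule 2).
public class LoginSystemFailureException : Exception
{
    public LoginSystemFailureException(string message, Exception inner)
        : base(message, inner) { }
}

public class FileBasedLoginService
{
    private readonly string _credentialFile = "users.txt"; // assumed location

    public bool Login(string userName)
    {
        // Rule 1: prefer a simple existence check over catching FileNotFoundException.
        if (!File.Exists(_credentialFile))
        {
            File.Create(_credentialFile).Dispose();
        }

        try
        {
            return File.ReadAllText(_credentialFile).Contains(userName);
        }
        catch (IOException ex)
        {
            // Rule 2: wrap and rethrow, preserving the original stack trace
            // in the inner exception so the caller gets useful context.
            throw new LoginSystemFailureException(
                "The login data store could not be read.", ex);
        }
    }
}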

    Read the article

  • Ubuntu 12.04 + Wifi not working

    - by user171154
    i'm having problems connecting over wireless. At the moment, I'm using wicd. It seems to get stuck on "Verifying AP association...". Without wicd I can get the connection up and ping the Net - but if I take eth0 down (ifconfig eth0 down), my wireless goes away too (same result if I unplug the wire instead). wicd is the only way I can bring eth0 back (which is the main reason I'm using it) - ifconfig eth0 and/or ifup eth0 do not re-enable the connection (I just discovered it leaves out the gateway. Adding the gateway back in re-enables the connection including wifi; I didn't want to delete the info about wicd above in case it gives someone an idea.) Doing it manually, despite the errors (which it would be nice to also resolve) - allows me to ping the outside world: ifup wlan0 ioctl[SIOCSIWENCODEEXT]: Invalid argument ioctl[SIOCSIWENCODEEXT]: Invalid argument ssh stop/waiting ssh start/running, process 17336 ping -I wlan0 -c 4 8.8.8.8 PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_req=1 ttl=43 time=48.8 ms 64 bytes from 8.8.8.8: icmp_req=2 ttl=43 time=47.9 ms 64 bytes from 8.8.8.8: icmp_req=3 ttl=43 time=48.7 ms 64 bytes from 8.8.8.8: icmp_req=4 ttl=43 time=53.2 ms --- 8.8.8.8 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 47.975/49.711/53.235/2.063 ms # iwconfig lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"TPLINK" Mode:Managed Frequency:2.427 GHz Access Point: 64:66:xx:xx:xx:22 Bit Rate=108 Mb/s Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=70/70 Signal level=-39 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:3 Missed beacon:0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 01 serial: f0:7d:68:c1:b4:13 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-67-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:17 memory:dfbf0000-dfbfffff ip route default via 192.168.0.1 dev eth0 default via 192.168.0.1 dev wlan0 metric 100 169.254.0.0/16 dev wlan0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.102 192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.12 (For the record, I have no idea what the 169.254.0.0 address is doing there.) 
uname -a 3.2.0-67-generic-pae #101-Ubuntu SMP Tue Jul 15 18:04:54 UTC 2014 i686 i686 i386 GNU/Linux lshw -C network *-network description: Ethernet interface product: NetXtreme BCM5751 Gigabit Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 01 serial: 00:11:11:59:fc:09 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5751-v3.23a ip=192.168.0.102 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:16 memory:dfcf0000-dfcfffff *-network description: Wireless interface product: AR5418 Wireless Network Adapter [AR5008E 802.11(a)bgn] (PCI-Express) vendor: Qualcomm Atheros physical id: 0 /etc/network/interfaces # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback source /etc/network/interfaces.eth0 source /etc/network/interfaces.wlan0 /etc/network/interfaces.eth0 #Main Interface auto eth0 iface eth0 inet static address 192.168.0.102 netmask 255.255.255.0 gateway 192.168.0.1 /etc/network/interfaces.wlan0 auto wlan0 iface wlan0 inet static address 192.168.0.12 gateway 192.168.0.1 dns-nameservers 192.168.0.1 8.8.8.8 netmask 255.255.255.0 wpa-driver wext wpa-ssid TPLINK wpa-ap-scan 1 wpa-proto RSN wpa-pairwise CCMP wpa-group CCMP wpa-key-mgmt WPA-PSK wpa-psk dca1badb5fd4e9axxx4xxdaaxxfa91xx610bxx6a7d57ef67af9809dxx6af42e39 /etc/wpa_supplicant.conf ctrl_interface=/var/run/wpa_supplicant network={ ssid="TPLINK" psk="my password" key_mgmt=WPA-PSK proto=RSN pairwise=CCMP group=CCMP } ifdown eth0 ifdown: interface eth0 not configured ifconfig eth0 Link encap:Ethernet HWaddr 00:11:xx:xx:xx:09 inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:213690 errors:0 dropped:0 overruns:0 frame:0 TX packets:155266 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:220057808 (220.0 MB) TX bytes:21137696 (21.1 MB) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:196412 errors:0 dropped:0 overruns:0 frame:0 TX packets:196412 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:153270697 (153.2 MB) TX bytes:153270697 (153.2 MB) wlan0 Link encap:Ethernet HWaddr f0:7d:xx:xx:xx:13 inet addr:192.168.0.12 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11335 errors:0 dropped:0 overruns:0 frame:0 TX packets:7287 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2563290 (2.5 MB) TX bytes:855746 (855.7 KB) ifconfig eth0 down ifconfig eth0 Link encap:Ethernet HWaddr 00:xx:xx:xx:xx:09 inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2 errors:0 dropped:0 overruns:0 frame:0 TX packets:1 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:192 (192.0 B) TX bytes:94 (94.0 B) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING 
MTU:16436 Metric:1 RX packets:196418 errors:0 dropped:0 overruns:0 frame:0 TX packets:196418 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:153270871 (153.2 MB) TX bytes:153270871 (153.2 MB) wlan0 Link encap:Ethernet HWaddr f0:7d:xx:xx:xx:13 inet addr:192.168.0.12 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11359 errors:0 dropped:0 overruns:0 frame:0 TX packets:7293 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2565482 (2.5 MB) TX bytes:856363 (856.3 KB) ip route default via 192.168.0.1 dev wlan0 metric 100 169.254.0.0/16 dev wlan0 scope link metric 1000 192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.12 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.102 ping -I wlan0 -c 4 8.8.8.8 PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data. --- 8.8.8.8 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3024ms ping -I eth0 -c 3 router PING router (192.168.0.1) from 192.168.0.102 eth0: 56(84) bytes of data. --- router ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2015ms ping -I wlan0 -c 3 router PING router (192.168.0.1) from 192.168.0.12 wlan0: 56(84) bytes of data. --- router ping statistics --- 3 packets transmitted, 0 received, 100% packet loss, time 2014ms Let me know if you need more info. Thank you in advance.

    Read the article

  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code without the need for configuring services in the <system.serviceModel> section of the config file. In fact, you don't need a <system.serviceModel> section at all. What you will need (and want) sometimes is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So, exactly what do we need to configure? The binding type and the binding constraints, the metadata behavior, and the debug behavior. You can of course configure even more if you want to; WCF is, after all, the king of configuration… As an example I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, in regards to security, it is probably better to have some constraints on the server, but this is only a demonstration. The ServiceConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I'll be using this helper class for all configuration, for both the server and the client. In WCF, the client and the server each have their own WCF configuration. With this piece of code, they will be sharing the same configuration.

public static class ServiceConfig
{
    public static Binding DefaultBinding
    {
        get
        {
            var binding = new BasicHttpBinding();
            Configure(binding);
            return binding;
        }
    }

    public static void Configure(HttpBindingBase binding)
    {
        if (binding == null)
        {
            throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
        }

        binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
        binding.MaxBufferSize = Int32.MaxValue;
        binding.MaxBufferPoolSize = 2147483647;
        binding.MaxReceivedMessageSize = Int32.MaxValue;
        binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
        binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
        binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
        binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
        binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
    }

    public static ServiceMetadataBehavior ServiceMetadataBehavior
    {
        get
        {
            return new ServiceMetadataBehavior
            {
                HttpGetEnabled = true,
                MetadataExporter = {PolicyVersion = PolicyVersion.Policy15}
            };
        }
    }

    public static ServiceDebugBehavior ServiceDebugBehavior
    {
        get
        {
            var smb = new ServiceDebugBehavior();
            Configure(smb);
            return smb;
        }
    }

    public static void Configure(ServiceDebugBehavior behavior)
    {
        if (behavior == null)
        {
            throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
        }

        behavior.IncludeExceptionDetailInFaults = true;
    }
}

Configuring the server There are basically two ways to host a WCF service: in IIS and self-hosting. When hosting a WCF service in a production environment using SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration for self-hosted services and services hosted in IIS. And that is exactly what you want to do: test the same configuration in test and production environments.
    Configuring when Self-hosting When self-hosting, in order to start the service, you'll have to instantiate the ServiceHost class, configure the service and open it.

// Create the service host.
var host = new ServiceHost(typeof(MyService), endpoint);

// Configure the binding
host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

// Configure metadata behavior
host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

// Configure debug behavior
ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

// Start listening to the service
host.Open();

Configuring when hosting in IIS When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running: a svc-file with code-behind, an interface for the service, and a Web.config. In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup for the svc-file:

<%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>

The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are overloads we can override:

System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)

In this example, we'll be using the last one, so our implementation looks like this:

public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
{
    protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        var host = base.CreateServiceHost(serviceType, baseAddresses);
        host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
        ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
        return host;
    }
}

As you can see, we are using the same configuration helper we used when self-hosting. Now, when you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section will be ignored when the service has a custom factory. If you want to configure something else in the config file, you can still do that in some other section. Configuring the client Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have to generate those awful proxy classes for the client. If you share the contracts with the server in their own assembly, like in the layer diagram below, you can share the same piece of code.
    The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:

var identity = EndpointIdentity.CreateDnsIdentity("localhost");
var endpointAddress = new EndpointAddress(endPoint, identity);
var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);
var myService = factory.CreateChannel();
myService.Hello();
// the proxy returned by CreateChannel also implements IClientChannel, so close it explicitly
((IClientChannel)myService).Close();
factory.Close();

Happy configuration!
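As mentioned at the top of the post, the Uri of the service usually still lives in the configuration file. A minimal sketch of reading the endPoint address used above from appSettings (the MyServiceUri key name and the address value are assumptions, not part of the original post) could look like this:

// requires a reference to System.Configuration
// <appSettings>
//   <add key="MyServiceUri" value="http://localhost:8732/MyService"/>
// </appSettings>
using System;
using System.Configuration;

public static class ServiceAddress
{
    public static Uri FromAppSettings()
    {
        // read the endpoint address instead of using <system.serviceModel>
        return new Uri(ConfigurationManager.AppSettings["MyServiceUri"]);
    }
}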

    Read the article

  • Recursion in the form of a Recursive Func&lt;T, T&gt;

    - by ToStringTheory
    I gotta admit, I am kind of surprised that I didn't realize I could do this sooner.  I recently had a problem which required a recursive function call to come up with the answer.  After some time messing around with a recursive method, and creating an API that I was not happy with, I was able to create an API that I enjoy and that seems intuitive. Introduction To bring it to a simple example, consider the summation to n: 1 + 2 + … + n. A mathematically identical closed form is n(n+1)/2. In a .NET function, this can be represented by a function:

Func<int, int> summation = x => x*(x+1)/2;

Calling summation with an input integer will yield the summation to that number:

var sum10 = summation(4); // sum10 would be equal to 10

But what if I wanted to get a second level summation… first sum to n, and then use that result as the input to the same function, to find the second level summation. So as an easy example, calculate the summation to 3, which yields 6.  Then calculate the summation to 6, which yields 21. Represented as a mathematical formula, that is summation(summation(3)) = 21. So what if I wanted to represent this as .NET functions? I can always do:

// using the summation formula from above
var sum3 = summation(3);      // sets sum3 to 6
var sum3_2 = summation(sum3); // sets sum3_2 to 21

I could always create a while loop to perform the calculations too:

Func<int, int> summation = x => x*(x+1)/2; // for the interests of a smaller example, using shorthand
int sumResultTo = 3;
int level = 2;
while(level-- > 0)
{
    sumResultTo = summation(sumResultTo);
}
// sumResultTo is equal to 21 now.

Or express it as a for-loop, method calls, etc…  I really didn't like any of the options that I tried.  Then it dawned on me – since I was using a Func<T, T> anyways, why not use the Func's output from one call as the input to another call directly. Some Code So, I decided that I wanted a recursion class.  Something that would be generic and reusable in case I ever wanted to do something like this again. It is limited to only the Func<T1, T2> level of Func, and T1 must be the same as T2. The first thing in this class is a private field for the function:

private readonly Func<T, T> _functionToRecurse;

So, since I want the function to be unchangeable, I have defined it as readonly.  Therefore my constructor looks like:

public Recursion(Func<T, T> functionToRecurse)
{
    if (functionToRecurse == null)
    {
        throw new ArgumentNullException("functionToRecurse", "The function to recurse can not be null");
    }
    _functionToRecurse = functionToRecurse;
}

Simple enough.  If you have any questions, feel free to post them in the comments, and I will be sure to answer them.  Next, I want to be able to get the result of the function depending on how many levels of recursion are requested:

private Func<T, T> GetXLevel(int level)
{
    if (level < 1)
    {
        throw new ArgumentOutOfRangeException("level", level, "The level of recursion must be greater than 0");
    }
    if (level == 1) return _functionToRecurse;
    return _GetXLevel(level - 1, _functionToRecurse);
}

So, if you pass in 1 for the level, you get just the Func<T,T> back.  If you say that you want to go deeper down the rabbit hole, it calls a method which accepts the level it is at, and the function which it needs to use to recurse further:

private Func<T, T> _GetXLevel(int level, Func<T, T> prevFunc)
{
    if (level == 1) return y => prevFunc(_functionToRecurse(y));
    return _GetXLevel(level - 1, y => prevFunc(_functionToRecurse(y)));
}

That is really all that is needed for this class.
If I exposed the GetXLevel function publicly, I could use that to get the function for a level, and pass in the argument..  But I wanted something better.  So, I used the ‘this’ array operator for the class: public Func<T,T> this[int level] { get { if (level < 1) { throw new ArgumentOutOfRangeException("level", level, "The level of recursion must be greater than 0"); } return this.GetXLevel(level); } } So, using the same example above of finding the second recursion of the summation of 3: var summator = new Recursion<int>(x => (x * (x + 1)) / 2); var sum_3_level2 = summator[2](3); //yields 21 You can even find just store the delegate to the second level summation, and use it multiple times: var summator = new Recursion<int>(x => (x * (x + 1)) / 2); var sum_level2 = summator[2]; var sum_3_level2 = sum_level2(3); //yields 21 var sum_4_level2 = sum_level2(4); //yields 55 var sum_5_level2 = sum_level2(5); //yields 120 Full Code Don’t think I was just going to hold off on the full file together and make you do the hard work…  Copy this into a new class file: public class Recursion<T> { private readonly Func<T, T> _functionToRecurse; public Recursion(Func<T, T> functionToRecurse) { if (functionToRecurse == null) { throw new ArgumentNullException("functionToRecurse", "The function to recurse can not be null"); } _functionToRecurse = functionToRecurse; } public Func<T,T> this[int level] { get { if (level < 1) { throw new ArgumentOutOfRangeException("level", level, "The level of recursion must be greater than 0"); } return this.GetXLevel(level); } } private Func<T, T> GetXLevel(int level) { if (level < 1) { throw new ArgumentOutOfRangeException("level", level, "The level of recursion must be greater than 0"); } if (level == 1) return _functionToRecurse; return _GetXLevel(level - 1, _functionToRecurse); } private Func<T, T> _GetXLevel(int level, Func<T, T> prevFunc) { if (level == 1) return y => prevFunc(_functionToRecurse(y)); return _GetXLevel(level - 1, y => prevFunc(_functionToRecurse(y))); } } Conclusion The great thing about this class, is that it can be used with any function with same input/output parameters.  I strived to find an implementation that I found clean and useful, and I finally settled on this.  If you have feedback – good or bad, I would love to hear it!
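As a quick, hypothetical illustration of that reuse (not from the original post), the same Recursion<T> class can compose a simple doubling function:

var doubler = new Recursion<int>(x => x * 2);
var timesEight = doubler[3](1); // doubles three times: 1 -> 2 -> 4 -> 8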

    Read the article

  • Ubuntu - Upgrade to 10.4 - general error mounting filesystems

    - by JC Denton
    Hello All, Using the upgrade manager I upgraded my 8.x LTS installation to 10.4. After rebooting, the system encountered an error and dropped into the recovery console. It appeared to be a problem caused by ureadahead as described here: http://ubuntuguide.net/howto-fix-ureadahead-problem-after-upgrading-to-ubuntu-10-04. So I renamed ureadahead.conf to ureadahead.moved (after remounting the partition rw). This did not help, so I renamed the file back again. After rebooting, the following error appears: ureadahead terminated with status 5. udev_monitor_new_from_netlink: error getting socket: Invalid Argument mountall:mountall.c:3204 assertion failed in main: udev_monitor = udev_monitor_new_from_netlink(udev,"udev") init: mountall main process (2532) killed by ABRT signal. General error mounting filesystems How can I get my system to boot properly again? Thanks

    Read the article

  • Why the cucumber features keeps running even though it fails?

    - by Millisami
    Its a rails 2.3.5 app. I'm using the rspec and cucumber for testing. When I run autospec, it runs correctly with the warning (Not running features. To run features in autotest, set AUTOFEATURE=true.) as below: [~/rails_apps/automation (campaign)?] ? autospec (Not running features. To run features in autotest, set AUTOFEATURE=true.) (Not running features. To run features in autotest, set AUTOFEATURE=true.) loading autotest/rails_rspec /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/pathname.rb:263: warning: `*' interpreted as argument prefix /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/bin/ruby /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/rspec-1.3.0/bin/spec --autospec /home/millisami/rails_apps/automation/spec/controllers/campaigns_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/board_spec.rb /home/millisami/rails_apps/automation/spec/models/user_spec.rb /home/millisami/rails_apps/automation/spec/models/campaign_spec.rb /home/millisami/rails_apps/automation/spec/controllers/outlets_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/boards_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_type_spec.rb /home/millisami/rails_apps/automation/spec/models/vendor_spec.rb /home/millisami/rails_apps/automation/spec/controllers/brands_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/vendors_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/dashboard_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/brand_spec.rb /home/millisami/rails_apps/automation/spec/helpers/dashboard_helper_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_spec.rb /home/millisami/rails_apps/automation/spec/models/client_spec.rb /home/millisami/rails_apps/automation/spec/controllers/clients_controller_spec.rb -O spec/spec.opts Now, as it suggests, when I run AUTOFEATURE=true autospec, the specs runs and the cuke features as well. But the problem is that it won't stop. It runs the features and runs them again and again in a loop. It doesn't stop after it fails. Is this due to the warning Warning: $KCODE is NONE as shown below?? [~/rails_apps/automation (campaign)?] ? AUTOFEATURE=true autospec loading autotest/cucumber_rails_rspec Warning: $KCODE is NONE. /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/treetop-1.4.5/lib/treetop/ruby_extensions/string.rb:31: warning: method redefined; discarding old indent /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/pathname.rb:263: warning: `*' interpreted as argument prefix /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/core_ext/object/blank.rb:49: warning: method redefined; discarding old blank? 
/home/millisami/.rvm/rubies/ree-1.8.7-2010.01/bin/ruby /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/rspec-1.3.0/bin/spec --autospec /home/millisami/rails_apps/automation/spec/controllers/campaigns_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/board_spec.rb /home/millisami/rails_apps/automation/spec/models/user_spec.rb /home/millisami/rails_apps/automation/spec/models/campaign_spec.rb /home/millisami/rails_apps/automation/spec/controllers/outlets_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/boards_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_type_spec.rb /home/millisami/rails_apps/automation/spec/models/vendor_spec.rb /home/millisami/rails_apps/automation/spec/controllers/brands_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/vendors_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/dashboard_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/brand_spec.rb /home/millisami/rails_apps/automation/spec/helpers/dashboard_helper_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_spec.rb /home/millisami/rails_apps/automation/spec/models/client_spec.rb /home/millisami/rails_apps/automation/spec/controllers/clients_controller_spec.rb -O spec/spec.opts

    Read the article

  • Dovecot: doveadm move command syntax

    - by vincent
    I want to transfer an email from [email protected] and put it in a folder of another user: [email protected] For that, it seems appropriate to use the 'doveadm move' command, but I cannot get the right result. Can you help me? syntax: doveadm move [-u <user>|-A] [-S <socket path>] <destination> [user <user>] <search query> my attempts: doveadm move -u [email protected] DIRECTORY_2 mailbox [email protected]:'DIRECTORY_1' all return: Error: Syncing mailbox [email protected]:DOSSIER_1 failed: Mailbox doesn't exist: [email protected]:DOSSIER_1 doveadm -u [email protected] DIRECTORY_2 mailbox sdbox:/complete/path/[email protected]/mailboxes/DIRECTORY_1 all return: Error: Syncing mailbox sdbox:/complete/path/[email protected]/mailboxes/DIRECTORY_1 failed: Mailbox doesn't exist: sdbox:/complete/path/[email protected]/mailboxes/DIRECTORY_1 doveadm move -u [email protected] DIRECTORY_2 mailbox [email protected] 'DIRECTORY_2' all return: Fatal: Unknown argument DIRECTORY_2 some details: the directory exists; the following command works (when 'DIRECTORY_1' and 'DIRECTORY_2' both belong to the [email protected] user): doveadm move -u [email protected] DIRECTORY_2 mailbox 'DIRECTORY_1' ALL My problem is that I cannot specify another user's mailbox as the destination.

    Read the article

  • Querying Active Directory in PowerShell from a Windows host that is not a member of the domain

    - by jshin47
    How can I use PowerShell [adsisearcher] to query a domain that I am not a member of? Usually I will do something like this: $myAdsi = [adsisearcher]"" $myAdsi.SearchRoot = [adsi]"LDAP://dc=corp,dc=mycompany,dc=com" $myAdsi.Filter = "objectCategory=computer" $res = $myAdsi.FindAll() If I run this snippet on a host in my domain, I get the expected result. However, if I run this from a computer that has network access to the domain (through a L2L VPN) I get the error: Exception calling "FindAll" with "0" argument(s): "The specified domain either does not exist or could not be contacted. " At line:11 char:33 + $adComputers = $searcher.FindAll <<<< () + CategoryInfo : NotSpecified: (:) [], MethodInvocationException + FullyQualifiedErrorId : DotNetMethodException This is somewhat expected as I have not provided any sort of credentials to [adsisearcher] that would tell it how to authenticate. My question is: how do I let [adsisearcher] know that I want to authenticate against a domain in which I am not a member?
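    One commonly used approach (sketched here with an interactive credential prompt; this is an illustration, not tested against the asker's environment) is to bind the SearchRoot to a DirectoryEntry that carries explicit credentials for the remote domain:

# prompt for an account that is valid in the target domain, e.g. CORP\svc-ldap-read
$cred = Get-Credential
$netCred = $cred.GetNetworkCredential()

# build a DirectoryEntry with explicit credentials and use it as the search root
$searchRoot = New-Object System.DirectoryServices.DirectoryEntry -ArgumentList "LDAP://dc=corp,dc=mycompany,dc=com", $cred.UserName, $netCred.Password

$myAdsi = [adsisearcher]""
$myAdsi.SearchRoot = $searchRoot
$myAdsi.Filter = "objectCategory=computer"
$res = $myAdsi.FindAll()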

    Read the article
