Search Results

Search found 30778 results on 1232 pages for 'private key'.


  • How do private BitTorrent trackers monitor how much users upload/download?

    - by Jon-Eric
    From Wikipedia: Most private trackers monitor how much users upload or download, and in most situations, enforce a minimum upload-to-download ratio. How exactly can a tracker figure out how much data was uploaded and downloaded by each user? My understanding is that a BitTorrent tracker is merely a registry of users that are currently downloading/seeding, and that peers, once connected, transfer data directly. So I wouldn't think that the tracker would know anything about the amount of data transferred, much less where it came from.
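    (For background, the standard tracker protocol has clients self-report their totals: every periodic "announce" request carries uploaded and downloaded byte counts, and private trackers aggregate those per user via a passkey embedded in the announce URL. A minimal sketch of such a request in Python; the tracker host and passkey are hypothetical:)

        import urllib.parse

        # The client itself reports its transfer totals; the tracker only
        # sums what each client claims, keyed by the passkey in the URL.
        params = urllib.parse.urlencode({
            "info_hash": "...20-byte SHA-1 of the torrent's info dict...",
            "peer_id": "-XX0001-abcdefghijkl",
            "uploaded": 52428800,       # bytes sent to peers (self-reported)
            "downloaded": 10485760,     # bytes received from peers (self-reported)
            "left": 734003200,
            "port": 6881,
            "event": "started",
        })
        print("https://tracker.example/announce?passkey=HYPOTHETICAL&" + params)

    (This is also why ratio enforcement can be cheated by misreporting clients, and why private trackers ban modified clients.)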

    Read the article

  • Getting input from keyboard

    - by SAMIR BHOGAYTA
    When you type on the keyboard, the keystrokes go to one particular application: the active application, the one with input focus. A key actually raises two events, one when the key is pressed and one when it is released. It is not a single event, as you might expect without prior programming experience; in shooter games, for example, the player moves forward while the forward key is held down (KeyDown) and stops when it is released (KeyUp). There is also a KeyPress event, which occurs between KeyDown and KeyUp and therefore behaves much like KeyDown. As with OnPaint and other events, we handle the OnKeyDown event by overriding it (we want the event to fire when the key is pressed, not when it is released). Try the code below to see the role of each property:

        protected override void OnKeyDown(KeyEventArgs keyEvent)
        {
            // Gets the key code
            lblKeyCode.Text = "KeyCode: " + keyEvent.KeyCode.ToString();
            // Gets the key data; recognizes combinations of keys
            lblKeyData.Text = "KeyData: " + keyEvent.KeyData.ToString();
            // Integer representation of KeyData
            lblKeyValue.Text = "KeyValue: " + keyEvent.KeyValue.ToString();
            // Returns true if Alt is pressed
            lblAlt.Text = "Alt: " + keyEvent.Alt.ToString();
            // Returns true if Ctrl is pressed
            lblCtrl.Text = "Ctrl: " + keyEvent.Control.ToString();
            // Returns true if Shift is pressed
            lblShift.Text = "Shift: " + keyEvent.Shift.ToString();
        }

    How do I find out when the user presses a specific key? As you probably imagine, this is easily accomplished with an 'if':

        if (keyEvent.KeyCode == Keys.A)
        {
            MessageBox.Show("'A' was pressed.");
        }

    Most beginners would be tempted to write if (keyEvent.KeyCode == "A"), which is definitely incorrect because System.Windows.Forms.Keys cannot be compared to a string. Also note that the example uses 'keyEvent.KeyCode', so the condition is true even if modifier keys (Alt, Ctrl, Shift, Windows...) are held down together with A: KeyCode does not recognize key combinations. To reject key combinations (Alt+A, Ctrl+Shift+A, etc.) we need to use 'keyEvent.KeyData' instead:

        if (keyEvent.KeyData == Keys.A)
        {
            MessageBox.Show("'A', and only A, was pressed.");
        }

    When you right-click a file in Windows Explorer while holding the Shift key, you get the additional 'Open with...' item in the menu. This is one of many cases where the mouse button is used together with the keyboard. The following code changes the background color of the form only if the form is clicked while the Ctrl key is pressed; if Ctrl is not pressed, clicking the form does nothing:

        private void Form1_Click(object sender, System.EventArgs e)
        {
            Keys modKey = Control.ModifierKeys;
            if (modKey == Keys.Control)
            {
                this.BackColor = Color.Yellow;
            }
        }

    If you have further questions feel free to ask them, and also check the following pages at MSDN: KeyUp Event, KeyPress Event, KeyDown Event.

    Read the article

  • How do I get my Mac to boot from an Ubuntu USB key?

    - by Vinay Gupta
    so if you select "mac" and "usb" on this download page, it gives a series of command line instructions to make a USB key which the MacBook will boot into Ubuntu from. http://www.ubuntu.com/desktop/get-ubuntu/download I've followed them to the letter two or three times on different USB keys, and it doesn't work. There's a very great deal of technical discussion about EFI etc. but this set of instructions seems to suggest it should Just Work, and it doesn't. Help? I'm increasingly unhappy with the more locked-down approach Apple is taking, and I'd quite like to start using Linux with a view to transitioning over to using it as my main operating system, but booting from the CD takes forever, runs slowly and I'm really hoping to get it moving off USB. Can anybody help me?

    Read the article

  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition

    Before I begin I must apologise: I think I am using the term 'functional decomposition' loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive. Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanity's ancestral species; once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements: "If I have seen further than others, it is by standing upon the shoulders of giants."

    The Two Strategies for Functional Decomposition of Computer Programs

    Private Methods

    When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit:

        public void PaintAHouse()
        {
            // all the things required to paint a house ...
        }

    We decompose the problem by breaking it into parts:

        public void PaintAHouse()
        {
            PaintUndercoat();
            PaintTopcoat();
        }

        private void PaintUndercoat()
        {
            // everything required to paint the undercoat
        }

        private void PaintTopcoat()
        {
            // everything required to paint the topcoat
        }

    The problem can be recursively decomposed until a sufficiently granular level of detail is reached:

        public void PaintAHouse()
        {
            PaintUndercoat();
            PaintTopcoat();
        }

        private void PaintUndercoat()
        {
            PrepareSurface();
            FetchUndercoat();
            ApplyUndercoat();
        }

        private void PaintTopcoat()
        {
            FetchPaint();
            ApplyTopcoat();
        }

    According to Wikipedia, at least one computer programmer has referred to this process as "the art of subroutining". The practical issues that I have encountered when using private methods for decomposition are: to preserve the top-level API, all of the steps must be private, which means they can't easily be tested; and the private methods often have little cohesion beyond forming part of the same solution.

    Decomposing to Classes

    The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because each lives in its own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition:

        public class HousePainter
        {
            private readonly UndercoatPainter undercoatPainter = new UndercoatPainter();
            private readonly TopcoatPainter topcoatPainter = new TopcoatPainter();

            public void PaintAHouse()
            {
                undercoatPainter.Paint();
                topcoatPainter.Paint();
            }
        }

    Summary

    When decomposing a problem there is more than one way to represent the sub-problems. Using private methods keeps the logic in one place and prevents a proliferation of classes (in keeping with the four rules of simple design), but decomposing to classes is more easily testable and more compatible with the Single Responsibility Principle.

    Read the article

  • Hide GRUB2 menu UNLESS you hold down Shift key: how to make this happen?

    - by Shasteriskt
    I have an Ubuntu/Windows 7 dual-boot setup, and I would like my laptop to boot into Windows 7 unless I hold down the Shift key right after boot to bring up the GRUB2 menu, from which I can choose Ubuntu. I researched GRUB2 and the options in /etc/default/grub, and I have tried combinations of the GRUB_TIMEOUT and GRUB_HIDDEN_TIMEOUT values, but to no avail. I tried setting GRUB_HIDDEN_TIMEOUT higher than GRUB_TIMEOUT, thinking that both countdowns start simultaneously, but no - GRUB_TIMEOUT only starts after the other is done. Is this behavior achievable? If so, how?
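    (For what it's worth, Ubuntu's GRUB2 packaging generates a Shift-key check whenever a hidden-menu timeout is configured, so the usual recipe is along these lines in /etc/default/grub; a sketch, assuming the Windows 7 entry is the fifth one listed by update-grub:)

        GRUB_DEFAULT=4                  # zero-based index of the Windows 7 menu entry
        GRUB_HIDDEN_TIMEOUT=0           # boot the default immediately, menu hidden
        GRUB_HIDDEN_TIMEOUT_QUIET=true  # no countdown prompt while hidden
        GRUB_TIMEOUT=10                 # countdown once the menu is shown (Shift held)

    (Then run sudo update-grub. Holding Shift during the hidden window reveals the menu; otherwise the default entry boots straight away.)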

    Read the article

  • Is it OK to use a non-primary key as the id in a Rails resource?

    - by nPn
    I am getting ready to set up a resource for some new API calls to my Rails application. I am planning on calling the resource devices, i.e. resources :devices. It is going to represent Android mobile devices. I know this will get me routes such as GET devices/:id. In most cases :id would be an integer representing the primary key, and in the controller we would use :id as such: GET devices/1, @device = Device.find(params[:id]). In this case I would like :id to be the google_cloud_messaging_reg_id, so requests would look like this: GET devices/some_long_gcm_id, and then in the controller just use params[:id] to look up the device by the GCM registration id. This seems more natural, since the device will know its GCM id rather than its Rails integer id. Are there any reasons I should avoid doing this?
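    (The pieces that make this work in Rails are overriding to_param on the model and looking the record up by that column in the controller; a minimal sketch, with the finder name following the question's column:)

        # app/models/device.rb
        class Device < ActiveRecord::Base
          # URL helpers like device_path(@device) will now emit the GCM id.
          def to_param
            google_cloud_messaging_reg_id
          end
        end

        # app/controllers/devices_controller.rb
        class DevicesController < ApplicationController
          def show
            # params[:id] carries the GCM id, so don't use Device.find here.
            @device = Device.find_by_google_cloud_messaging_reg_id!(params[:id])
          end
        end

    (One caveat worth checking: the value must be URL-safe, since it is embedded in the path.)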

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file; you can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command-line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this:
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service:
    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure we start with an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.
    In order to use the table service we need the storage account name and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable for the table name. In order to work with the table service I need to create a storage client for it.
    This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and called "createTableService", specifying my storage account name and key:

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new handler for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    Once we have loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service, which can be done by deleting and creating the table through the table client created previously:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK exposes its methods in the callback pattern; in fact, almost all modules in Node.js use it. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function that is performed when the table has been deleted or the operation has failed. Under the hood the azure module performs the table deletion asynchronously on the POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table-creation code inside the deletion callback: if we placed the creation code after the deletion call, the two would run in parallel. Next, for each record from WASD I created an entity and inserted it into the table service, and finally I sent the response to the browser. Can you find the bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise send the error message. To do so I need to import another module named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous insertion of the table entities. The first argument of "forEach" is the array to iterate over; the second is the operation to perform for each item; and the third is invoked once all items have been processed or any error has occurred, and this is where we can send our response to the browser:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally and now we can see the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key. (We can also provide more complex query criteria.) In the code below I query an entity by partition key and row key, and return the proper localization value in the response:

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    I then tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you will know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. That means we can define these values in the CSCFG and CSDEF files and let the runtime retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for both cloud and local use, and we can also use an endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment" that provides the functionality to retrieve configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve the values and initialize the table client:

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    This way we don't need to amend the code when moving between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK:

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. In this case, since the entry point was the worker role class and Node.js was started inside the role, the named pipe was established between the worker role class and the service runtime, so our Node.js application could not use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, containing an EntryPoint element that specifies the Node.js command line. The Node.js application then has the connection to the service runtime and is able to read the configuration. Starting the Node.js application in the local emulator, we can see it retrieved the connection and storage account settings for local use.
    And if we publish our application to Azure, it works with WASD and the storage service through the configuration for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information, and that in order to make the service runtime available to my Node.js application I needed to create an EntryPoint element in the CSDEF file with "node.exe" as the entry point. I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is simple, easy to learn and deploy, and built on a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL Server and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support creating the storage client from a storage connection string. But Microsoft is working on making them easier to use and on adding more features and functionality.

    PS: you can download the source code here, and the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How do I make this rendering thread run together with the main one?

    - by funk
    I'm developing an Android game and need to show an animation of an exploding bomb. It's a sprite sheet with 1 row and 13 different images; each image should be displayed in sequence, 200 ms apart. There is one thread running for the entire game:

        package com.android.testgame;

        import android.graphics.Canvas;

        public class GameLoopThread extends Thread {
            static final long FPS = 10; // 10 frames per second
            private final GameView view;
            private boolean running = false;

            public GameLoopThread(GameView view) {
                this.view = view;
            }

            public void setRunning(boolean run) {
                running = run;
            }

            @Override
            public void run() {
                long ticksPS = 1000 / FPS;
                long startTime;
                long sleepTime;
                while (running) {
                    Canvas c = null;
                    startTime = System.currentTimeMillis();
                    try {
                        c = view.getHolder().lockCanvas();
                        synchronized (view.getHolder()) {
                            view.onDraw(c);
                        }
                    } finally {
                        if (c != null) {
                            view.getHolder().unlockCanvasAndPost(c);
                        }
                    }
                    sleepTime = ticksPS - (System.currentTimeMillis() - startTime);
                    try {
                        if (sleepTime > 0) {
                            sleep(sleepTime);
                        } else {
                            sleep(10);
                        }
                    } catch (Exception e) {}
                }
            }
        }

    As far as I know I would have to create a second thread for the bomb:

        package com.android.testgame;

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Rect;

        public class Bomb {
            private final Bitmap bmp;
            private final int width;
            private final int height;
            private int currentFrame = 0;
            private static final int BMPROWS = 1;
            private static final int BMPCOLUMNS = 13;
            private int x = 0;
            private int y = 0;

            public Bomb(GameView gameView, Bitmap bmp) {
                this.width = bmp.getWidth() / BMPCOLUMNS;
                this.height = bmp.getHeight() / BMPROWS;
                this.bmp = bmp;
                x = 250;
                y = 250;
            }

            private void update() {
                currentFrame++;
                new BombThread().start();
            }

            public void onDraw(Canvas canvas) {
                update();
                int srcX = currentFrame * width;
                int srcY = height;
                Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
                Rect dst = new Rect(x, y, x + width, y + height);
                canvas.drawBitmap(bmp, src, dst, null);
            }

            class BombThread extends Thread {
                @Override
                public void run() {
                    try {
                        sleep(200);
                    } catch (InterruptedException e) {
                    }
                }
            }
        }

    The threads would then have to run simultaneously. How do I do this?
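    (For what it's worth, a common alternative to spawning a thread per frame is to advance the animation from the existing game loop based on elapsed time; a minimal sketch of what Bomb.update() could look like under that approach:)

        // Advance one frame every 200 ms, driven by the existing
        // GameLoopThread's onDraw calls instead of a second thread.
        private static final long FRAME_DURATION_MS = 200;
        private long lastFrameChange = 0;

        private void update() {
            long now = System.currentTimeMillis();
            if (now - lastFrameChange >= FRAME_DURATION_MS) {
                currentFrame = (currentFrame + 1) % BMPCOLUMNS;
                lastFrameChange = now;
            }
        }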

    Read the article

  • hibernate criteria list problem [migrated]

    - by user1022676
    I have a user DAO:

        @Entity
        @Table(name = "EBIGUSERTIM")
        public class EbigUser {
            private String id;
            private Integer source;
            private String entryscheme;
            private String fullName;
            private String email;
            private Long flags;
            private String status;
            private String createdBy;
            private Date createdStamp;
            private String modifiedBy;
            private Date modifiedStamp;

            @Id
            @Column(name = "ID")
            public String getId() { return id; }
            public void setId(String id) { this.id = id; }

            @Id
            @Column(name = "SOURCE")
            public Integer getSource() { return source; }
            public void setSource(Integer source) { this.source = source; }

            @Column(name = "ENTRYSCHEME")
            public String getEntryscheme() { return entryscheme; }
            public void setEntryscheme(String entryscheme) { this.entryscheme = entryscheme; }

            @Column(name = "FULLNAME")
            public String getFullName() { return fullName; }
            public void setFullName(String fullName) { this.fullName = fullName; }

            @Column(name = "EMAIL")
            public String getEmail() { return email; }
            public void setEmail(String email) { this.email = email; }

            @Column(name = "FLAGS")
            public Long getFlags() { return flags; }
            public void setFlags(Long flags) { this.flags = flags; }

            @Column(name = "STATUS")
            public String getStatus() { return status; }
            public void setStatus(String status) { this.status = status; }

            @Column(name = "CREATEDBY")
            public String getCreatedBy() { return createdBy; }
            public void setCreatedBy(String createdBy) { this.createdBy = createdBy; }

            @Column(name = "CREATEDSTAMP")
            public Date getCreatedStamp() { return createdStamp; }
            public void setCreatedStamp(Date createdStamp) { this.createdStamp = createdStamp; }

            @Column(name = "MODIFIEDBY")
            public String getModifiedBy() { return modifiedBy; }
            public void setModifiedBy(String modifiedBy) { this.modifiedBy = modifiedBy; }

            @Column(name = "MODIFIEDSTAMP")
            public Date getModifiedStamp() { return modifiedStamp; }
            public void setModifiedStamp(Date modifiedStamp) { this.modifiedStamp = modifiedStamp; }
        }

    I am selecting 2 rows out of the DB. The SQL works: select * from ebigusertim where id='blah' returns 2 distinct rows. But when I query the data using Hibernate, it appears that a separate object is not being allocated for each entry in the list, so I get 2 entries in the list pointing to the same object:

        Criteria userCriteria = session.createCriteria(EbigUser.class);
        userCriteria.add(Restrictions.eq("id", id));
        userlist = userCriteria.list();
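    (Possibly relevant to the symptom: the entity declares two @Id properties, which JPA treats as a composite key that needs an id class; without one, rows sharing the same first key component can be conflated into one instance in the session. A sketch of the usual @IdClass arrangement, assuming ID and SOURCE together identify a row:)

        import java.io.Serializable;
        import java.util.Objects;

        // Id class pairing the two @Id properties; field names must match the entity's.
        public class EbigUserId implements Serializable {
            private String id;
            private Integer source;

            public EbigUserId() {
            }

            @Override
            public boolean equals(Object o) {
                if (!(o instanceof EbigUserId)) {
                    return false;
                }
                EbigUserId other = (EbigUserId) o;
                return Objects.equals(id, other.id) && Objects.equals(source, other.source);
            }

            @Override
            public int hashCode() {
                return Objects.hash(id, source);
            }
        }

    (The entity then adds @IdClass(EbigUserId.class) next to its @Entity annotation.)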

    Read the article

  • Specified key is not a valid size for this algorithm...

    - by phenevo
    Hi, I have a problem with this code:

        RijndaelManaged rijndaelCipher = new RijndaelManaged();
        // Set key and IV
        rijndaelCipher.Key = Convert.FromBase64String("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz012345678912");
        rijndaelCipher.IV = Convert.FromBase64String("1234567890123456789012345678901234567890123456789012345678901234");

    It throws: "Specified key is not a valid size for this algorithm." and "Specified initialization vector (IV) does not match the block size for this algorithm." What's wrong with these strings? Could you give me some example strings that work?
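    (For reference: Convert.FromBase64String turns 4 base64 characters into 3 bytes, so both 64-character strings above decode to 48 bytes, while Rijndael accepts keys of 16, 24 or 32 bytes and, with the default 128-bit block size, a 16-byte IV. A sketch with correctly sized material; the strings are just base64-encoded ASCII digits:)

        using System;
        using System.Security.Cryptography;

        class RijndaelSizes
        {
            static void Main()
            {
                using (RijndaelManaged rijndael = new RijndaelManaged())
                {
                    // base64 of "01234567890123456789012345678901" -> 32 bytes (256-bit key)
                    rijndael.Key = Convert.FromBase64String("MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTIzNDU2Nzg5MDE=");
                    // base64 of "0123456789012345" -> 16 bytes, matching the 128-bit block
                    rijndael.IV = Convert.FromBase64String("MDEyMzQ1Njc4OTAxMjM0NQ==");
                    Console.WriteLine(rijndael.Key.Length); // 32
                    Console.WriteLine(rijndael.IV.Length);  // 16
                }
            }
        }

    (In real use the key should come from a key-derivation function or a cryptographic RNG rather than printable text, of course.)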

    Read the article

  • How to dismiss keyboard for UITextView with return key?

    - by iPhoney
    In IB's library, the introduction tells us that when the return key is pressed, the keyboard for a UITextView will disappear. But in practice the return key only acts as '\n'. I can add a button and use [txtView resignFirstResponder] to hide the keyboard, but is there a way to attach that action to the return key on the keyboard, so that I don't need to add another button?
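    (The usual way to get this behavior, for reference, is to intercept the newline in the text view's delegate; a sketch, assuming the controller is set as the UITextView's delegate:)

        // UITextViewDelegate: treat Return as "done" rather than a newline.
        - (BOOL)textView:(UITextView *)textView
            shouldChangeTextInRange:(NSRange)range
                    replacementText:(NSString *)text
        {
            if ([text isEqualToString:@"\n"]) {
                [textView resignFirstResponder]; // dismiss the keyboard
                return NO;                       // swallow the newline
            }
            return YES;
        }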

    Read the article

  • How do I insert a row with a unique key into a table?

    - by Ben McCormack
    I want to insert data into a table where I don't know the next unique key that I need. I'm not sure how to format my INSERT query so that the value of the key field is 1 greater than the current maximum value for the key in the table. I know this is a hack, but I'm just running a quick test against a database and need to make sure I always send a unique key. Here's the SQL I have so far:

        INSERT INTO [CMS2000].[dbo].[aDataTypesTest]
                   ([KeyFld]
                   ,[Int1])
             VALUES
                   ((SELECT MAX([KeyFld]) FROM [dbo].[aDataTypesTest]) + 1
                   ,1)

    which errors out with:

        Msg 1046, Level 15, State 1, Line 5
        Subqueries are not allowed in this context. Only scalar expressions are allowed.

    I'm not able to modify the underlying database table. What do I need to do to ensure a unique insert in my INSERT SQL code?
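    (One workaround under these constraints is to drive the INSERT from a SELECT, where the aggregate is allowed as a plain expression; a sketch, with ISNULL covering the empty-table case. Note the MAX()+1 trick is race-prone under concurrent inserts, so it really does suit a quick test only:)

        INSERT INTO [CMS2000].[dbo].[aDataTypesTest] ([KeyFld], [Int1])
        SELECT ISNULL(MAX([KeyFld]), 0) + 1, 1
        FROM [dbo].[aDataTypesTest];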

    Read the article

  • Android java.lang.VerifyError for private method with annotated argument.

    - by alex2k8
    I have a very simple project that compiles but can't be started on the emulator. The problem is with this method:

        private void bar(@Some String a) {}   // java.lang.VerifyError

    The issue can be avoided if the annotation is removed:

        private void bar(String a) {}         // OK

    or if the method visibility is changed:

        void bar(@Some String a) {}           // OK
        public void bar(@Some String a) {}    // OK
        protected void bar(@Some String a) {} // OK

    Any idea what is wrong with the original method? Is this a Dalvik bug, or something else? If someone would like to experiment with the code, here it is:

    Test.java:

        public class Test {
            private void bar(@Some String a) {}

            public void foo() {
                bar(null);
            }
        }

    Some.java:

        public @interface Some {}

    MainActivity.java:

        public class MainActivity extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                new Test().foo();
            }
        }

    Stack trace:

        ERROR/dalvikvm(1358): Could not find method com.my.Test.bar, referenced from method com.my.Test.foo
        WARN/dalvikvm(1358): VFY: unable to resolve direct method 11: Lcom/my/Test;.bar (Ljava/lang/String;)V
        WARN/dalvikvm(1358): VFY: rejecting opcode 0x70 at 0x0001
        WARN/dalvikvm(1358): VFY: rejected Lcom/my/Test;.foo ()V
        WARN/dalvikvm(1358): Verifier rejected class Lcom/my/Test;
        DEBUG/AndroidRuntime(1358): Shutting down VM
        WARN/dalvikvm(1358): threadid=3: thread exiting with uncaught exception (group=0x4000fe70)
        ERROR/AndroidRuntime(1358): Uncaught handler: thread main exiting due to uncaught exception
        ERROR/AndroidRuntime(1358): java.lang.VerifyError: com.my.Test
        ERROR/AndroidRuntime(1358):     at com.my.MainActivity.onCreate(MainActivity.java:13)
        ERROR/AndroidRuntime(1358):     at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2231)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2284)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread.access$1800(ActivityThread.java:112)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1692)
        ERROR/AndroidRuntime(1358):     at android.os.Handler.dispatchMessage(Handler.java:99)
        ERROR/AndroidRuntime(1358):     at android.os.Looper.loop(Looper.java:123)
        ERROR/AndroidRuntime(1358):     at android.app.ActivityThread.main(ActivityThread.java:3948)
        ERROR/AndroidRuntime(1358):     at java.lang.reflect.Method.invokeNative(Native Method)
        ERROR/AndroidRuntime(1358):     at java.lang.reflect.Method.invoke(Method.java:521)
        ERROR/AndroidRuntime(1358):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:782)
        ERROR/AndroidRuntime(1358):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:540)
        ERROR/AndroidRuntime(1358):     at dalvik.system.NativeStart.main(Native Method)

    Read the article

  • Cannot truncate table because it is being referenced by a FOREIGN KEY constraint?

    - by ctrlShiftBryan
    Using MSSQL 2005: can I truncate a table that is referenced by a foreign key constraint if I first truncate the referencing child table (the table on the FOREIGN KEY side of the relationship)? I know I can use a DELETE without a WHERE clause and then RESEED the identity, or remove the FK, truncate, and recreate it, but I thought that as long as you truncate the child table first you'd be OK. Instead I'm getting a "Cannot truncate table 'TableName' because it is being referenced by a FOREIGN KEY constraint." error.
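    (For reference, SQL Server refuses TRUNCATE on any table referenced by an enabled foreign key, even when the referencing table is empty; only DELETE, or dropping/disabling the constraint, works. A sketch of the DELETE-and-reseed alternative, with a hypothetical table name:)

        -- TRUNCATE is disallowed on the referenced table, so fall back to:
        DELETE FROM dbo.ParentTable;                    -- logged, row-by-row
        DBCC CHECKIDENT ('dbo.ParentTable', RESEED, 0); -- next identity value will be 1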

    Read the article

  • What key on a keyboard can be detected in the browser but won't show up in a text input?

    - by Brady
    I know this one is going to sound weird, but hear me out. I have a heart rate monitor that is hooked up to the computer like a keyboard; each time it detects a pulse it sends a keystroke, which is then picked up by a web-based game running in the browser. Currently I'm sending the following keystroke: (`). The browser-based game detects that key fine, and on any data-input screen the (`) character is ignored using a bit of JavaScript. The problem comes when I leave the browser and use the operating system in other ways: the (`) starts appearing everywhere. So my question is: is there a key that can be sent via the keyboard that is detectable in the browser but won't have any notable output on the screen if I switch to other applications? I'm kind of looking for a null key: a key that is detectable in the browser but has no effect on the browser or any other system application if pressed.
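    (One candidate, for what it's worth, is a key that fires keyboard events but produces no character anywhere, such as Scroll Lock or the F13-F24 range on keyboards that have them; a sketch of detecting it, with the handler name being hypothetical:)

        // Scroll Lock (keyCode 145) raises keydown in the browser but types
        // nothing into text fields or other applications.
        document.addEventListener("keydown", function (e) {
            if (e.keyCode === 145) {
                registerPulse(); // hypothetical game callback
            }
        });

        function registerPulse() {
            console.log("pulse at " + new Date().toISOString());
        }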

    Read the article

  • With VIM, use both snipMate and pydiction together (share the <tab> key?)

    - by thornomad
    I am trying to use snipMate and pydiction in Vim together; however, both use the <tab> key to perform their genius auto-completion/snippet-rendering goodness that I so desire. When pydiction is installed, snipMate stops working. I assume it's because they can't both own the <tab> key. How can I get them to work together? I wouldn't mind mapping one of them to a different key, but I am not really sure how to do this (maybe pydiction to the <ctrl-n> key so it mimics Vim's autocomplete?). Here is the relevant .vimrc:

        filetype indent plugin on
        autocmd FileType python set ft=python.django
        autocmd FileType html set ft=html.django_template
        let g:pydiction_location = '~/.vim/ftplugin/pydiction-1.2/complete-dict'
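    (One avenue worth trying: classic snipMate's <tab> mapping just calls its TriggerSnippet() function, so it can be rebound to leave <tab> for pydiction; a sketch for the .vimrc, with <C-J> as an arbitrary choice:)

        " Give snipMate a different trigger so pydiction can keep <Tab>.
        inoremap <silent> <C-J> <C-R>=TriggerSnippet()<CR>
        snoremap <silent> <C-J> <Esc>i<Right><C-R>=TriggerSnippet()<CR>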

    Read the article

  • 404 when getting private YouTube video even when logged in with the owner's account using gdata-python

    - by Gonzalo
    If a YouTube video is set as private and I try to fetch it using the gdata Python API, a 404 RequestError is raised, even though I have done a programmatic login with the account that owns that video:

        from gdata.youtube import service

        yt_service = service.YouTubeService(email=my_email,
                                            password=my_password,
                                            client_id=my_client_id,
                                            source=my_source,
                                            developer_key=my_developer_key)
        yt_service.ProgrammaticLogin()
        yt_service.GetYouTubeVideoEntry(video_id='IcVqemzfyYs')

        ---------------------------------------------------------------------------
        RequestError                              Traceback (most recent call last)
        <ipython console>
        /usr/lib/python2.4/site-packages/gdata/youtube/service.pyc in GetYouTubeVideoEntry(self, uri, video_id)
            203     elif video_id and not uri:
            204         uri = '%s/%s' % (YOUTUBE_VIDEO_URI, video_id)
        --> 205     return self.Get(uri, converter=gdata.youtube.YouTubeVideoEntryFromString)
            206
            207 def GetYouTubeContactFeed(self, uri=None, username='default'):

        /usr/lib/python2.4/site-packages/gdata/service.pyc in Get(self, uri, extra_headers, redirects_remaining, encoding, converter)
           1100         'body': result_body}
           1101     else:
        -> 1102         raise RequestError, {'status': server_response.status,
           1103             'reason': server_response.reason, 'body': result_body}
           1104

        RequestError: {'status': 404, 'body': 'Video not found', 'reason': 'Not Found'}

    This happens every time, unless I go into my YouTube account (through the YouTube website) and set the video public; after that I can set it private and back to public using the Python API. Am I missing a step, or is there another (or any) way to fetch a YouTube video set as private from the API? Thanks in advance.
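    (One thing worth trying, based on how the YouTube GData API scoped private entries at the time: request the video through the authenticated user's own uploads feed rather than the public videos feed. A sketch; the feed URI pattern is the documented one, but treat it as an assumption:)

        # Private entries are visible in the owner's uploads feed when the
        # request is authenticated as that owner.
        video_id = 'IcVqemzfyYs'
        uri = 'http://gdata.youtube.com/feeds/api/users/default/uploads/%s' % video_id
        entry = yt_service.GetYouTubeVideoEntry(uri=uri)
        print entry.title.text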

    Read the article

  • Using Hadoop, are my reducers guaranteed to get all the records with the same key?

    - by samg
    I'm running a Hadoop job (using Hive, actually) which is supposed to "uniq" lines in a lot of text files; more specifically, it chooses the most recently timestamped record for each key in the reduce step. Does Hadoop guarantee that every record with the same key, output by the map step, will go to a single reducer, even if there are many reducers running across a cluster? I'm worried that the mapper output might be split after the shuffle happens, in the middle of a set of records with the same key.
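    (For context: the guarantee comes from partitioning, since each record's reducer is chosen purely as a function of its key. Hadoop's stock HashPartitioner is essentially the following, so equal keys always land in the same partition and hence the same reducer:)

        import org.apache.hadoop.mapreduce.Partitioner;

        // Equal keys hash to equal partition numbers, so all records sharing
        // a key are routed to the same reduce task during the shuffle.
        public class HashPartitioner<K, V> extends Partitioner<K, V> {
            @Override
            public int getPartition(K key, V value, int numReduceTasks) {
                return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
            }
        }

    (The caveat is a custom partitioner or a non-deterministic hashCode implementation, either of which can break the assumption.)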

    Read the article

  • ibatis throwing NullPointerException

    - by Prashant P
    I am trying to test iBATIS against the DB, and I get a NullPointerException. Below are the class and the iBATIS bean config.

        <select id="getByWorkplaceId" parameterClass="java.lang.Integer" resultMap="result">
            select * from WorkDetails where workplaceCode=#workplaceCode#
        </select>

        <select id="getWorkplace" resultClass="com.ibatis.text.WorkDetails">
            select * from WorkDetails
        </select>

    POJO:

        public class WorkplaceDetail implements Serializable {
            private static final long serialVersionUID = -6760386803958725272L;
            private int code;
            private String plant;
            private String compRegNum;
            private String numOfEmps;
            private String typeIndst;
            private String typeProd;
            private String note1;
            private String note2;
            private String note3;
            private String note4;
            private String note5;
        }

    DAO implementation:

        public class WorkplaceDetailImpl implements WorkplaceDetailsDAO {
            private SqlMapClient sqlMapClient;

            public void setSqlMapClient(SqlMapClient sqlMapClient) {
                this.sqlMapClient = sqlMapClient;
            }

            @Override
            public WorkplaceDetail getWorkplaceDetail(int code) {
                WorkplaceDetail workplaceDetail = new WorkplaceDetail();
                try {
                    // exception is thrown on the following line
                    workplaceDetail = (WorkplaceDetail) this.sqlMapClient.queryForObject("workplaceDetail.getByWorkplaceId", code);
                } catch (SQLException sqlex) {
                    sqlex.printStackTrace();
                }
                return workplaceDetail;
            }
        }

    Test code:

        public class TestDAO {
            public static void main(String args[]) throws Exception {
                WorkplaceDetail wd = new WorkplaceDetail(126, "Hoonkee", "1234", "22", "Service", "Tele",
                        "hsgd", "hsgd", "hsgd", "hsgd", "hsgd");
                WorkplaceDetailImpl impl = new WorkplaceDetailImpl();
                impl.getWorkplaceDetail(wd.getCode());   // exception originates here
                impl.saveOrUpdateWorkplaceDetails(wd);
                System.out.println("dhsd" + impl);
            }
        }

    I want to select and insert. I have marked the points of the exception with comments in the code above. The exception:

        Exception in thread "main" java.lang.NullPointerException
            at com.ibatis.text.WorkplaceDetailImpl.getWorkplaceDetail(WorkplaceDetailImpl.java:19)
            at com.ibatis.text.TestDAO.main(TestDAO.java:11)
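    (A likely culprit, for what it's worth: TestDAO never sets a SqlMapClient on the impl, so this.sqlMapClient is null when queryForObject runs. A sketch of the standalone iBATIS bootstrap; the config file name is an assumption:)

        import java.io.Reader;
        import com.ibatis.common.resources.Resources;
        import com.ibatis.sqlmap.client.SqlMapClient;
        import com.ibatis.sqlmap.client.SqlMapClientBuilder;

        public class TestDAOWithClient {
            public static void main(String[] args) throws Exception {
                // Build the client from the sqlmap config on the classpath.
                Reader reader = Resources.getResourceAsReader("SqlMapConfig.xml");
                SqlMapClient sqlMapClient = SqlMapClientBuilder.buildSqlMapClient(reader);

                WorkplaceDetailImpl impl = new WorkplaceDetailImpl();
                impl.setSqlMapClient(sqlMapClient); // without this, queryForObject NPEs
                System.out.println(impl.getWorkplaceDetail(126));
            }
        }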

    Read the article

  • Inserting data from an object into a database

    - by paul
    I am facing the following design/implementation dilemma. I have a class Customer, shown below (with getters and setters omitted). I would like to insert the value of the Customer into a "Customer" table of a database, but Customer has an address which is of type "Address". How do I go about inserting this field into the database? (I am using sqlite3.) I thought of writing a separate table "Address(customerId, doorNo, city, state, country, pinCode)", but I am having second thoughts about generating the primary key (customerId), which should be the same for both the "Customer" and "Address" tables. The sqlite3 FAQ states that I can declare the field "INTEGER PRIMARY KEY" to have it auto-generate a number, but if I do that in the Customer table, I would have to retrieve the same id to be used in the Address table. This kinda looks wrong to me :-? There should be an elegant method to solve this. Any ideas would be much appreciated. Thanks in advance.

        import java.io.*;
        import java.sql.*;

        class Customer {
            private String id;
            private String name;
            private Address address;
            private Connection connection;
            private ResultSet resultSet;
            private PreparedStatement preparedStatement;

            public void insertToDatabase() {
            }
        }

        class Address {
            private String doorNumber;
            private String streetName;
            private String cityName;
            private String districtName;
            private String stateName;
            private String countryName;
            private long pinCode;
        }
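    (A common pattern here, sketched below: insert the customer row, read back the generated key via JDBC, and use it as the Address row's foreign key, all in one transaction. Table and column names follow the question; everything else, including getters on Address, is an assumption:)

        // Inside insertToDatabase(), assuming 'connection' is an open SQLite connection.
        connection.setAutoCommit(false);
        try (PreparedStatement cust = connection.prepareStatement(
                "INSERT INTO Customer(name) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS)) {
            cust.setString(1, name);
            cust.executeUpdate();
            try (ResultSet keys = cust.getGeneratedKeys()) {
                keys.next();
                long customerId = keys.getLong(1); // SQLite's last_insert_rowid()
                try (PreparedStatement addr = connection.prepareStatement(
                        "INSERT INTO Address(customerId, doorNo, city, state, country, pinCode) "
                        + "VALUES (?, ?, ?, ?, ?, ?)")) {
                    addr.setLong(1, customerId);
                    addr.setString(2, address.getDoorNumber());
                    addr.setString(3, address.getCityName());
                    addr.setString(4, address.getStateName());
                    addr.setString(5, address.getCountryName());
                    addr.setLong(6, address.getPinCode());
                    addr.executeUpdate();
                }
            }
        }
        connection.commit();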

    Read the article

  • ssh-rsa public key validation using a regular expression.

    - by Warlax
    What regular expression can I use (if any) to validate that a given string is a legal ssh-rsa public key? I only need to validate the actual key; I don't care about the key type that precedes it or the username comment after it. Ideally, someone will also provide the Python code to run the regex validation. Thanks.
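    (A loose sketch of such a check: the key field is base64, and for ssh-rsa keys the decoded blob starts with the length-prefixed string "ssh-rsa", which makes the base64 always begin with "AAAAB3NzaC1yc2E". This validates shape only, not that the key actually parses; a stricter validator would base64-decode and walk the length-prefixed fields:)

        import re

        # Base64 body of an ssh-rsa key: fixed prefix, base64 alphabet, optional padding.
        SSH_RSA_KEY_RE = re.compile(r'^AAAAB3NzaC1yc2E[0-9A-Za-z+/]+={0,2}$')

        def looks_like_ssh_rsa_key(b64_blob):
            return SSH_RSA_KEY_RE.match(b64_blob) is not None

        # Example (truncated key material, for illustration only):
        print(looks_like_ssh_rsa_key("AAAAB3NzaC1yc2EAAAADAQABAAABAQDf8xyz"))  # True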

    Read the article

  • Remove then Query fails in JPA (deleted entity passed to persist)

    - by nag
    I have two entities, MobeeCustomer and CustomerRegion. I want to remove a CustomerRegion: first I set its join column (the mobeeCustomer reference) to null, then remove the object through the EntityManager, but I am getting an exception.

    MobeeCustomer:

        public class MobeeCustomer implements Serializable {
            private Long id;
            private String custName;
            private String address;
            private String phoneNo;
            private Set<CustomerRegion> customerRegion = new HashSet<CustomerRegion>(0);

            @OneToMany(cascade = { CascadeType.PERSIST, CascadeType.REMOVE },
                       fetch = FetchType.LAZY, mappedBy = "mobeeCustomer")
            public Set<CustomerRegion> getCustomerRegion() {
                return customerRegion;
            }

            public void setCustomerRegion(Set<CustomerRegion> customerRegion) {
                this.customerRegion = customerRegion;
            }
        }

    CustomerRegion:

        public class CustomerRegion implements Serializable {
            private Long id;
            private String custName;
            private String description;
            private String createdBy;
            private Date createdOn;
            private String updatedBy;
            private Date updatedOn;
            private MobeeCustomer mobeeCustomer;

            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "MOBEE_CUSTOMER")
            public MobeeCustomer getMobeeCustomer() {
                return mobeeCustomer;
            }

            public void setMobeeCustomer(MobeeCustomer mobeeCustomer) {
                this.mobeeCustomer = mobeeCustomer;
            }
        }

    Sample code:

        for (CustomerRegion region : deletedRegionList) {
            region.setMobeeCustomer(null);
            getEntityManager().remove(region);
        }

    Stack trace (please suggest how I should remove the CustomerRegion objects):

        javax.persistence.EntityNotFoundException: deleted entity passed to persist: [com.manam.mobee.persist.entity.CustomerRegion#<null>]
            at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:613)
            at org.hibernate.ejb.AbstractEntityManagerImpl.flush(AbstractEntityManagerImpl.java:299)
            at org.jboss.seam.persistence.EntityManagerProxy.flush(EntityManagerProxy.java:92)
            at org.jboss.seam.framework.EntityHome.update(EntityHome.java:64)
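    (The usual cause, for what it's worth: with CascadeType.PERSIST on the collection, any removed child still reachable from its parent's set gets re-persisted at flush, producing exactly this exception. A sketch of removing the link from both sides first:)

        for (CustomerRegion region : deletedRegionList) {
            MobeeCustomer owner = region.getMobeeCustomer();
            if (owner != null) {
                owner.getCustomerRegion().remove(region); // break parent -> child link
            }
            region.setMobeeCustomer(null);                // break child -> parent link
            getEntityManager().remove(region);
        }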

    Read the article
