Search Results

Search found 11100 results on 444 pages for 'xt 20'.


  • merging indexed array in Python

    - by leon
    Suppose I have two numpy arrays of the form x = [[1,2] [2,4] [3,6] [4,NaN] [5,10]] and y = [[0,-5] [1,0] [2,5] [5,20] [6,25]]. Is there an efficient way to merge them on the first column (the keys) so that I get xmy = [[0, NaN, -5] [1, 2, 0] [2, 4, 5] [3, 6, NaN] [4, NaN, NaN] [5, 10, 20] [6, NaN, 25]]? I can implement a simple function that searches for each index, but that is not elegant and potentially inefficient for a lot of arrays and large dimensions. Any pointer is appreciated.
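
    A minimal sketch of one way to do this with numpy's set routines, assuming the keys in each array are unique: take the union of the keys, pre-fill the output with NaN, and scatter each array's values into place with searchsorted.

        import numpy as np

        x = np.array([[1, 2], [2, 4], [3, 6], [4, np.nan], [5, 10]])
        y = np.array([[0, -5], [1, 0], [2, 5], [5, 20], [6, 25]])

        keys = np.union1d(x[:, 0], y[:, 0])        # all keys, sorted: 0..6
        xmy = np.full((keys.size, 3), np.nan)      # columns: key, x value, y value
        xmy[:, 0] = keys
        xmy[np.searchsorted(keys, x[:, 0]), 1] = x[:, 1]
        xmy[np.searchsorted(keys, y[:, 0]), 2] = y[:, 1]
        print(xmy)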

    Read the article

  • Haskell Input & Passing Values to Functions

    - by Pradeep
    putStrLn "Enter the Artist Name"
        art <- getLine
        putStrLn "Enter the Number of CD's"
        num <- getLine
        let test = buyItem currentStockBase art num
        printListIO (showcurrentList test)

    The call I need to make is buyItem currentStockBase "Akon" 20, but I want "Akon" to come from art and 20 to come from num. It gives me this error:

        ERROR file:.\Project2.hs:126 - Type error in application
        *** Expression     : buyItem currentStockBase art num
        *** Term           : num
        *** Type           : [Char]
        *** Does not match : Int

    Please help me.

    Read the article

  • Data types for validation

    - by nevalu
    How can I create a new data type that checks/validates its schema when a new variable of that type is created? For example, to validate that a string has 20 characters, I tried:

        // Format: 2006-01-12T06:06:06Z
        func date(str string) {
            if len(str) != 20 {
                fmt.Println("error")
            }
        }

        var Date = date()

        type Account struct {
            domain   string
            username string
            created  Date
        }

    but it fails because Date is not a type.

    Read the article

  • PHP: Remove the first and last item of the array

    - by phpBOY
    Hi, suppose I have this array: $array = array('10', '20', '30.30', '40', '50'); Questions: What is the fastest/easiest way to remove the first item from the above array? What is the fastest/easiest way to remove the last item from the above array? The resulting array should contain only these values: '20', '30.30', '40'.

    Read the article

  • help with sql-server query - dates

    - by Gold
    Hi, I have this table:

        id | date
        1  | 10/11/2009
        2  | 13/11/2009
        1  | 20/12/2009
        3  | 21/12/2009
        1  | 30/12/2009

    If I am standing on the last record (id = 1) and I need to see the previous date on which id = 1 appears, it should show me: 1 | 20/12/2009. What query will do it? Thanks in advance.
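
    One way to phrase this is "the latest date for id = 1 that is earlier than its most recent date". A minimal runnable sketch with Python's sqlite3; the table name t is an assumption, and the dates are stored in ISO form so that MAX compares them correctly:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE t (id INTEGER, date TEXT)")
        con.executemany("INSERT INTO t VALUES (?, ?)", [
            (1, "2009-11-10"), (2, "2009-11-13"), (1, "2009-12-20"),
            (3, "2009-12-21"), (1, "2009-12-30"),
        ])

        # Latest date for id = 1 that is strictly before its most recent date.
        row = con.execute("""
            SELECT id, MAX(date) FROM t
            WHERE id = 1 AND date < (SELECT MAX(date) FROM t WHERE id = 1)
        """).fetchone()
        print(row)   # (1, '2009-12-20')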

    Read the article

  • what is the output of this code?

    - by user329820
    Hi, I have written a piece of code and I want to know the output. I need your help because there is nobody here to help me. I think the output is A; is this correct? Thanks.

        declare @v1 varchar(20), @v2 varchar(20)
        select @v1 = 'NULL'
        if @v1 is null and @v2 is null
            select 'A'
        else
            select 'B'
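
    The crux is that @v1 is assigned the four-character string 'NULL', which is not the same thing as the NULL value (only @v2, which is never assigned, is actually NULL), so the IS NULL test fails. A small sketch of the same distinction using Python's sqlite3:

        import sqlite3

        con = sqlite3.connect(":memory:")
        # The string literal 'NULL' is an ordinary value, not SQL NULL.
        print(con.execute("SELECT 'NULL' IS NULL").fetchone())   # (0,) -> false
        # A genuinely missing value is NULL.
        print(con.execute("SELECT NULL IS NULL").fetchone())     # (1,) -> true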

    Read the article

  • What SQL query should I perform to get the result set expected?

    - by texai
    What SQL query should I perform to get the expected result set, given the first element of the chain (2) as input data, or any of them?

        table name: changes
        +----+---------------+---------------+
        | id | new_record_id | old_record_id |
        +----+---------------+---------------+
        |  1 |             4 |             2 |
        |      -- non relevant data --       |
        |  6 |             7 |             4 |
        |      -- non relevant data --       |
        | 11 |            13 |             7 |
        | 12 |            14 |            13 |
        |      -- non relevant data --       |
        | 31 |            20 |            14 |
        +----+---------------+---------------+

        Result set expected:
        +--+
        | 2|
        | 4|
        | 7|
        |13|
        |14|
        |20|
        +--+

    I know I should consider changing my data model, but what if I couldn't? Thank you in advance!
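
    One way to walk this kind of chain without changing the data model is a recursive common table expression (available in most current engines, e.g. SQL Server, PostgreSQL, MySQL 8, SQLite). A minimal sketch with Python's sqlite3, following the chain forward from 2; starting from an arbitrary member would also need a step that walks backwards along new_record_id:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE changes (id INTEGER, new_record_id INTEGER, old_record_id INTEGER)")
        con.executemany("INSERT INTO changes VALUES (?, ?, ?)",
                        [(1, 4, 2), (6, 7, 4), (11, 13, 7), (12, 14, 13), (31, 20, 14)])

        rows = con.execute("""
            WITH RECURSIVE chain(rec) AS (
                SELECT 2                      -- the starting record
                UNION
                SELECT c.new_record_id
                FROM changes AS c JOIN chain ON c.old_record_id = chain.rec
            )
            SELECT rec FROM chain ORDER BY rec
        """).fetchall()
        print([r[0] for r in rows])   # [2, 4, 7, 13, 14, 20]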

    Read the article

  • writing into a file in c.

    - by mekasperasky
    I have a char array b[20] which I want to write into a file. After every iteration the value of b changes, so I would like to write the string to a new line of the file on each iteration. How can I move to a new line in the file and write the character array into it?

    Read the article

  • selecting number of rows from resultset

    - by sap
    Suppose a query "select * from employee" returns 80 rows. I need to display the middle rows, that is, from the 20th row to the 50th row. I know that to display the first 20 rows there is an option like "select top 20 * from employee", but if we need the middle rows, how do we get them, specifically in MS SQL? I am new to SQL queries... Can anybody answer this question?
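
    In SQL Server the usual approaches are to number the rows with ROW_NUMBER() OVER (ORDER BY ...) and filter on that number, or, on SQL Server 2012 and later, to use OFFSET ... FETCH. A runnable sketch of the same idea using Python's sqlite3 with LIMIT/OFFSET; the employee table and its emp_id column are invented for illustration:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT)")
        con.executemany("INSERT INTO employee (name) VALUES (?)",
                        [("emp%d" % i,) for i in range(1, 81)])      # 80 rows

        # Rows 20 through 50: skip the first 19, then take 31 rows.
        rows = con.execute("""
            SELECT * FROM employee
            ORDER BY emp_id
            LIMIT 31 OFFSET 19
        """).fetchall()
        print(rows[0], rows[-1])   # (20, 'emp20') (50, 'emp50')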

    Read the article

  • merging in python

    - by Abruzzo Forte e Gentile
    Hi all, I have the following 4 arrays (grouped in 2 groups) that I would like to merge in ascending order by the keys arrays. I could also use dictionaries as the structure if that is easier. Does Python have a command or something to make this quickly possible? Regards, MN

        # group 1
        [7, 2, 3, 5]      # keys
        [10, 11, 12, 26]  # values
        # group 2
        [0, 4]            # keys
        [20, 33]          # values

        # I would like to have
        [0, 2, 3, 4, 5, 7]        # ordered keys
        [20, 11, 12, 33, 26, 33]  # associated values
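
    A minimal sketch of the dictionary route: zip each keys array with its values array, merge the dictionaries, then read the result back out in key order (note the last value comes out as 10, the value paired with key 7 in group 1):

        keys1, vals1 = [7, 2, 3, 5], [10, 11, 12, 26]
        keys2, vals2 = [0, 4], [20, 33]

        merged = dict(zip(keys1, vals1))
        merged.update(zip(keys2, vals2))     # later groups win on duplicate keys

        ordered_keys = sorted(merged)
        ordered_values = [merged[k] for k in ordered_keys]
        print(ordered_keys)     # [0, 2, 3, 4, 5, 7]
        print(ordered_values)   # [20, 11, 12, 33, 26, 10]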

    Read the article

  • MySQL select/where statement

    - by expo101
    I have a web application linked to a MySQL database with the following fields:

        field 1: trip_id
        field 2: trip_destination
        field 3: trip_description
        field 4: trip_duration

    In the web application I have a listbox based on the following:

        ListBox value = 1: trip duration 1 - 5 days
        ListBox value = 2: trip duration 6 - 10 days
        ListBox value = 3: trip duration 11 - 20 days
        ListBox value = 4: trip duration over 20 days

    How do I put this in the SQL select statement?
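
    One sketch of the idea: map each listbox value to a duration range and feed it into a parameterized BETWEEN, or a lower bound only for the open-ended choice. Shown with Python's sqlite3 for a runnable example; the table name trips and the sample rows are assumptions, since the question only names the columns:

        import sqlite3

        # Listbox value -> inclusive (min, max) duration in days; None = no upper bound.
        DURATION_RANGES = {1: (1, 5), 2: (6, 10), 3: (11, 20), 4: (21, None)}

        def trips_for_choice(con, choice):
            lo, hi = DURATION_RANGES[choice]
            if hi is None:
                return con.execute(
                    "SELECT * FROM trips WHERE trip_duration >= ?", (lo,)).fetchall()
            return con.execute(
                "SELECT * FROM trips WHERE trip_duration BETWEEN ? AND ?", (lo, hi)).fetchall()

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE trips (trip_id INTEGER, trip_destination TEXT, "
                    "trip_description TEXT, trip_duration INTEGER)")
        con.executemany("INSERT INTO trips VALUES (?, ?, ?, ?)",
                        [(1, "Rome", "city break", 4), (2, "Peru", "trek", 14), (3, "Asia", "grand tour", 30)])
        print(trips_for_choice(con, 3))   # [(2, 'Peru', 'trek', 14)]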

    Read the article

  • MySQL select column length in php

    - by Patrick
    Hello! How do I get the actual maximum length of a specified column in PHP? For instance, for this table: id - int(11), name - string(20). In PHP I want to select the maximum number of characters that the field can have, something like SELECT length(name) FROM table1, and it should then return 20 (since that is the maximum number of characters for that field).
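
    In MySQL the declared length is metadata, so it is read from information_schema.COLUMNS (the CHARACTER_MAXIMUM_LENGTH column) rather than from the data itself. A sketch of that query driven from Python with mysql-connector-python; the connection details are placeholders, and the same SELECT can just as well be issued from PHP's mysqli or PDO:

        import mysql.connector   # pip install mysql-connector-python

        con = mysql.connector.connect(host="localhost", user="app",
                                      password="secret", database="mydb")
        cur = con.cursor()
        cur.execute("""
            SELECT CHARACTER_MAXIMUM_LENGTH
            FROM information_schema.COLUMNS
            WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s AND COLUMN_NAME = %s
        """, ("mydb", "table1", "name"))
        print(cur.fetchone()[0])   # 20 for a VARCHAR(20) column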

    Read the article

  • Side-by-side comparison of data by month in SQL

    - by ScottR
    I have a table similar to the following:

        Year | Product | Value
        2006 | A       | 10
        2006 | B       | 20
        2006 | C       | 30
        2007 | A       | 40
        2007 | B       | 50
        2007 | C       | 60

    I would like a query that returns the following comparison:

        Product | 2006 Value | 2007 Value
        A       | 10         | 40
        B       | 20         | 50
        C       | 30         | 60

    What are the options for doing this? Can it be done without joins? I'm working with DB2, but answers for all SQL dialects would be helpful.
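
    One join-free option is conditional aggregation: a CASE expression inside an aggregate, grouped by product. This works in DB2 as well as most other engines; here is a runnable sketch with Python's sqlite3, with the table name sales assumed:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE sales (Year INTEGER, Product TEXT, Value INTEGER)")
        con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                        [(2006, "A", 10), (2006, "B", 20), (2006, "C", 30),
                         (2007, "A", 40), (2007, "B", 50), (2007, "C", 60)])

        rows = con.execute("""
            SELECT Product,
                   MAX(CASE WHEN Year = 2006 THEN Value END) AS "2006 Value",
                   MAX(CASE WHEN Year = 2007 THEN Value END) AS "2007 Value"
            FROM sales
            GROUP BY Product
            ORDER BY Product
        """).fetchall()
        print(rows)   # [('A', 10, 40), ('B', 20, 50), ('C', 30, 60)]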

    Read the article

  • 'umfpack.h' not found, but it's in /opt/local/include/

    - by user2924321
    I'm trying to compile a program called hiQlab on OS X 10.8:

        g++ -g -O2 -I`echo /Users/.../Documents/hiQlab/hiqlab-2006-07-20/tools/`/lua/include -I`echo /Users/.../Documents/hiQlab/hiqlab-2006-07-20/tools/`/tolua++/include -c cscmatrix.cc
        cscmatrix.cc:13:12: fatal error: 'umfpack.h' file not found
        #include "umfpack.h"

    But I just installed SuiteSparse through MacPorts, which includes UMFPACK, and umfpack.h is in fact present in the default directory /opt/local/include/. Do I need to add the path or something? Thanks!

    Read the article

  • SQL more elegant combination of boolean checks possible?

    - by Matze
    Call me pedantic, but is there a more elegant way to combine all those checks?

        SELECT * FROM [TABLE1]
        WHERE [path] = 'RECEIVE'
          AND [src_ip] NOT LIKE '10.48.20.10'
          AND [src_ip] NOT LIKE '0.%'
          AND [src_ip] NOT LIKE '127.%'
        ORDER BY [date],[time] DESC;

    Into something like this:

        SELECT * FROM [TABLE1]
        WHERE [path] = 'RECEIVE'
          AND [src_ip] NOT LIKE IN ('10.48.20.10','0.%','127.%', .... )
        ORDER BY [date],[time] DESC;
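
    Standard SQL has no LIKE IN, but the chain of NOT LIKEs can at least be folded into a single negated OR list. A runnable sketch of that rewrite with Python's sqlite3; the sample rows are invented for illustration:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE table1 (path TEXT, src_ip TEXT, date TEXT, time TEXT)")
        con.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?)", [
            ("RECEIVE", "10.48.20.10", "2012-01-01", "10:00"),
            ("RECEIVE", "127.0.0.1",   "2012-01-01", "10:01"),
            ("RECEIVE", "192.168.1.5", "2012-01-01", "10:02"),
        ])

        # One negated OR list replaces the chain of NOT LIKEs.
        rows = con.execute("""
            SELECT * FROM table1
            WHERE path = 'RECEIVE'
              AND NOT (src_ip = '10.48.20.10' OR src_ip LIKE '0.%' OR src_ip LIKE '127.%')
            ORDER BY date, time DESC
        """).fetchall()
        print(rows)   # only the 192.168.1.5 row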

    Read the article

  • sql count() query for tables

    - by air
    I have two tables:

        table1 fields: fid, fname, fage
        a, abc, 20
        b, bcv, 21
        c, cyx, 19

        table2 fields: rcno, fid, status
        1, a, ok
        2, c, ok
        3, a, ok
        4, b, ok
        5, a, ok

    I want to display records like this: fid from table1, count(rcno) from table2, and fage from table1:

        fid, count(rcno), fage
        a, 3, 20
        b, 2, 21
        c, 1, 19

    I have tried many SQL queries but got errors. Thanks
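
    One way to get that shape is a LEFT JOIN from table1 to table2 with COUNT and GROUP BY; a runnable sketch with Python's sqlite3 using the rows from the question:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE table1 (fid TEXT, fname TEXT, fage INTEGER)")
        con.execute("CREATE TABLE table2 (rcno INTEGER, fid TEXT, status TEXT)")
        con.executemany("INSERT INTO table1 VALUES (?, ?, ?)",
                        [("a", "abc", 20), ("b", "bcv", 21), ("c", "cyx", 19)])
        con.executemany("INSERT INTO table2 VALUES (?, ?, ?)",
                        [(1, "a", "ok"), (2, "c", "ok"), (3, "a", "ok"), (4, "b", "ok"), (5, "a", "ok")])

        rows = con.execute("""
            SELECT t1.fid, COUNT(t2.rcno) AS cnt, t1.fage
            FROM table1 AS t1
            LEFT JOIN table2 AS t2 ON t2.fid = t1.fid
            GROUP BY t1.fid, t1.fage
            ORDER BY t1.fid
        """).fetchall()
        print(rows)   # [('a', 3, 20), ('b', 2, 21), ('c', 1, 19)]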

    Read the article

  • specific ports in ftp(client)

    - by user158182
    I am using an FTP connection to send data between server and client. The server's command port is 21 and its data port is 20. I want the client port to be specified by the user; is that possible?

        FTP:
        command: client > specific port (user defined) --> server : 21
        data:    client > specific port (user defined) --> server : 20
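
    On the client side, binding the control connection to a user-chosen local port is possible; for example, Python's ftplib accepts a source_address (Python 3.3+). The host, credentials and port below are placeholders; note that the data connection is negotiated separately (PORT/PASV), so pinning its client-side port is a separate concern and a fixed port can clash with one that is already bound:

        from ftplib import FTP

        ftp = FTP()
        # Bind the client end of the control connection to local port 40001.
        ftp.connect(host="ftp.example.com", port=21, timeout=30,
                    source_address=("", 40001))
        ftp.login("user", "password")
        print(ftp.getwelcome())
        ftp.quit()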

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once it is downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

    I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform, but because it is simple, easy to learn and deploy, and built on a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And since Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, while "azure" does support creating the storage client from a storage connection string. Microsoft is working on making them easier to use and on adding more features and functionality.

    PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using: Zend Framework 1.11.5 (complete MVC) PHP 5.3.6 Apache 2.2.19 CentOS 5.6 i686 virtuozzo on vps cPanel WHM 11.30.1 (build 4) Mysql 5.1.56-log Mysqli API 5.1.56 The issue started here http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, php is giving me random segmentation-faults. [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11 [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php About extensions. When i compile php with "--enable-debug" flag, i have to disable this line: zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so" Otherwise, the server doesn't accept requests and i get a "The connection with the server was reset". It is possible that i have to disable eaccelerator too because of the same reason. I still don't get why apache gets running it some times and some others not: extension="eaccelerator.so" Anyway, after i get httpd running, seg-faults can occurr randomly. If i don't compile php with "--enable-debug" flag, i can get DETERMINISTICALLY a php crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $row = $db->fetchRow("SHOW CREATE TABLE 222AFI"); } } ?> BUT if i compile php with "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash. I have to be doing many paralell requests for a few seconds to get a crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $tableList = $db->listTables(); foreach ($tableList as $tableName){ $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName)); file_put_contents( DB_DEFINITIONS_PATH . '/' . $tableName . '.sql', $row['Create Table'] . ';' ); } } } ?> Please notice this is the same script, but creating DDL for all tables in database rather than for one. It seems that if php is heavy loaded (with extensions and me doing many paralell requests) it's when i get php to crash. About starting httpd with "-X": i've tried. The thing is, it is already hard to make php crash with --enable-debug. With "-X" option (which only enables one child process) i can't do parallel requests. So i haven't been able to create to proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php My concrete question is, what do i do to get a coredump? root@GWT4 [~]# httpd -V Server version: Apache/2.2.19 (Unix) Server built: Jul 20 2011 19:18:58 Cpanel::Easy::Apache v3.4.2 rev9999 Server's Module Magic Number: 20051115:28 Server loaded: APR 1.4.5, APR-Util 1.3.12 Compiled using: APR 1.4.5, APR-Util 1.3.12 Architecture: 32-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... 
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr/local/apache" -D SUEXEC_BIN="/usr/local/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Read the article

  • OpenBSD configuration: Client unable to mount via NFS using Berkeley Automounter (amd)

    - by Rilindo
    What I am trying to do is to have my openBSD client (OpenBSD 4.9) auto mount a Linux NFS file system (Scientific Linux 6.1). So far, I am not sure if it is configured correctly. To get things out of the way, I am able to mount nfs manually: # mount_nfs -T -3 192.168.15.100:/exports /mnt # ls -la /mnt total 52 drwxr-xr-x 7 root wheel 4096 Oct 4 22:42 . drwxr-xr-x 16 root wheel 512 Nov 26 16:33 .. drwxrwxr-x 5 _sndio _sndio 4096 Oct 31 21:58 centos drwxr-xr-x 15 root wheel 4096 Nov 6 09:17 home drwxr-xr-x 5 root wheel 4096 Oct 31 21:27 sl drwxr-xr-x 3 root wheel 4096 Nov 19 16:02 sles drwxr-xr-x 17 503 503 4096 Nov 10 17:37 users # So connectivity is not an issue, as far as I can tell. As per man page, the following is configured in /etc/amd/auto.home: /defaults type:=nfs;sublink:=${key};opts:=rw,soft,intr,vers=3,proto=tcp * rhost:=192.168.15.100;rfs:=/exports In turn, /etc/amd/master is configured as such: # cat /etc/amd/master /exports amd.home Upon reboot, I can it see mount, but curiously enough, instead of the hostname: amd:24490 0 0 0 100% /exports From what I understand, amd acts a little different from FreeBSD. Still, I tried to see if I it can automount. Nope: ksh: cd: /exports/users - Resource temporarily unavailable # cd /exports/192.168.15.100/host/users ksh: cd: /exports/192.168.15.100/host/users - Resource temporarily unavailable A search in google doesn't help too much - it seems that automounting NFS with OpenBSD is not something that is usually done. Other than this, information is fairly sparse. I can, of course, always mount is permanently, but I tend to be a bit anal on convention, so no for now. :) Some direction would be appreciation. (And oh, in case you are a wondering, I tried FreeBSD way of using amd and that hasn't worked out - although I wouldn't mind an explanation of the difference between how FreeBSD implements and how OpenBSD implements it) UPDATE: After re-writing the map file several times, I got as far as actually communicating with the NFS server with this configuration: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,tcp,resvport However, for some reason, it seems that amd will only default to NFS version 2 over udp: # tcpdump dst kerberos tcpdump: listening on pcn0, link-type EN10MB tcpdump: WARNING: compensating for unaligned libpcap packets 20:38:28.558385 openbsd.monzell.com.856 > kerberos.monzell.com.sunrpc: udp 100 20:38:28.559154 openbsd.monzell.com.856 > kerberos.monzell.com.892: udp 96 20:38:30.592761 openbsd.monzell.com.856 > kerberos.monzell.com.nfsd: xid 0x22000000 (NFSv2) 40 null 20:38:33.558107 arp reply openbsd.monzell.com is-at 52:54:00:52:8f:66 I tried various options of forcing it to try to mount as nfsv3 such as: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport or: /defaults type:=nfs;rhost:=kerberos.monzell.com;rfs:=/exports;\ sublink:=${key};opts:=rw,nodev,nosuid,soft,intr,vers=-3,proto=tcp,resvport * ${host}==${rhost};type:=nfs;fs:=${rfs};opts:=rw,nodev,nosuid,soft,intr,vers=3,proto=tcp,resvport Nothing yet still. Curious enough, OpenBSD mounts defaults to version 3, so I am not sure why it would start with version in amd. What would be the correct options to pass?

    Read the article
