Search Results

Search found 5109 results on 205 pages for 'specify'.


  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage

Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Sites (a.k.a. WAWS) as well as on a Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, so in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node.js application.

We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as the module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, the service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here.

Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
  app.use(express.bodyParser());
});

app.get("/", function (req, res) {
  sql.open(connectionString, function (err, conn) {
    if (err) {
      console.log(err);
      res.send(500, "Cannot open connection.");
    }
    else {
      conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot retrieve records.");
        }
        else {
          res.json(results);
        }
      });
    }
  });
});

app.get("/text/:key/:culture", function (req, res) {
  sql.open(connectionString, function (err, conn) {
    if (err) {
      console.log(err);
      res.send(500, "Cannot open connection.");
    }
    else {
      var key = req.params.key;
      var culture = req.params.culture;
      var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
      conn.queryRaw(command, function (err, results) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot retrieve records.");
        }
        else {
          res.json(results);
        }
      });
    }
  });
});

app.get("/sproc/:key/:culture", function (req, res) {
  sql.open(connectionString, function (err, conn) {
    if (err) {
      console.log(err);
      res.send(500, "Cannot open connection.");
    }
    else {
      var key = req.params.key;
      var culture = req.params.culture;
      var command = "EXEC GetItem '" + key + "', '" + culture + "'";
      conn.queryRaw(command, function (err, results) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot retrieve records.");
        }
        else {
          res.json(results);
        }
      });
    }
  });
});

app.post("/new", function (req, res) {
  var key = req.body.key;
  var culture = req.body.culture;
  var val = req.body.val;

  sql.open(connectionString, function (err, conn) {
    if (err) {
      console.log(err);
      res.send(500, "Cannot open connection.");
    }
    else {
      var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
      conn.queryRaw(command, function (err, results) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot retrieve records.");
        }
        else {
          res.send(200, "Inserted Successful");
        }
      });
    }
  });
});

app.listen(port);

Now let's create a new function that copies the records from WASD to the table service:

1. Delete the table named "resource".
2. Create a new table named "resource". (These two steps ensure that we have an empty table.)
3. Load all records from the "resource" table in WASD.
4. For each record loaded from WASD, insert it into the table, one by one.
5. Prompt the user when finished.

In order to use the table service we need the storage account name and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module, and I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" using "createTableService", specifying my storage account name and key.

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
var client = azure.createTableService(storageAccountName, storageAccountKey);

Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

app.get("/was/init", function (req, res) {
  // load all records from windows azure sql database
  sql.open(connectionString, function (err, conn) {
    if (err) {
      console.log(err);
      res.send(500, "Cannot open connection.");
    }
    else {
      conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot retrieve records.");
        }
        else {
          if (results.rows.length > 0) {
            // begin to transform the records into table service
          }
        }
      });
    }
  });
});

Once we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and then creating the table through the table client I had just created previously.

app.get("/was/init", function (req, res) {
  // load all records from windows azure sql database
  sql.open(connectionString, function (err, conn) {
    if (err) {
      console.log(err);
      res.send(500, "Cannot open connection.");
    }
    else {
      conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot retrieve records.");
        }
        else {
          if (results.rows.length > 0) {
            // begin to transform the records into table service
            // recreate the table named 'resource'
            client.deleteTable(tableName, function (error) {
              client.createTableIfNotExists(tableName, function (error) {
                if (error) {
                  error["target"] = "createTableIfNotExists";
                  res.send(500, error);
                }
                else {
                  // transform the records
                }
              });
            });
          }
        }
      });
    }
  });
});

As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in the POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback; if we placed the table creation code after the deletion code, the two operations would run in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
app.get("/was/init", function (req, res) {
  // load all records from windows azure sql database
  sql.open(connectionString, function (err, conn) {
    if (err) {
      console.log(err);
      res.send(500, "Cannot open connection.");
    }
    else {
      conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot retrieve records.");
        }
        else {
          if (results.rows.length > 0) {
            // begin to transform the records into table service
            // recreate the table named 'resource'
            client.deleteTable(tableName, function (error) {
              client.createTableIfNotExists(tableName, function (error) {
                if (error) {
                  error["target"] = "createTableIfNotExists";
                  res.send(500, error);
                }
                else {
                  // transform the records
                  for (var i = 0; i < results.rows.length; i++) {
                    var entity = {
                      "PartitionKey": results.rows[i][1],
                      "RowKey": results.rows[i][0],
                      "Value": results.rows[i][2]
                    };
                    client.insertEntity(tableName, entity, function (error) {
                      if (error) {
                        error["target"] = "insertEntity";
                        res.send(500, error);
                      }
                      else {
                        console.log("entity inserted");
                      }
                    });
                  }
                  // send the response
                  console.log("all done");
                  res.send(200, "All done!");
                }
              });
            });
          }
        }
      });
    }
  });
});

Now we could publish it to the cloud and have a try, but normally we'd better test it on the local emulator first. In the Node.js SDK there are three built-in properties that provide the account name, key and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to point at my local database. The code changes as below.

// windows azure sql database
//var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;";
// sql server
var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
// windows azure storage
//var client = azure.createTableService(storageAccountName, storageAccountKey);
// local storage emulator
var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

Now let's run the application and navigate to "localhost:12345/was/init", as I hosted it on port 12345. We can see that it transformed the data from my local database into the local table service. Everything looks fine. But there is a bug in my code. If we have a look at the Node.js command window we will find that the response was sent before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
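To make the ordering problem concrete, here is a minimal sketch (not taken from the original post) of the pattern that causes the bug; the "rows" array and the "toEntity" helper are hypothetical stand-ins for the query results and the entity construction shown above:

// Each insertEntity call only queues asynchronous work and returns immediately,
// so the loop finishes and res.send runs before any insert callback has fired.
rows.forEach(function (row) {
  client.insertEntity(tableName, toEntity(row), function (error) {
    // this runs later, whenever the table service responds
    console.log("entity inserted");
  });
});
// this runs first, right after the loop has merely *started* the inserts
res.send(200, "All done!");

The fix below coordinates those callbacks so that the response is only sent once the last insert has completed (or an error has occurred).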
The correct logic should be: when all entities have been copied to the table service with no error, I send the response to the browser; otherwise I send the error message to the browser. To do so I need to import another module named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform for each item in the array. The third argument is invoked when all items have been processed or when an error occurs; here we can send our response to the browser.

app.get("/was/init", function (req, res) {
  // load all records from windows azure sql database
  sql.open(connectionString, function (err, conn) {
    if (err) {
      console.log(err);
      res.send(500, "Cannot open connection.");
    }
    else {
      conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot retrieve records.");
        }
        else {
          if (results.rows.length > 0) {
            // begin to transform the records into table service
            // recreate the table named 'resource'
            client.deleteTable(tableName, function (error) {
              client.createTableIfNotExists(tableName, function (error) {
                if (error) {
                  error["target"] = "createTableIfNotExists";
                  res.send(500, error);
                }
                else {
                  async.forEach(results.rows,
                    // transform the records
                    function (row, callback) {
                      var entity = {
                        "PartitionKey": row[1],
                        "RowKey": row[0],
                        "Value": row[2]
                      };
                      client.insertEntity(tableName, entity, function (error) {
                        if (error) {
                          callback(error);
                        }
                        else {
                          console.log("entity inserted.");
                          callback(null);
                        }
                      });
                    },
                    // send response
                    function (error) {
                      if (error) {
                        error["target"] = "insertEntity";
                        res.send(500, error);
                      }
                      else {
                        console.log("all done");
                        res.send(200, "All done!");
                      }
                    }
                  );
                }
              });
            });
          }
        }
      });
    }
  });
});

Run it locally and now we can see that the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria, for example the code here. In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response.

app.get("/was/:key/:culture", function (req, res) {
  var key = req.params.key;
  var culture = req.params.culture;
  client.queryEntity(tableName, culture, key, function (error, entity) {
    if (error) {
      res.send(500, error);
    }
    else {
      res.json(entity);
    }
  });
});

And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and the storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files and the runtime will retrieve the proper values for us. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for both the cloud and local environments. And we can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment", which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve their values and initialize the table client.

var connectionString = "";
var storageAccountName = "";
var storageAccountKey = "";
var tableName = "";
var client;

azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
  if (error) {
    console.log("ERROR: getConfigurationSettings");
    console.log(JSON.stringify(error));
  }
  else {
    console.log(JSON.stringify(settings));
    connectionString = settings["SqlConnectionString"];
    storageAccountName = settings["StorageAccountName"];
    storageAccountKey = settings["StorageAccountKey"];
    tableName = settings["TableName"];

    console.log("connectionString = %s", connectionString);
    console.log("storageAccountName = %s", storageAccountName);
    console.log("storageAccountKey = %s", storageAccountKey);
    console.log("tableName = %s", tableName);

    client = azure.createTableService(storageAccountName, storageAccountKey);
  }
});

In this way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
  if (error) {
    console.log("ERROR: getCurrentRoleInstance");
    console.log(JSON.stringify(error));
  }
  else {
    console.log(JSON.stringify(instance));
    if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
      var endpoint = instance["endpoints"]["nodejs"];
      app.listen(endpoint["port"]);
    }
    else {
      app.listen(8080);
    }
  }
});

But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. In this case, since the entry point was the worker role class and Node.js was started inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project, add a new element named Runtime, and then add an element named EntryPoint which specifies the Node.js command line. That way the Node.js application will have the connection to the service runtime and will be able to read the configuration. Starting Node.js in the local emulator, we can see it retrieved the connection string and storage account for the local environment.
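For reference, a minimal sketch of what that CSDEF change might look like is shown below. The role name and the ProgramEntryPoint element with its attributes are my assumptions based on the standard ServiceDefinition.csdef schema; they are not taken from the original post, so adjust them to your own project.

<WorkerRole name="WorkerRole1">
  <Runtime>
    <EntryPoint>
      <!-- assumed syntax: point the role entry point at node.exe so the named pipe
           to the service runtime is established for the Node.js process -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
  <!-- endpoints, configuration settings, etc. stay as before -->
</WorkerRole>

With an entry point like this, the RoleEnvironment calls above can reach the runtime directly from the Node.js process.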
And if we publish our application to Azure, it works against WASD and the storage service through the cloud configuration.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information, and that in order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Sites and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's simple, easy to learn and deploy, and uses a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And since Node.js is very good at scaling out, it's all the more useful on a cloud computing platform.

Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement. "node-sqlserver" doesn't support SQL parameters yet. "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • firebug saying not a function

    - by Aaron
    <script type = "text/javascript"> var First_Array = new Array(); function reset_Form2() {document.extraInfo.reset();} function showList1() {document.getElementById("favSports").style.visibility="visible";} function showList2() {document.getElementById("favSubjects").style.visibility="visible";} function hideProceed() {document.getElementById('proceed').style.visibility='hidden';} function proceedToSecond () { document.getElementById("div1").style.visibility="hidden"; document.getElementById("div2").style.visibility="visible"; document.getElementById("favSports").style.visibility="hidden"; document.getElementById("favSubjects").style.visibility="hidden"; } function backToFirst () { document.getElementById("div1").style.visibility="visible"; document.getElementById("div2").style.visibility="hidden"; document.getElementById("favSports").style.visibility="visible"; document.getElementById("favSubjects").style.visibility="visible"; } function reset_Form(){ document.personalInfo.reset(); document.getElementById("favSports").style.visibility="hidden"; document.getElementById("favSubjects").style.visibility="hidden"; } function isValidName(firstStr) { var firstPat = /^([a-zA-Z]+)$/; var matchArray = firstStr.match(firstPat); if (matchArray == null) { alert("That's a weird name, try again"); return false; } return true; } function isValidZip(zipStr) { var zipPat =/[0-9]{5}/; var matchArray = zipStr.match(zipPat); if(matchArray == null) { alert("Zip is not in valid format"); return false; } return true; } function isValidApt(aptStr) { var aptPat = /[\d]/; var matchArray = aptStr.match(aptPat); if(matchArray == null) { if (aptStr=="") { return true; } alert("Apt is not proper format"); return false; } return true; } function isValidDate(dateStr) { //requires 4 digit year: var datePat = /^(\d{1,2})(\/|-)(\d{1,2})\2(\d{4})$/; var matchArray = dateStr.match(datePat); if (matchArray == null) { alert("Date is not in a valid format."); return false; } return true; } function checkRadioFirst() { var rb = document.personalInfo.salutation; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { return true; } } alert("Please specify a salutation"); return false; } function checkCheckFirst() { var rb = document.personalInfo.operatingSystems; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { return true; } } alert("Please specify an operating system") ; return false; } function checkSelectFirst() { if ( document.personalInfo.sports.selectedIndex == -1) { alert ( "Please select a sport" ); return false; } return true; } function checkRadioSecond() { var rb = document.extraInfo.referral; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { return true; } } alert("Please select form of referral"); return false; } function checkCheckSecond() { var rb = document.extraInfo.officeSupplies; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { return true; } } alert("Please select an office supply option"); return false; } function checkSelectSecond() { if ( document.extraInfo.colorPick.selectedIndex == 0 ) { alert ( "Please select a favorite color" ); return false; } return true; } function check_Form(){ var retvalue = isValidDate(document.personalInfo.date.value); if(retvalue) { retvalue = isValidZip(document.personalInfo.zipCode.value); if(retvalue) { retvalue = isValidName(document.personalInfo.nameFirst.value); if(retvalue) { retvalue = checkRadioFirst(); if(retvalue) { retvalue = checkCheckFirst(); if(retvalue) { retvalue = checkSelectFirst(); if(retvalue) { retvalue = isValidApt(document.personalInfo.aptNum.value); 
if(retvalue){ document.getElementById('proceed').style.visibility='visible'; var rb = document.personalInfo.salutation; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { var salForm = rb[i].value; } } var SportsOptions = ""; for(var j=0;j<document.personalInfo.sports.length;j++){ if ( document.personalInfo.sports.options[j].selected){ SportsOptions += document.personalInfo.sports.options[j].value + " "; } } var SubjectsOptions= ""; for(var k=0;k<document.personalInfo.subjects.length;k++){ if ( document.personalInfo.subjects.options[k].selected){ SubjectsOptions += document.personalInfo.subjects.options[k].value + " "; } } var osBox = document.personalInfo.operatingSystems; var OSOptions = ""; for(var y=0;y<osBox.length;y++) { if(osBox[y].checked) { OSOptions += osBox[y].value + " "; } } First_Array[0] = salForm; First_Array[1] = document.personalInfo.nameFirst.value; First_Array[2] = document.personalInfo.nameMiddle.value; First_Array[3] = document.personalInfo.nameLast.value; First_Array[4] = document.personalInfo.address.value; First_Array[5] = document.personalInfo.aptNum.value; First_Array[6] = document.personalInfo.city.value; for(var l=0; l<document.personalInfo.state.length; l++) { if (document.personalInfo.state.options[l].selected) { First_Array[7] = document.personalInfo.state[l].value; } } First_Array[8] = document.personalInfo.zipCode.value; First_Array[9] = document.personalInfo.date.value; First_Array[10] = document.personalInfo.phone.value; First_Array[11] = SportsOptions; First_Array[12] = SubjectsOptions; First_Array[13] = OSOptions; alert("Everything looks good."); document.getElementById('validityButton').style.visibility='hidden'; } } } } } } } } /*function formAction2() { var retvalue; retvalue = checkRadioSecond(); if(!retvalue) { return retvalue; } retvalue = checkCheckSecond(); if(!retvalue) { return retvalue; } return checkSelectSecond() ; } */ </script> This is just a sample of the code, there are alot more functions, but I thought the error might be related to surrounding code. I have absolutely no idea why, as I know all the surrounding functions execute, and First_Array is populated. However when I click the Proceed to Second button, the onclick attribute does not execute because Firebug says proceedToSecond is not a function button code: <input type="button" id="proceed" name="proceedToSecond" onclick="proceedToSecond();" value="Proceed to second form">


  • .NET Regex: Howto extract IPv6 address parts

    - by Quandary
    Question: How does the .NET regex string to extract IPv6 addresses look like ? I can get it to extract a simple IPv6 address like "1050:0:0:0:5:600:300c:326b" but not the colon format ("ff06::c3"); My problem is, it should extract a 0 for every omitted value between the :: How do I do that? Below my code + description. Specify IPv6 addresses by omitting leading zeros. For example, IPv6 address 1050:0000:0000:0000:0005:0600:300c:326b may be written as 1050:0:0:0:5:600:300c:326b. Double colon Specify IPv6 addresses by using double colons (::) in place of a series of zeros. For example, IPv6 address ff06:0:0:0:0:0:0:c3 may be written as ff06::c3. Double colons may be used only once in an IP address. strInputString = "ff06::c3"; strInputString = "1050:0000:0000:0000:0005:0600:300c:326b"; string strPattern = "([A-Fa-f0-9]{1,4}:){7}([A-Fa-f0-9]{1,4})"; //strPattern = @"\A(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\z"; //strPattern = @"(\A([0-9a-f]{1,4}:){1,1}(:[0-9a-f]{1,4}){1,6}\Z)|(\A([0-9a-f]{1,4}:){1,2}(:[0-9a-f]{1,4}){1,5}\Z)|(\A([0-9a-f]{1,4}:){1,3}(:[0-9a-f]{1,4}){1,4}\Z)|(\A([0-9a-f]{1,4}:){1,4}(:[0-9a-f]{1,4}){1,3}\Z)|(\A([0-9a-f]{1,4}:){1,5}(:[0-9a-f]{1,4}){1,2}\Z)|(\A([0-9a-f]{1,4}:){1,6}(:[0-9a-f]{1,4}){1,1}\Z)|(\A(([0-9a-f]{1,4}:){1,7}|:):\Z)|(\A:(:[0-9a-f]{1,4}){1,7}\Z)|(\A((([0-9a-f]{1,4}:){6})(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3})\Z)|(\A(([0-9a-f]{1,4}:){5}[0-9a-f]{1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3})\Z)|(\A([0-9a-f]{1,4}:){5}:[0-9a-f]{1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A([0-9a-f]{1,4}:){1,1}(:[0-9a-f]{1,4}){1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A([0-9a-f]{1,4}:){1,2}(:[0-9a-f]{1,4}){1,3}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A([0-9a-f]{1,4}:){1,3}(:[0-9a-f]{1,4}){1,2}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A([0-9a-f]{1,4}:){1,4}(:[0-9a-f]{1,4}){1,1}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A(([0-9a-f]{1,4}:){1,5}|:):(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)|(\A:(:[0-9a-f]{1,4}){1,5}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z) "; //strPattern = @"/^\s*((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?\s*$/"; //strPattern = @"(:?[0-9a-fA-F]{1,4}:){7}([0-9a-fA-F]{1,4})\z"; //strPattern = 
@"\A((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)::((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)\z"; //strPattern = @"\A((?:[0-9A-Fa-f]{1,4}(?::[0-9A-Fa-f]{1,4})*)?)::((?:[0-9A-Fa-f]{1,4}:)*)(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\z"; //strPattern = @"/^(?:(?:(?:(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){7})|(?:(?!(?:.*[a-f0-9](?::|$)){7,})(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){0,5})?::(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){0,5})?)))|(?:(?:(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){5}:)|(?:(?!(?:.*[a-f0-9]:){5,})(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){0,3})?::(?:[a-f0-9]{1,4}(?::[a-f0-9]{1,4}){0,3}:)?))?(?:(?:25[0-5])|(?:2[0-4][0-9])|(?:1[0-9]{2})|(?:[1-9]?[0-9]))(?:\.(?:(?:25[0-5])|(?:2[0-4][0-9])|(?:1[0-9]{2})|(?:[1-9]?[0-9]))){3}))$/i"; System.Text.RegularExpressions.Regex reValidationRule = new System.Text.RegularExpressions.Regex("^" + strPattern + "$"); if (reValidationRule.Match(strInputString).Success) // If matching pattern { System.Text.RegularExpressions.Match maResult = System.Text.RegularExpressions.Regex.Match(strInputString, strPattern); // Console.WriteLine(maResult.Groups.Count) string[] astrReturnValues = new string[4]; System.Text.RegularExpressions.GroupCollection gc = maResult.Groups; System.Text.RegularExpressions.CaptureCollection cc; int counter; //System.Web.Script.Serialization.JavaScriptSerializer jssJSONserializer = new System.Web.Script.Serialization.JavaScriptSerializer(); //Console.WriteLine(jssJSONserializer.Serialize()); // Loop through each group. for (int i = 0; i < gc.Count; i++) { Console.WriteLine("Group: {0}", i); cc = gc[i].Captures; counter = cc.Count; // Print number of captures in this group. Console.WriteLine("Captures count = " + counter.ToString()); // Loop through each capture in group. for (int ii = 0; ii < counter; ii++) { Console.WriteLine("Capture: {0}", ii); // Print capture and position. Console.WriteLine(cc[ii] + " Starts at character " + cc[ii].Index); } }


  • Problems installing Memcache (PECL extension)

    - by Petrus
    I have installed memcached fine, and now I will need to install PECL extension memcache. Im running RedHat x86_64 es5. The installation gives me this: downloading memcache-2.2.6.tgz ... Starting to download memcache-2.2.6.tgz (35,957 bytes) ..........done: 35,957 bytes 11 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 Enable memcache session handler support? [yes] : Notice: Use of undefined constant STDIN - assumed 'STDIN' in PEAR/Frontend/CLI.php on line 304 Warning: fgets() expects parameter 1 to be resource, string given in PEAR/Frontend/CLI.php on line 304 Warning: fgets() expects parameter 1 to be resource, string given in /usr/lib/php/PEAR/Frontend/CLI.php on line 304 building in /root/tmp/pear-build-root/memcache-2.2.6 running: /root/tmp/pear/memcache/configure --enable-memcache-session=yes checking for egrep... grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... invalid configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking whether to enable memcache support... yes, shared checking whether to enable memcache session handler support... yes checking for the location of ZLIB... no checking for the location of zlib... /usr checking for session includes... /usr/include/php checking for memcache session support... enabled checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... yes checking how to recognize dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 
98304 checking command to parse /usr/bin/nm -B output from cc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC checking if cc PIC flag -fPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... yes checking whether the cc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache.c -o memcache.lo mkdir .libs cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache.c -fPIC -DPIC -o .libs/memcache.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_queue.c -o memcache_queue.lo cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_queue.c -fPIC -DPIC -o .libs/memcache_queue.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_standard_hash.c -o memcache_standard_hash.lo cc -I/usr/include/php -I. 
-I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_standard_hash.c -fPIC -DPIC -o .libs/memcache_standard_hash.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_consistent_hash.c -o memcache_consistent_hash.lo cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_consistent_hash.c -fPIC -DPIC -o .libs/memcache_consistent_hash.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_session.c -o memcache_session.lo cc -I/usr/include/php -I. 
-I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_session.c -fPIC -DPIC -o .libs/memcache_session.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=link cc -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -o memcache.la -export-dynamic -avoid-version -prefer-pic -module -rpath /root/tmp/pear-build-root/memcache-2.2.6/modules memcache.lo memcache_queue.lo memcache_standard_hash.lo memcache_consistent_hash.lo memcache_session.lo cc -shared .libs/memcache.o .libs/memcache_queue.o .libs/memcache_standard_hash.o .libs/memcache_consistent_hash.o .libs/memcache_session.o -Wl,-soname -Wl,memcache.so -o .libs/memcache.so creating memcache.la (cd .libs && rm -f memcache.la && ln -s ../memcache.la memcache.la) /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=install cp ./memcache.la /root/tmp/pear-build-root/memcache-2.2.6/modules cp ./.libs/memcache.so /root/tmp/pear-build-root/memcache-2.2.6/modules/memcache.so cp ./.libs/memcache.lai /root/tmp/pear-build-root/memcache-2.2.6/modules/memcache.la PATH="$PATH:/sbin" ldconfig -n /root/tmp/pear-build-root/memcache-2.2.6/modules ---------------------------------------------------------------------- Libraries have been installed in: /root/tmp/pear-build-root/memcache-2.2.6/modules If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the `-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the `LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the `LD_RUN_PATH' environment variable during linking - use the `-Wl,--rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to `/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- Build complete. Don't forget to run 'make test'. 
running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-memcache-2.2.6" install Installing shared extensions: /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626/ running: find "/root/tmp/pear-build-root/install-memcache-2.2.6" | xargs ls -dils 361232 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6 361263 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr 361264 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib 361265 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php 361266 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions 361267 4 drwxr-xr-x 2 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626 361262 236 -rwxr-xr-x 1 root root 235575 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626/memcache.so Build process completed successfully Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/memcache.so' install ok: channel://pecl.php.net/memcache-2.2.6 Extension memcache enabled in php.ini The memcache.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626 I tried as well to install this extension "memcached 1.0.2 (PHP extension for interfacing with memcached via libmemcached library)" but it failed: downloading memcached-1.0.2.tgz ... Starting to download memcached-1.0.2.tgz (22,724 bytes) ........done: 22,724 bytes 4 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /root/tmp/pear-build-root/memcached-1.0.2 running: /root/tmp/pear/memcached/configure checking for egrep... grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... invalid configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking whether to enable memcached support... 
yes, shared checking for libmemcached... yes, shared checking whether to enable memcached session handler support... yes checking whether to enable memcached igbinary serializer support... no checking for ZLIB... yes, shared checking for zlib location... /usr checking for session includes... /usr/include/php checking for memcached session support... enabled checking for memcached igbinary support... disabled checking for libmemcached location... configure: error: memcached support requires libmemcached. Use --with-libmemcached-dir= to specify the prefix where libmemcached headers and library are located ERROR: `/root/tmp/pear/memcached/configure' failed The memcached.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626 Is there a kind soul out there that can solve this puzzle?


  • How to implement multi-source XSLT mapping in 11g BPEL

    - by [email protected]
In SOA 11g, you can create an XSLT mapper that uses multiple sources as its input. To implement a multi-source mapper, just follow the instructions below.

1. Drag and drop a Transform Activity onto a BPEL process.
2. Double-click the Transform Activity; the Transform dialog window appears.
3. Add source variables by clicking the Add icon and selecting the variable and the part of the variable as needed. You can select multiple input variables. The first variable represents the main XML input to the XSL mapping, while additional variables added here are defined in the XSL mapping as input parameters.
4. Select the target variable and its part if available.
5. Specify the mapper file name; the default file name is xsl/Transformation_%SEQ%.xsl, where %SEQ% represents the sequence number of the mapper.
6. Click OK; the xsl file will be opened in graphical mode. You can map the sources to the target as usual.

Open the mapper source code and you will notice that the variable representing the additional source payload is defined as an input parameter in the map source spec and in the body:

<mapSources>
  <source type="XSD">
    <schema location="../xsd/po.xsd"/>
    <rootElement name="PurchaseOrder" namespace="http://www.oracle.com/pcbpel/po"/>
  </source>
  <source type="XSD">
    <schema location="../xsd/customer.xsd"/>
    <rootElement name="Customer" namespace="http://www.oracle.com/pcbpel/Customer"/>
    <param name="v_customer" />
  </source>
</mapSources>
...
<xsl:param name="v_customer"/>

Let's take a look at the BPEL source code used to execute the XSLT mapper:

<assign name="Transform_1">
  <bpelx:annotation>
    <bpelx:pattern>transformation</bpelx:pattern>
  </bpelx:annotation>
  <copy>
    <from expression="ora:doXSLTransformForDoc('xsl/Transformation_1.xsl',bpws:getVariableData('v_po'),'v_customer',bpws:getVariableData('v_customer'))"/>
    <to variable="v_invoice"/>
  </copy>
</assign>

You will see that BPEL uses the ora:doXSLTransformForDoc XPath function to execute the XSLT mapper. This function returns the result of the XSLT transformation when the XSLT template matches the document. The signature of this function is ora:doXSLTransformForDoc(template, input, [paramQName, paramValue]*), where:

- template is the XSLT mapper name
- input is the string representation of the XML input
- paramQName is the parameter defined in the XSLT mapper as the additional source
- paramValue is the additional source payload

You can add more sources to the mapper at a later stage, but you then have to modify the ora:doXSLTransformForDoc call in the BPEL source code and make sure it passes the correct parameter/value pairs reflecting the changes in the XSLT mapper. So the best practices are:

- Create the variables before creating the mapping file, so that you can add multiple sources when you define the transformation in the first place, which is more straightforward than adding them later on.
- Review the ora:doXSLTransformForDoc code in the BPEL source and make sure it passes the correct parameters to the mapper.
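As an illustration of that last point, a call with two additional sources would simply repeat the name/value pair; the second source variable v_supplier below is a hypothetical example, not part of the original mapping:

<!-- hypothetical sketch: v_supplier is an invented additional source -->
<from expression="ora:doXSLTransformForDoc('xsl/Transformation_1.xsl', bpws:getVariableData('v_po'), 'v_customer', bpws:getVariableData('v_customer'), 'v_supplier', bpws:getVariableData('v_supplier'))"/>

Each additional source therefore contributes one <param> element in the mapper and one paramQName/paramValue pair in the call.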


  • SQL SERVER – Fix : Error : 3117 : The log or differential backup cannot be restored because no files

    - by pinaldave
I received the following email from one of my readers.

Dear Pinal, I am new to SQL Server and our regular DBA is on vacation. Our production database had some problem and I have just restored the full database backup to the production server. When I try to apply the log backup I am getting the following error. I am sure this is a valid log backup file. Screenshot is attached. [Few other details regarding server/IP address removed]

Msg 3117, Level 16, State 1, Line 1
The log or differential backup cannot be restored because no files are ready to roll forward.
Msg 3013, Level 16, State 1, Line 1
RESTORE LOG is terminating abnormally.

Screenshot attached. [Removed as it contained live IP address] Please help immediately.

Well, I answered this question in an earlier post, two years ago, over here: SQL SERVER – Fix : Error : Msg 3117, Level 16, State 4 The log or differential backup cannot be restored because no files are ready to rollforward. However, I will try to explain it a little more this time.

For a SQL Server database to be used, it should be in the online state. There are multiple states of a SQL Server database:

- ONLINE (Available – online for data)
- OFFLINE
- RESTORING
- RECOVERING
- RECOVERY PENDING
- SUSPECT
- EMERGENCY (Limited Availability)

If the database is online, it means it is active and in operational mode. It does not make sense to apply further logs from backup if operations have continued on this database. The common practice during the backup restore process is to specify the keyword RECOVERY when the database is restored. When the RECOVERY keyword is specified, SQL Server brings the database back online and will not accept any further log backups.

However, if you want to restore more than one backup file, i.e. after restoring the full backup you want to apply a further differential or log backup, you cannot do that while the database is online and already active. You need to have your database in a state where it can accept further backup data and not online data requests. If SQL Server were online and also accepting backup files, there could be data inconsistency. This is the reason that when there is more than one backup file to be restored, one has to restore the database with the NORECOVERY keyword in the RESTORE operation.

I suggest you all read one more post written by me earlier, in which I explained the timeline with an image and graphic: SQL SERVER – Backup Timeline and Understanding of Database Restore Process in Full Recovery Model.

Sample code for reference:

RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\AdventureWorksFull.bak'
WITH NORECOVERY;

RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\AdventureWorksDiff.bak'
WITH RECOVERY;

In this post, I am not trying to cover complete backup and recovery. I am just attempting to address one type of error and its resolution. Please test these scenarios on a development server. Playing with live database backup and recovery is always very crucial and needs to be properly planned. Leave a comment here if you need help with this subject.

Similar Post: SQL SERVER – Restore Sequence and Understanding NORECOVERY and RECOVERY

Note: We will cover Standby Server maintenance and recovery in another blog post; it is intentionally not covered in this post.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB /boot ext4 8GB swap 3TB / ext4 But I've also tried the following, just in case it matters: 8GB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue> Note: I have two DVD burners in the system, hence the duplicate line above. Note: I install and boot 64-bit ubuntu v12.04 on both of my 250GB in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas? 
========== motherboard == gigabyte 990FXA-UD7 CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz RAM == 8GB of DDR3 in 2 sticks (matched pair) HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS) HDD == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years) HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS) HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS) GPU == nvidia GTX-285 ??? == no overclocking or other funky business USB == external seagate 2TB HDD for making backups DVD == one bluray burner (SATA) DVD == one DVD burner (SATA) 64-bit ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.

    Read the article

  • JMX Based Monitoring - Part Three - Web App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I showed a technique for integrating a JMX console with Oracle WebLogic which is a standard feature of Oracle WebLogic 11g. Customers on other Web Application servers and other versions of Oracle WebLogic can refer to the documentation provided with the server to do a similar thing. In this blog entry I am going to discuss a new feature that is only present in Oracle Utilities Application Framework 4 and above that allows JMX to be used for management and monitoring of the Oracle Utilities Web Applications. In this case JMX can be used to perform monitoring as well as to manage the cache. In Oracle Utilities Application Framework you can enable Web Application Server JMX monitoring that is unique to the framework by specifying a JMX port number in the RMI Port number for JMX Web setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. Once this information is supplied a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility: spl.properties - contains the JMX URL, the security configuration and the MBeans that are enabled. For example, on my demonstration machine: spl.runtime.management.rmi.port=6740 spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector jmx.remote.x.password.file=scripts/ouaf.jmx.password.file jmx.remote.x.access.file=scripts/ouaf.jmx.access.file ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled ouaf.jmx.* files - contain the userid and password. The default setup uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see the JMX Site. Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility the JMX facility can be used. For illustrative purposes, I will use jconsole but any JSR160 compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or, for a remote connection, ensure java is in your path and jconsole.jar is in your classpath) you specify the URL from the spl.runtime.management.connector.url.default entry and the credentials you specified in the jmx.remote.x.* files. Remember these are encrypted by default, so if you try and view the file you may not be able to decipher it visually. For example: There are three MBeans available to you: flushBean - This is a JMX replacement for the jsp versions of the flush utilities provided in previous releases of the Oracle Utilities Application Framework. You can manage the cache using the provided operations from JMX. The jsp versions of the flush utilities are still provided, for backward compatibility, but now are authorization controlled. JVMInfo - This is a JMX replacement for the jsp version of the JVMInfo screen used by support to get a handle on JVM information. This information is environmental, not operational, and is used for support purposes. The jsp versions of the JVMInfo utilities are still provided, for backward compatibility, but now are also authorization controlled. JVMSystem - This is an implementation of the Java system MXBeans for use in monitoring. 
We provide our own implementation of the base Mbeans to save on creating another JMX configuration for internal monitoring and to provide a consistent interface across platforms for the MXBeans. This Mbean is disabled by default and can be enabled using the enableJVMSystemBeans operation. This Mbean allows for the monitoring of the ClassLoading, Memory, OperatingSystem, Runtime and the Thread MX beans. Refer to the Server Administration Guides provided with your product and the Technical Best Practices Whitepaper for information about individual statistics. The Web Application Server JMX monitoring allows greater visibility for monitoring and management of the Oracle Utilities Application Framework application from jconsole or any JSR160 compliant JMX browser or JMX console.

    Read the article

  • C# Proposal: Compile Time Static Checking Of Dynamic Objects

    - by Paulo Morgado
    C# 4.0 introduces a new type: dynamic. dynamic is a static type that bypasses static type checking. This new type comes in very handy to work with: The new languages from the dynamic language runtime. HTML Document Object Model (DOM). COM objects. Duck typing … Because static type checking is bypassed, this: dynamic dynamicValue = GetValue(); dynamicValue.Method(); is equivalent to this: object objectValue = GetValue(); objectValue.GetType() .InvokeMember( "Method", BindingFlags.InvokeMethod, null, objectValue, null); Apart from caching the call site behind the scenes and some dynamic resolution, dynamic only looks better. Any typing error will only be caught at run time. In fact, if I’m writing the code, I know the contract of what I’m calling. Wouldn’t it be nice to have the compiler do some static type checking on the interactions with these dynamic objects? Imagine that the dynamic object that I’m retrieving from the GetValue method, besides the parameterless method Method, also has a string read-only Property property. This means that, from the point of view of the code I’m writing, the contract that the dynamic object returned by GetValue implements is: string Property { get; } void Method(); Since it’s a well-defined contract, I could write an interface to represent it: interface IValue { string Property { get; } void Method(); } If dynamic allowed specifying the contract in the form of dynamic(contract), I could write this: dynamic(IValue) dynamicValue = GetValue(); dynamicValue.Method(); This doesn’t mean that the value returned by GetValue has to implement the IValue interface. It just enables the compiler to verify that dynamicValue.Method() is a valid use of dynamicValue and dynamicValue.OtherMethod() isn’t. If the IValue interface already existed for any other reason, this would be fine. But having a type added to an assembly just for compile-time usage doesn’t seem right. So, dynamic could be another type construct. Something like this: dynamic DValue { string Property { get; } void Method(); } The code could now be written like this: DValue dynamicValue = GetValue(); dynamicValue.Method(); The compiler would never generate any IL or metadata for this new type construct. It would only be used for compile-time static checking of dynamic objects. As a consequence, it makes no sense to have public accessibility, so it would not be allowed. Once again, if the IValue interface (or any other type definition) already exists, it can be used in the dynamic type definition: dynamic DValue : IValue, IEnumerable, SomeClass { string Property { get; } void Method(); } Another added benefit would be IntelliSense. I’ve been getting mixed reactions to this proposal. What do you think? Would this be useful?
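To see the difference the proposal is getting at, here is a small self-contained C# sketch (the GetValue method and the member names are hypothetical, mirroring the ones used above): the interface-typed variable catches a misspelled member at compile time, while the dynamic one only fails when the call site actually executes.

    using System;
    using Microsoft.CSharp.RuntimeBinder;

    interface IValue
    {
        string Property { get; }
        void Method();
    }

    class Value : IValue
    {
        public string Property { get { return "some value"; } }
        public void Method() { Console.WriteLine("Method called"); }
    }

    class Program
    {
        // Hypothetical factory returning a dynamic object, as in the text above.
        static dynamic GetValue() { return new Value(); }

        static void Main()
        {
            dynamic dynamicValue = GetValue();
            dynamicValue.Method();             // resolved at run time - works

            try
            {
                dynamicValue.OtherMethod();    // compiles fine, fails only when executed
            }
            catch (RuntimeBinderException ex)
            {
                Console.WriteLine("Caught at run time: " + ex.Message);
            }

            IValue typedValue = GetValue();    // the dynamic result converts to the interface
            typedValue.Method();               // checked at compile time
            // typedValue.OtherMethod();       // would not compile - roughly what dynamic(IValue) would buy us
        }
    }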

    Read the article

  • Parallelism in .NET – Part 19, TaskContinuationOptions

    - by Reed
    My introduction to Task continuations demonstrates continuations on the Task class.  In addition, I’ve shown how continuations allow handling of multiple tasks in a clean, concise manner.  Continuations can also be used to handle exceptional situations using a clean, simple syntax. In addition to standard Task continuations, the Task class provides some options for filtering continuations automatically.  This is handled via the TaskContinuationOptions enumeration, which provides hints to the TaskScheduler that it should only continue based on the operation of the antecedent task. This is especially useful when dealing with exceptions.  For example, we can extend the sample from our earlier continuation discussion to include support for handling exceptions thrown by the Factorize method: // Get a copy of the UI-thread task scheduler up front to use later var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext(); // Start our task var factorize = Task.Factory.StartNew( () => { int primeFactor1 = 0; int primeFactor2 = 0; bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2); return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 }; }); // When we succeed, report the results to the UI factorize.ContinueWith(task => textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]", task.Result.Factor1, task.Result.Factor2, task.Result.Result), CancellationToken.None, TaskContinuationOptions.NotOnFaulted, uiScheduler); // When we have an exception, report it factorize.ContinueWith(task => textBox1.Text = string.Format("Error: {0}", task.Exception.Message), CancellationToken.None, TaskContinuationOptions.OnlyOnFaulted, uiScheduler); The above code works by using a combination of features.  First, we schedule our task, the same way as in the previous example.  However, in this case, we use a different overload of Task.ContinueWith which allows us to specify both a specific TaskScheduler (in order to have your continuation run on the UI’s synchronization context) as well as a TaskContinuationOption.  In the first continuation, we tell the continuation that we only want it to run when there was not an exception by specifying TaskContinuationOptions.NotOnFaulted.  When our factorize task completes successfully, this continuation will automatically run on the UI thread, and provide the appropriate feedback. However, if the factorize task has an exception – for example, if the Factorize method throws an exception due to an improper input value, the second continuation will run.  This occurs due to the specification of TaskContinuationOptions.OnlyOnFaulted in the options.  In this case, we’ll report the error received to the user. We can use TaskContinuationOptions to filter our continuations by whether or not an exception occurred and whether or not a task was cancelled.  
This allows us to handle many situations, and is especially useful when trying to maintain a valid application state without ever blocking the user interface.  The same concepts can be extended even further, and allow you to chain together many tasks based on the success of the previous ones.  Continuations can even be used to create a state machine with full error handling, all without blocking the user interface thread.
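As a rough sketch of the cancellation filter mentioned above, a third continuation could be attached with TaskContinuationOptions.OnlyOnCanceled. This fragment assumes the factorize task, uiScheduler, and textBox1 from the sample earlier in this post and is only illustrative:

    // Runs only if the antecedent task was cancelled (e.g. via a CancellationToken).
    factorize.ContinueWith(
        task => textBox1.Text = "The factorization was cancelled.",
        CancellationToken.None,
        TaskContinuationOptions.OnlyOnCanceled,
        uiScheduler);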

    Read the article

  • Enabling OUD Entry Cache for large static groups

    - by Sylvain Duloutre
    Oracle Unified Directory can take advantage of several caches to improve performance, especially the so-called database cache and the file system cache. In addition to that, it is possible to use an entry cache to cache LDAP entries. By default, the entry cache is not used. In specific deployments involving large static groups, it may be worth loading the group entries into the entry cache to speed up group membership and group-based ACI evaluation. To do so, run the following commands: First, specify which entries should reside in the entry cache. In the command below, only entries matching the LDAP filter " (|(objectclass=groupOfNames)(objectclass=groupOfUniqueNames)) " will be stored in the entry cache. dsconfig set-entry-cache-prop \ --cache-name FIFO \ --add include-filter:\(\|\(objectclass=groupOfNames\)\(objectclass=groupOfUniqueNames\)\) \ --port <ADMIN_PORT> \ --bindDN cn=Directory\ Manager \ --bindPassword ****** \ --no-prompt Then enable the entry cache: dsconfig set-entry-cache-prop \ --cache-name FIFO \ --set enabled:true \ --port <ADMIN_PORT> \ --bindDN cn=Directory\ Manager \ --bindPassword ****** \ --no-prompt In addition to that, you can control how much memory the entry cache can use: oud@s96sec1d0-v3:/application/oud : dsconfig -X -n -p <ADMIN PORT> -D "cn=Directory Manager" -w <password> get-entry-cache-prop --cache-name FIFO Property : Value(s) -------------------:----------------------------------------------------------- cache-level : 1 enabled : true exclude-filter : - include-filter : (|(objectclass=groupOfNames)(objectclass=groupOfUniqueNames)) max-entries : 2147483647 max-memory-percent : 90 You can change the max-entries and max-memory-percent properties to control the entry cache size using the dsconfig set-entry-cache-prop command.

    Read the article

  • Subterranean IL: Pseudo custom attributes

    - by Simon Cooper
    Custom attributes were designed to make the .NET framework extensible; if a .NET language needs to store additional metadata on an item that isn't expressible in IL, then an attribute could be applied to the IL item to represent this metadata. For instance, the C# compiler uses DecimalConstantAttribute and DateTimeConstantAttribute to represent compile-time decimal or datetime constants, which aren't allowed in pure IL, and FixedBufferAttribute to represent fixed struct fields. How attributes are compiled Within a .NET assembly are a series of tables containing all the metadata for items within the assembly; for instance, the TypeDef table stores metadata on all the types in the assembly, and MethodDef does the same for all the methods and constructors. Custom attribute information is stored in the CustomAttribute table, which has references to the IL item the attribute is applied to, the constructor used (which implies the type of attribute applied), and a binary blob representing the arguments and name/value pairs used in the attribute application. For example, the following C# class: [Obsolete("Please use MyClass2", true)] public class MyClass { // ... } corresponds to the following IL class definition: .class public MyClass { .custom instance void [mscorlib]System.ObsoleteAttribute::.ctor(string, bool) = { string('Please use MyClass2' bool(true) } // ... } and results in the following entry in the CustomAttribute table: TypeDef(MyClass) MemberRef(ObsoleteAttribute::.ctor(string, bool)) blob -> {string('Please use MyClass2' bool(true)} However, there are some attributes that don't compile in this way. Pseudo custom attributes Just like there are some concepts in a language that can't be represented in IL, there are some concepts in IL that can't be represented in a language. This is where pseudo custom attributes come into play. The most obvious of these is SerializableAttribute. Although it looks like an attribute, it doesn't compile to a CustomAttribute table entry; it instead sets the serializable bit directly within the TypeDef entry for the type. This flag is fully expressible within IL; this C#: [Serializable] public class MySerializableClass {} compiles to this IL: .class public serializable MySerializableClass {} For those interested, a full list of pseudo custom attributes is available here. For the rest of this post, I'll be concentrating on the ones that deal with P/Invoke. P/Invoke attributes P/Invoke is built right into the CLR at quite a deep level; there are 2 metadata tables within an assembly dedicated solely to p/invoke interop, and many more that affect it. Furthermore, all the attributes used to specify p/invoke methods in C# or VB have their own keywords and syntax within IL. For example, the following C# method declaration: [DllImport("mscorsn.dll", SetLastError = true)] [return: MarshalAs(UnmanagedType.U1)] private static extern bool StrongNameSignatureVerificationEx( [MarshalAs(UnmanagedType.LPWStr)] string wszFilePath, [MarshalAs(UnmanagedType.U1)] bool fForceVerification, [MarshalAs(UnmanagedType.U1)] ref bool pfWasVerified); compiles to the following IL definition: .method private static pinvokeimpl("mscorsn.dll" lasterr winapi) bool marshal(unsigned int8) StrongNameSignatureVerificationEx( string marshal(lpwstr) wszFilePath, bool marshal(unsigned int8) fForceVerification, bool& marshal(unsigned int8) pfWasVerified) cil managed preservesig {} As you can see, all the p/invoke and marshal properties are specified directly in IL, rather than using attributes. 
And, rather than creating entries in CustomAttribute, a whole bunch of metadata is emitted to represent this information. This single method declaration results in the following metadata being output to the assembly: A MethodDef entry containing basic information on the method Four ParamDef entries for the 3 method parameters and return type An entry in ModuleRef to mscorsn.dll An entry in ImplMap linking ModuleRef and MethodDef, along with the name of the function to import and the pinvoke options (lasterr winapi) Four FieldMarshal entries containing the marshal information for each parameter. Phew! Applying attributes Most of the time, when you apply an attribute to an element, an entry in the CustomAttribute table will be created to represent that application. However, some attributes represent concepts in IL that aren't expressible in the language you're coding in, and can instead result in a single bit change (SerializableAttribute and NonSerializedAttribute), or many extra metadata table entries (the p/invoke attributes) being emitted to the output assembly.
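One way to see the difference from managed code is through reflection. The short C# sketch below is only illustrative: the serializable bit set by [Serializable] shows up in the type's TypeAttributes flags (i.e. on its TypeDef entry) rather than in the CustomAttribute table, whereas ObsoleteAttribute is a genuine custom attribute and is visible through CustomAttributeData. The error flag from the Obsolete example above is left off here so the sketch compiles without errors.

    using System;
    using System.Reflection;

    #pragma warning disable 618 // we reference the obsolete type on purpose

    [Serializable]
    public class MySerializableClass { }

    [Obsolete("Please use MyClass2")]
    public class MyClass { }

    class Program
    {
        static void Main()
        {
            // The pseudo custom attribute is just a flag on the TypeDef entry...
            Console.WriteLine(typeof(MySerializableClass).Attributes & TypeAttributes.Serializable); // Serializable
            Console.WriteLine(typeof(MySerializableClass).IsSerializable);                           // True

            // ...whereas ObsoleteAttribute is stored in the CustomAttribute table
            // and shows up when that table is read via CustomAttributeData.
            foreach (CustomAttributeData data in CustomAttributeData.GetCustomAttributes(typeof(MyClass)))
            {
                Console.WriteLine(data);
            }
        }
    }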

    Read the article

  • Optional Parameters and Named Arguments in C# 4 (and a cool scenario w/ ASP.NET MVC 2)

    - by ScottGu
    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is the seventeenth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release. Today’s post covers two new language features being added to C# 4.0 – optional parameters and named arguments – as well as a cool way you can take advantage of optional parameters (both in VB and C#) with ASP.NET MVC 2. Optional Parameters in C# 4.0 C# 4.0 now supports using optional parameters with methods, constructors, and indexers (note: VB has supported optional parameters for a while). Parameters are optional when a default value is specified as part of a declaration.  For example, the method below takes two parameters – a “category” string parameter, and a “pageIndex” integer parameter.  The “pageIndex” parameter has a default value of 0, and as such is an optional parameter: When calling the above method we can explicitly pass two parameters to it: Or we can omit passing the second optional parameter – in which case the default value of 0 will be passed:   Note that VS 2010’s Intellisense indicates when a parameter is optional, as well as what its default value is when statement completion is displayed: Named Arguments and Optional Parameters in C# 4.0 C# 4.0 also now supports the concept of “named arguments”.  This allows you to explicitly name an argument you are passing to a method – instead of just identifying it by argument position.  For example, I could write the code below to explicitly identify the second argument passed to the GetProductsByCategory method by name (making its usage a little more explicit): Named arguments come in very useful when a method supports multiple optional parameters, and you want to specify which arguments you are passing.  For example, below we have a method DoSomething that takes two optional parameters: We could use named arguments to call the above method in any of the below ways: Because both parameters are optional, in cases where only one (or zero) parameters is specified then the default value for any non-specified arguments is passed. ASP.NET MVC 2 and Optional Parameters One nice usage scenario where we can now take advantage of the optional parameter support of VB and C# is with ASP.NET MVC 2’s input binding support to Action methods on Controller classes. For example, consider a scenario where we want to map URLs like “Products/Browse/Beverages” or “Products/Browse/Desserts” to a controller action method.  We could do this by writing a URL routing rule that maps the URLs to a method like so: We could then optionally use a “page” querystring value to indicate whether or not the results displayed by the Browse method should be paged – and if so which page of the results should be displayed.  For example: /Products/Browse/Beverages?page=2. With ASP.NET MVC 1 you would typically handle this scenario by adding a “page” parameter to the action method and make it a nullable int (which means it will be null if the “page” querystring value is not present).  You could then write code like below to convert the nullable int to an int – and assign it a default value if it was not present in the querystring: With ASP.NET MVC 2 you can now take advantage of the optional parameter support in VB and C# to express this behavior more concisely and clearly.  Simply declare the action method parameter as an optional parameter with a default value (shown in the original post in both C# and VB). If the “page” value is present in the querystring (e.g. 
/Products/Browse/Beverages?page=22) then it will be passed to the action method as an integer.  If the “page” value is not in the querystring (e.g. /Products/Browse/Beverages) then the default value of 0 will be passed to the action method.  This makes the code a little more concise and readable. Summary There are a bunch of great new language features coming to both C# and VB with VS 2010.  The above two features (optional parameters and named parameters) are but two of them.  I’ll blog about more in the weeks and months ahead. If you are looking for a good book that summarizes all the language features in C# (including C# 4.0), as well provides a nice summary of the core .NET class libraries, you might also want to check out the newly released C# 4.0 in a Nutshell book from O’Reilly: It does a very nice job of packing a lot of content in an easy to search and find samples format. Hope this helps, Scott
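The code in this post was originally shown as screenshots, so the snippets themselves do not appear in the excerpt above. The following C# sketch is a hedged reconstruction of the kind of signatures being described; the method, parameter, and controller names follow the prose but are otherwise illustrative.

    using System;
    using System.Collections.Generic;

    class ProductsService
    {
        // "pageIndex" has a default value, so it is an optional parameter.
        public List<string> GetProductsByCategory(string category, int pageIndex = 0)
        {
            Console.WriteLine("category={0}, pageIndex={1}", category, pageIndex);
            return new List<string>();
        }

        // A method with two optional parameters.
        public void DoSomething(string name = "default", int size = 0)
        {
            Console.WriteLine("name={0}, size={1}", name, size);
        }
    }

    class Program
    {
        static void Main()
        {
            var service = new ProductsService();

            service.GetProductsByCategory("Beverages", 2);             // both arguments passed explicitly
            service.GetProductsByCategory("Beverages");                // pageIndex defaults to 0
            service.GetProductsByCategory("Beverages", pageIndex: 2);  // named argument

            service.DoSomething(size: 10);                             // "name" falls back to its default
        }
    }

    // The ASP.NET MVC 2 scenario: an optional "page" parameter on an action method
    // (left commented out so the sketch stays self-contained without an MVC reference).
    //
    // public class ProductsController : Controller
    // {
    //     public ActionResult Browse(string category, int page = 0)
    //     {
    //         // page is 0 unless a ?page= querystring value is supplied
    //         return View();
    //     }
    // }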

    Read the article

  • Adding Blog to Your Orchard Website

    - by hajan
    One of the common features in today’s content management systems is to provide you the ability to create your own blog in your website. Also, having a blog is one of the very often needed features for various types of websites. Out of the box, Orchard gives you this, so you can create your own blog in your Orchard website on a pretty easy way. Besides the fact that you can very easily create your own blog, Orchard also gives you some extra features in relation with the support of blogging, such as connecting third-party client applications (e.g. Windows Live Writer) to your blog, so that you can publish blog posts remotely. You can already find all the information provided in this blog post on the http://orchardproject.net website, however I thought it would be nice to make summary in one blog post. I assume you have already installed Orchard and you are already familiar with its environment and administration dashboard. If you haven’t, please read this blog post first.   CREATE YOUR BLOG First of all, go to Orchard Administration Dashboard and click on Blog in the left menu Once you are there, you will see the following screen   Fill the form with all needed data, as in the following example and click Save Right after, you should see the following screen Click New post, and add your first post. After that, go to Homepage (click Your Site in the top-left corner) and you should see the Blog link in your menu After clicking on Blog, you will be directed to the following page Once you click on My First Post, you will see that your blog already supports commenting ability (you can enable/disable this from Administration dashboard in your blog settings) Added comment Adding new comment Submit comment So, with following these steps, you have already setup your blog in your Orchard website.   CONNECT YOUR BLOG WITH WINDOWS LIVE WRITER Since many bloggers prepare their blog posts using third-party client applications, like Windows Live Writer, its very useful if your blog engine has the ability to work with these third-party applications and enable them to make remote posting and publishing. The client applications use XmlRpc interface in order to have the ability to manage and publish the blogs remotely. What is great about Orchard is that it gives you out of the box the XmlRpc and Remote Publishing modules. What you only need to do is to enable these features from the Modules in your Orchard Administration Dashboard. So, lets go through the steps of enabling and making your previously created blog able to work with third-party client applications for blogging. 1. Go to Administration Dashboard and click the Modules After clicking the Modules, you will see the following page: As you can see, you already have Remote Blog Publishing and XmlRpc features for Content Publishing, but both are disabled by default. So, if you click Enable only on Remote Blog Publishing, you will see both of them enabled at once since they are dependent features. After you click Enable, if everything is Ok, the following message should be displayed: So, now we have the featured enabled and ready... The next thing you need to do is to open Windows Live Writer. First, open Windows Live Writer and in your Blog Accounts, click on Add blog account In the next window, chose Other services After that, click on your Blog link in the Orchard website and copy the URL, my URL (on localhost development server) is: http://localhost:8191/blog Then, add your login credentials you use to login in Orchard and click Next. 
After that, if you have setup everything successfully, the Windows Live Writer will do the rest Once it finishes, you will have window where you can specify the name of your blog you have just connected your Windows Live Writer to... Then... you are done. You can see Windows Live Writer has detected the Orchard theme I am using After you finish with the blog post, click on Publish and refresh the Blog page in your Orchard website You see, we have the blog post directly posted from Windows Live Writer to my Orchard Blog. I hope this was useful blog post. Regards, Hajan Reference and other useful posts: Build incredible content-driven websites using Orchard CMS Create blog on your site with Orchard CMS Blogging using Windows Live Writer in your Orchard CMS Blog Orchard Website

    Read the article

  • JavaScript – Content delivery networks (CDN) can bite you in the butt.

    - by Ryan Ternier
    As much as I love the new CDN’s that Google, Microsoft and a few others have publically released, there are some strong gotchas that could come up and bite you in the ass if you’re not careful. But before we jump into that, for those that are not 100% sure what a CDN is (besides Canadian).   Content Delivery Network. A way of distributing your static content across various servers in different physical locations.  Because this static content is stored on many servers around the world, whenever a user needs to access this content, they are given the closest server to their location for this data. Already you can probably see the immediate bonuses to a system like this: Lower bandwidth Even small script files downloaded thousands of times will start to take a noticeable hit on your bandwidth meter. Less connections/hits to your web server which gives better latency If you manage many servers, you don’t need to manually update each server with scripts. A user will download a script for each website they visit. If a user is redirected to many domains/sub-domains within your web site, they might download many copies of the same file. When a system sees multiple requests from the same  domain, they will ignore the download   Those are just a handful of the many bonuses a CDN will give you. And for the average website, a CDN is great choice. Check out the following CDN links for their solutions: Google AJAX Library: http://code.google.com/apis/ajaxlibs/ Microsoft Ajax library: http://www.asp.net/ajaxlibrary/cdn.ashx The Gotcha There is always a catch. Here are some issues I found with using CDN’s that hopefully can help you make your decision. HTTP / HTTPS If you are running a website behind SSL, make sure that when you reference your CDN data that you use https:// vs. http://. If you forget this users will get a very nice message telling them that their secure connection is trying to access unsecure data. For a developer this is fairly simple, but general users will get a bit anxious when seeing this. Trusted Sites Internet Explorer has this really nifty feature that allows users to specify what sites they trust, and by some defaults IE7 only allows trusted sites to be viewed.  No problem, they set your website as trusted. But what about your CDN? If a user sets your websites to trusted, but not the CDN, they will not download those static files. This has the potential to totally break your web site. Pedantic Network Admins This alone is sometimes the killer of projects. However, always be careful when you are going to use a CDN for a professional project. If a network / security admin sees that you’re referencing an outside source, or that a call from a website might hit an outside domain.. panties will be bunched, emails will be spewed out and well, no one wants that.

    Read the article

  • Understanding Collabnet&rsquo;s LDAP binding

    - by Robert May
    We want to use both subversion usernames and passwords as well as Active Directory for our authentication on our Collabnet subversion server. This has proven to be more of a challenge than we thought, mostly because Collabnet’s documentation is pretty poor. To supplement that documentation, I add my own. The first thing to understand is that the attribute that you specify in the LDAP Login Attribute ONLY applies to lookups done for the user.  It does NOT apply to the LDAP Bind DN field.  Second, know that the debug logs (error is the one you want) don’t give you debug information for the bind DN, just the login attempts.  Third, by default, Active Directory does not allow anonymous binds, so you MUST put in a user that has the authority to query the Active Directory ldap. Because of these items, the values to set in those fields can be somewhat confusing.  You’ll want to have ADSI Edit handy (I also used ldp, which is installed by default on server 2008), since ADSI Edit can help you find stuff in your active directory.  Be careful, you can also break stuff. Here’s what should go into those fields. LDAP Security Level:  Should be set to None LDAP Server Host:  Should be set to the full name of a domain controller in your domain.  For example, dc.mydomain.com LDAP Server Port:  Should be set to 3268.  The default port of 389 will only query that specific server, not the global catalog.  By setting it to 3268, the global catalog will be queried, which is probably what you want. LDAP Base DN:  Should be set to the location where you want the search for users to begin.  By default, the search scope is set to sub, so all child organizational units below this setting will be searched.  In my case, I had created an OU specifically for users for group policies.  My value ended up being:  OU=MyOu,DC=domain,DC=org.   However, if you’re pointing it to the default Users folder, you may end up with something like CN=Users,DC=domain,DC=org (or com or whatever).  Again, use ADSI edit and use the Distinguished Name that it shows. LDAP Bind DN:  This needs to be the Distinguished Name of the user that you’re going to use for binding (i.e. the user you’ll be impersonating) for doing queries.  In my case, it ended up being CN=svn svn,OU=MyOu,DC=domain,DC=org.  Why the double svn, you might ask?  That’s because the first and last name fields are set to svn and by default, the distinguished name is the first and last name fields!  That’s important.  Its NOT the username or account name!  Again, use ADSI edit, browse to the username you want to use, right click and select properties, and then search the attributes for the Distinguished Name.  Once you’ve found that, select it and click View and you can copy and paste that into this field. LDAP Bind Password:  This is the password for the account in the Bind DN LDAP login Attribute: sAMAccountName.  If you leave this blank, uid is used, which may not even be set.  This tells it to use the Account Name field that’s defined under the account tab for users in Active Directory Users and Computers.  Note that this attribute DOES NOT APPLY to the LDAP Bind DN.  You must use the full distinguished name of the bind DN.  This attribute allows users to type their username and password for authentication, rather than typing their distinguished name, which they probably don’t know. LDAP Search Scope:  Probably should stay at sub, but could be different depending on your situation. LDAP Filter:  I left mine blank, but you could provide one to limit what you want to see.  
LDP would be helpful for determining what this is. LDAP Server Certificate Verification:  I left it checked, but didn’t try it without it being checked. Hopefully, this will save some others pain when trying to get Collabnet setup. Technorati Tags: Subversion,collabnet

    Read the article

  • Using the ASP.NET Membership API with SQL Server / SQL Azure: The new &ldquo;System.Web.Providers&rdquo; namespace

    - by Harish Ranganathan
    The Membership API came in .NET 2.0 and was a huge enhancement in building web applications with users, managing roles, permissions etc.,  The Membership API by default uses SQL Express and until Visual Studio 2008, it was available only through the ASP.NET Configuration manager screen (Website – ASP.NET Configuration) or (Project – ASP.NET Configuration) and for every application, one has to manually visit this place to start using the Security and other settings.  Upon doing that the default SQL Express database aspnet.mdf is created to store all the user profiles. Starting Visual Studio 2010 and .NET 4.0, the Default Website template includes the Membership API controls as a part of the page i.e. When you create a “File – New – ASP.NET Web Application” or an “ASP.NET MVC Application”, by default the Login/Register controls are enabled in the MasterPage and they are termed under “ApplicationServices” setting in the web.config file with connection string pointed to the SQL Express database. In fact, when you run the default website and click on “Logon” –> “Register”, and enter the details for registration and click “Register”, that is the time the aspnet.mdf file is created with the tables for Users, Roles, UsersInRoles, Profile etc., Now, this uses the default SQL Express database within the App_Data folder.  If you want to move your Membership information to some other database such as SQL Server, SQL CE or SQL Azure, you need to manually run the aspnet_regsql command and specify the destination database name. This would create all the Tables, Procedures and Views required to handle the Membership information.  Thereafter you can change the connection string for “ApplicationServices” to point to the database where you had run all the scripts. Now, enter “System.Web.Providers” Alpha. This is available as a part of the NuGet package library.  Scott Hanselman has a neat post describing the steps required to get it up and running as well as doing the basic changes  at http://www.hanselman.com/blog/IntroducingSystemWebProvidersASPNETUniversalProvidersForSessionMembershipRolesAndUserProfileOnSQLCompactAndSQLAzure.aspx Pretty much, it covers what the new System.Web.Providers do. One thing I wanted to clarify is that, the new “System.Web.Providers” add a lot of new settings which are also marked as the defaults, in the web.config.  Even now, they use SQL Express as the default database.  But, if you change the connection string for “DefaultConnection” under connectionStrings to point to your SQL Server or SQL Azure, Membership API would now be able to create all the tables, procedures and views at the destination specified (i.e. SQL Server or SQL Azure). In my case, I modified the DefaultConneciton to point to my SQL Azure database.  Next, I hit F5 to run the application.  The default view loads.  I clicked on “LogOn” and then “Register” since I knew there are no tables/users as of then.  One thing to note is that, I had put “NewDB” as the database name in the connection string that points to SQL Azure.  NewDB wasn’t existing and I would assume it would be created before the tables/views/procedures for Membership are created. Once I clicked on the “Register” to register my first username, it took a while and then registered as well as logged in me in.  
Also, I went to the SQL Azure Management Portal and verified that “NewDB” had just been created. I could also connect to the SQL Azure database “NewDB” from Management Studio and found that the tables now don’t have the aspnet_ prefix.  The tables were simply Users, Roles, UsersInRoles, Profiles, etc. So, with a few clicks and a configuration change, I could actually set up the user base for my application on SQL Azure and even have the SessionState, Roles and Profiles stored in the SQL Azure database. The new System.Web.Providers also require the MARS (MultipleActiveResultSets=true) setting since they use Entity Framework for the DAL operations.  Also, the “Project – ASP.NET Configuration” screen can be used to further create/manage users/roles etc., although the data is stored on the remote database. With that, a long pending request from the community is fulfilled: the ability to configure and use remote databases for application user management without having to run the aspnet_regsql scripts manually. Cheers !!!
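Once the DefaultConnection points at SQL Server or SQL Azure, the Membership and Roles APIs are used from code exactly as before and the providers take care of the remote database. The C# sketch below is only illustrative: the user name, password, and email are made up, and it assumes the membership and role providers from the package are configured and enabled in web.config.

    using System.Web.Security;

    public static class MembershipSketch
    {
        // Hypothetical helper, e.g. called from a Register action or page.
        public static void RegisterAndAssignRole()
        {
            MembershipCreateStatus status;
            MembershipUser user = Membership.CreateUser(
                "harish",               // user name (made up)
                "P@ssw0rd!",            // password (made up)
                "[email protected]",  // email (made up)
                null, null, true, null,
                out status);

            if (status == MembershipCreateStatus.Success)
            {
                if (!Roles.RoleExists("Administrators"))
                {
                    Roles.CreateRole("Administrators");
                }
                Roles.AddUserToRole(user.UserName, "Administrators");
            }
        }
    }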

    Read the article

  • Changing the action of a hyperlink in a Silverlight RichTextArea

    - by Marc Schluper
    The title of this post could also have been "Move over Hyperlink, here comes Actionlink" or "Creating interactive text in Silverlight." But alas, there can be only one. Hyperlinks are very useful. However, they are also limited because their action is fixed: browse to a URL. This may have been adequate at the start of the Internet, but nowadays, in web applications, the one thing we do not want to happen is a complete change of context. In applications we typically like a hyperlink selection to initiate an action that updates a part of the screen. For instance, if my application has a map displayed with some text next to it, the map would react to a selection of a hyperlink in the text, e.g. by zooming in on a location and displaying additional locational information in a popup. In this way, the text becomes interactive text. It is quite common that one company creates and maintains websites for many client companies. To keep maintenance cost low, it is important that the content of these websites can be updated by the client companies themselves, without the need to involve a software engineer. To accommodate this scenario, we want the author of the interactive text to configure all hyperlinks (without writing any code). In a Silverlight RichTextArea, the default action of a Hyperlink is the same as a traditional hyperlink, but it can be changed: if the Command property has a value then upon a click event this command is called with the value of the CommandParameter as parameter. How can we let the author of the text specify a command for each hyperlink in the text, and how can we let an application react properly to a hyperlink selection event? We are talking about any command here. Obviously, the application would recognize only a specific set of commands, with well defined parameters, but the approach we take here is generic in the sense that it pertains to the RichTextArea and any command. So what do we require? We wish that: As a text author, I can configure the action of a hyperlink in a (rich) text without writing code; As a text author, I can persist the action of a hyperlink with the text; As a reader of persisted text, I can click a hyperlink and the configured action will happen; As an application developer, I can configure a control to use my application specific commands. In an excellent introduction to the RichTextArea, John Papa shows (among other things) how to persist a text created using this control. To meet our requirements, we can create a subclass of RichTextArea that uses John's code and allows plugging in two command specific components: one to prompt for a command definition, and one to execute the command. Since both of these plugins are application specific, our RichTextArea subclass should not assume anything about them except their interface. public interface IDefineCommand { void Prompt(string content, // the link content Action<string, object> callback); // the method called to convey the link definition } public interface IPerformCommand : ICommand {} The IDefineCommand plugin receives the content of the link (the text visible to the reader) and displays some kind of control that allows the author to define the link. When that's done, this (possibly changed) content string is conveyed back to the RichTextArea, together with an object that defines the command to execute when the link is clicked by the reader of the published text. The IPerformCommand plugin simply implements System.Windows.Input.ICommand. Let's use MEF to load the proper plugins. 
In the example solution there is a project that contains rudimentary implementations of these. The IDefineCommand plugin simply prompts for a command string (cf. a command line or query string), and the IPerformCommand plugin displays a MessageBox showing this command string. An actual application using this extended RichTextArea would have its own set of commands, each having their own parameters, and hence would provide more user friendly application specific plugins. Nonetheless, in any case a command can be persisted as a string and hence the two interfaces defined above suffice. For a Visual Studio 2010 solution, see my article on The Code Project.
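For reference, a rudimentary IPerformCommand implementation along the lines of the MessageBox plugin mentioned above could look like the C# sketch below. This is not the article's actual plugin code; the MEF Export attribute assumes the host composes plugins by their contract type, and the interface is repeated so the sketch is self-contained.

    using System;
    using System.ComponentModel.Composition;
    using System.Windows;
    using System.Windows.Input;

    // The plugin contract from the post, repeated here for completeness.
    public interface IPerformCommand : ICommand { }

    [Export(typeof(IPerformCommand))]
    public class MessageBoxCommand : IPerformCommand
    {
        // Required by ICommand; never raised in this simple sketch.
        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            return parameter != null;
        }

        // The parameter is the command string persisted with the hyperlink.
        public void Execute(object parameter)
        {
            MessageBox.Show("Command: " + parameter);
        }
    }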

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function is used to return information about the fragmentation levels, page counts, depth, number of levels, record counts, etc. for the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters which are (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan level that you would like to run. Let’s use this function with our AdventureWorks2012 database to better illustrate the information it provides. SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, NULL) As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first couple of columns in the result set (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in our previous blog sessions so I will not go into detail about these at this time. The next column in the result set is the index_depth which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is the index_level which refers to what level (of the depth) a particular row is referring to. Next is probably one of the most beneficial columns in this result set, which is the avg_fragmentation_in_percent. This column shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do REORG’s or full REBUILD’s of a given index. The fragment_count represents the number of fragments in a leaf level while the avg_fragment_size_in_pages represents the number of pages in a fragment. The page_count column tells you how many pages are in a particular index level. From my result set above, you see that the remaining columns all have NULL values. This is because I did not specify a ‘mode’ in my query and as a result it used the ‘LIMITED’ mode by default. The LIMITED mode is meant to be lightweight so it does not collect information for every column in the result set. I will re-run my query using the ‘DETAILED’ mode and you will see we now have results for these columns. SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED')   From the remaining columns, you see we get even more detailed information such as how many records are in a particular index level (record_count). We have a column for ghost_record_count which represents the number of records that have been marked for deletion, but have not physically been removed by the background ghost cleanup process. We later see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and no longer fit within the row on the page and thus have to be moved. A forwarded record is left in the original location with a pointer to the new location. The last column in the result set is the compressed_page_count column which tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, based on the mode you select, it could be a very resource intensive function so be careful with how you use it. 
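As an illustration of the REORGANIZE-versus-REBUILD decision mentioned above, here is a small C# sketch that reads avg_fragmentation_in_percent in LIMITED mode and prints a suggested action per index. It is only a sketch: the connection string is hypothetical and the 5/30 percent thresholds are the commonly quoted guideline rather than anything mandated by the DMF.

    using System;
    using System.Data.SqlClient;

    class FragmentationReport
    {
        static void Main()
        {
            var connectionString = "Server=.;Database=AdventureWorks2012;Integrated Security=true";

            const string query = @"
                SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
                       i.[name]                     AS index_name,
                       ips.avg_fragmentation_in_percent
                FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
                JOIN sys.indexes AS i
                  ON i.[object_id] = ips.[object_id] AND i.index_id = ips.index_id
                WHERE ips.index_id > 0;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(query, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var fragmentation = reader.GetDouble(2);
                        // Common guideline: reorganize between 5% and 30%, rebuild above 30%.
                        var action = fragmentation > 30 ? "REBUILD"
                                   : fragmentation > 5  ? "REORGANIZE"
                                   : "leave alone";
                        Console.WriteLine("{0}.{1}: {2:F1}% -> {3}",
                            reader.GetString(0), reader.GetString(1), fragmentation, action);
                    }
                }
            }
        }
    }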
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • No more internet connection after update in 14.04 with Intel Dual Band Wireless AC 7260

    - by luis
    My Dell XPS 15 (haswell) was working fine until I stupidly accepted recently to apply Ubuntu updates. Since then, my wifi does not work (it shows "device not managed" when clicking wifi icon in toolbar). Even USB to Ethernet adapter does not seem to work. Bluetooth at least "sees" other bluetooth devices around... See below output from dmesg (dmesg |grep iwl) : [ 886.462459] iwlwifi 0000:06:00.0: irq 51 for MSI/MSI-X [ 886.462561] iwlwifi 0000:06:00.0: Direct firmware load failed with error -2 [ 886.462562] iwlwifi 0000:06:00.0: Falling back to user helper [ 886.463284] iwlwifi 0000:06:00.0: loaded firmware version 22.1.7.0 op_mode iwlmvm [ 886.475345] iwlwifi 0000:06:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144 [ 886.475433] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S [ 886.475684] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S [ 886.689214] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs' Below the output from modinfo iwlwifi: filename: /lib/modules/3.13.0-29- generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko license: GPL author: Copyright(c) 2003-2013 Intel Corporation <[email protected]> version: in-tree: description: Intel(R) Wireless WiFi driver for Linux firmware: iwlwifi-100-5.ucode firmware: iwlwifi-1000-5.ucode firmware: iwlwifi-135-6.ucode firmware: iwlwifi-105-6.ucode firmware: iwlwifi-2030-6.ucode firmware: iwlwifi-2000-6.ucode firmware: iwlwifi-5150-2.ucode firmware: iwlwifi-5000-5.ucode firmware: iwlwifi-6000g2b-6.ucode firmware: iwlwifi-6000g2a-5.ucode firmware: iwlwifi-6050-5.ucode firmware: iwlwifi-6000-4.ucode firmware: iwlwifi-3160-7.ucode firmware: iwlwifi-7260-7.ucode srcversion: 1E6912E109D5A43B310FB34 alias: pci:v00008086d0000095Asv*sd00005490bc*sc*i* (a pack of lines of kind "alias: pci:xxxxx...." that I guess are not helpful) alias: pci:v00008086d0000095Bsv*sd00005290bc*sc*i* depends: cfg80211 intree: Y vermagic: 3.13.0-29-generic SMP mod_unload modversions signer: Magrathea: Glacier signing key sig_key: 66:02:CB:36:F1:31:3B:EA:01:C4:BD:A9:65:67:CF:A7:23:C9:70:D8 sig_hashalgo: sha512 parm: swcrypto:using crypto in software (default 0 [hardware]) (int) parm: 11n_disable:disable 11n functionality, bitmap: 1: full, 2: disable agg TX, 4: disable agg RX, 8 enable agg TX (uint) parm: amsdu_size_8K:enable 8K amsdu size (default 0) (int) parm: fw_restart:restart firmware in case of error (default true) (bool) parm: antenna_coupling:specify antenna coupling in dB (defualt: 0 dB) (int) parm: wd_disable:Disable stuck queue watchdog timer 0=system default, 1=disable, 2=enable (default: 0) (int) parm: nvm_file:NVM file name (charp) parm: bt_coex_active:enable wifi/bt co-exist (default: enable) (bool) parm: led_mode:0=system default, 1=On(RF On)/Off(RF Off), 2=blinking, 3=Off (default: 0) (int) parm: power_save:enable WiFi power management (default: disable) (bool) parm: power_level:default power save level (range from 1 - 5, default: 1) (int) I downloaded the latest versions of iwlwifi firmware from git (git clone git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git; copy iwlwifi-3160-9.ucode iwlwifi-7260-9.ucode iwlwifi-7265-9.ucode to /lib/firmware and reboot) but as you can imagine it did not help. Update #1: Downloaded from http://wireless.kernel.org/en/users/Drivers/iwlwifi?action=AttachFile&do=get&target=iwlwifi-7260-ucode-22.15.8.0.tgz and copied the file into /lib/firmware. 
After reloading it with modprobe, it seems to be OK: [ 14.761283] iwlwifi 0000:06:00.0: enabling device (0000 -> 0002) [ 14.761472] iwlwifi 0000:06:00.0: irq 51 for MSI/MSI-X [ 14.772478] iwlwifi 0000:06:00.0: loaded firmware version 22.15.8.0 op_mode iwlmvm [ 14.800274] iwlwifi 0000:06:00.0: Detected Intel(R) Dual Band Wireless AC 7260, REV=0x144 [ 14.800349] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S [ 14.800657] iwlwifi 0000:06:00.0: L1 Enabled; Disabling L0S [ 15.007048] ieee80211 phy0: Selected rate control algorithm 'iwl-mvm-rs' However, clicking in wifi in the toolbar still shows "device not managed". Any clues? Many thanks! Luis

    Read the article

  • A Closer Look at the HiddenInput Attribute in MVC 2

    - by Steve Michelotti
    MVC 2 includes an attribute for model metadata called the HiddenInput attribute. The typical usage of the attribute looks like this (line #3 below): 1: public class PersonViewModel 2: { 3: [HiddenInput(DisplayValue = false)] 4: public int? Id { get; set; } 5: public string FirstName { get; set; } 6: public string LastName { get; set; } 7: } So if you displayed your PersonViewModel with Html.EditorForModel() or Html.EditorFor(m => m.Id), the framework would detect the [HiddenInput] attribute metadata and produce HTML like this: 1: <input id="Id" name="Id" type="hidden" value="21" /> This is pretty straightforward and allows an elegant way to keep the technical key for your model (e.g., a Primary Key from the database) in the HTML so that everything will be wired up correctly when the form is posted to the server, while of course not displaying the value visually to the end user. However, when I was giving a recent presentation, a member of the audience asked me (quite reasonably), “When would you ever set DisplayValue equal to true when using a HiddenInput?” To which I responded, “Well, it’s an edge case. There are sometimes when…er…um…people might want to…um…display this value to the user.” It was quickly apparent to me (and I’m sure everyone else in the room) what a terrible answer this was. I realized I needed to have a much better answer here. First off, let’s look at what is produced if we change our view model to use “true” (which is equivalent to just specifying [HiddenInput] since “true” is the default) on line #3: 1: public class PersonViewModel 2: { 3: [HiddenInput(DisplayValue = true)] 4: public int? Id { get; set; } 5: public string FirstName { get; set; } 6: public string LastName { get; set; } 7: } This will produce the following HTML if rendered from Html.EditorForModel() in your view: 1: <div class="editor-label"> 2: <label for="Id">Id</label> 3: </div> 4: <div class="editor-field"> 5: 21<input id="Id" name="Id" type="hidden" value="21" /> 6: <span class="field-validation-valid" id="Id_validationMessage"></span> 7: </div> The key is line #5. We get the text of "21" (which happened to be my DB Id in this instance) and also a hidden input element (again with "21"). So the question is, why would one want to use this? The best answer I've found is contained in this MVC 2 whitepaper: When a view lets users edit the ID of an object and it is necessary to display the value as well as to provide a hidden input element that contains the old ID so that it can be passed back to the controller. Well, that actually makes sense. Yes, it seems like something that would happen *rarely* but, for those instances, it would enable them easily. It's effectively equivalent to doing this in your view: 1: <%: Html.LabelFor(m => m.Id) %> 2: <%: Model.Id %> 3: <%: Html.HiddenFor(m => m.Id) %> But it's allowing you to specify it in metadata on your view model (and thereby take advantage of templated helpers like Html.EditorForModel() and Html.EditorFor()) rather than having to explicitly specify everything in your view.

    Read the article

  • SQL SERVER – How to Ignore Columnstore Index Usage in Query

    - by pinaldave
    Earlier I wrote about SQL SERVER – Fundamentals of Columnstore Index and the very first question I received in email was the following. “We are using SQL Server 2012 CTP3 and so far so good. In our data warehouse solution we have created 1 non-clustered columnstore index on our large fact table. We have very unique situation but your article did not cover it. We are running few queries on our fact table which is working very efficiently but there is one query which earlier was running very fine but after creating this non-clustered columnstore index this query is running very slow. We dropped the columnstore index and suddenly this one query is running fast but other queries which were benefited by this columnstore index it is running slow. Any workaround in this situation?” In summary, the question in simple words is: “How can we ignore using the columnstore index in selective queries?” Very interesting question. I can understand there may be cases when the columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index. After ignoring the columnstore index, the SQL Server engine will use whichever other index is best. Here is a quick script to prove it. We will first create a sample table and then create a columnstore index on it. Once the columnstore index is created we will write a simple query that uses it. We will then show the usage of the query hint. USE AdventureWorks GO -- Create New Table CREATE TABLE [dbo].[MySalesOrderDetail]( [SalesOrderID] [int] NOT NULL, [SalesOrderDetailID] [int] NOT NULL, [CarrierTrackingNumber] [nvarchar](25) NULL, [OrderQty] [smallint] NOT NULL, [ProductID] [int] NOT NULL, [SpecialOfferID] [int] NOT NULL, [UnitPrice] [money] NOT NULL, [UnitPriceDiscount] [money] NOT NULL, [LineTotal] [numeric](38, 6) NOT NULL, [rowguid] [uniqueidentifier] NOT NULL, [ModifiedDate] [datetime] NOT NULL ) ON [PRIMARY] GO -- Create clustered index CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail] ( [SalesOrderDetailID]) GO -- Create Sample Data -- WARNING: This query may run up to 2-10 minutes based on your system's resources INSERT INTO [dbo].[MySalesOrderDetail] SELECT S1.* FROM Sales.SalesOrderDetail S1 GO 100 -- Create ColumnStore Index CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore] ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID) GO Now that we have created the columnstore index, the following query will certainly use it. -- Select from table (uses the columnstore index) SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID GO We can specify the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX as shown in the following query, and the columnstore index will not be used. -- Select from table ignoring the columnstore index SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice, SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty FROM [dbo].[MySalesOrderDetail] GROUP BY ProductID ORDER BY ProductID OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX) GO Let us clean up the database. -- Cleanup DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] GO TRUNCATE TABLE dbo.MySalesOrderDetail GO DROP TABLE dbo.MySalesOrderDetail GO Again, make sure that you use the hint sparingly and understand its proper implications. 
Make sure that you test with and without the hint and select the best option after a review with your administrator. Here is a question for you – have you started using SQL Server 2012 for validation and development (not in production)? It will be interesting to know the answer. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark (TOTD #184)

    - by arungupta
    TOTD #183 explained how to build a WebSocket-driven application using GlassFish 4. This Tip Of The Day (TOTD) will explain how to view/debug on-the-wire messages, or frames as they are called in WebSocket parlance, over this upgraded connection. This blog will use the application built in TOTD #183.

First of all, make sure you are using a browser that supports WebSocket. If you recall from TOTD #183, WebSocket is a combination of a protocol and a JavaScript API; a browser supporting WebSocket means it understands web pages that use the WebSocket JavaScript API. caniuse.com/websockets provides the current status of WebSocket support in different browsers. Most of the major browsers, such as Chrome, Firefox, and Safari, have supported WebSocket for the past few versions. As of this writing, IE still does not support WebSocket; however, it is planned for a future release.

Viewing WebSocket frames requires special settings because all the communication happens over an upgraded HTTP connection on a single TCP connection. If you are building your application using Java, then there are two common ways to debug WebSocket messages today. Other language libraries provide different mechanisms to log the messages (a small browser-side logging sketch also follows at the end of this post). Let's get started!

Chrome Developer Tools provide information about the initial handshake only. This can be viewed in the Network tab by selecting the request for the WebSocket endpoint. You can also click on "WebSockets" on the bottom-right to show only the WebSocket endpoints. Click on "Frames" in the right panel to view the actual frames being exchanged between the client and server. The frames are not refreshed when new messages are sent or received; you need to refresh the panel by clicking on the endpoint again.

To see more detailed information about the WebSocket frames, type "chrome://net-internals" in a new tab. Click on "Sockets" in the left navigation bar and then on "View live sockets" to see the page. Select the box with the address of your WebSocket endpoint to see some basic information about the connection and the bytes exchanged between the client and the endpoint. Clicking on the blue text "source dependency ..." shows more details about the handshake.

If you are interested in viewing the exact payload of WebSocket messages, then you need a network sniffer. These tools are used to snoop network traffic and provide a lot more detail about the raw messages exchanged over the network. However, because they provide so much more information, they need to be configured in order to show only the relevant information. Wireshark (formerly Ethereal) is a pretty standard tool for sniffing network traffic and will be used here. For this blog's purpose, we'll assume that the WebSocket endpoint is hosted on the local machine; these tools do allow sniffing traffic across the network, though.

Wireshark is quite a comprehensive tool, and we'll capture traffic on the loopback address. Start Wireshark, select "loopback", and click on "Start". By default, all traffic on the loopback address is displayed. That includes tons of TCP protocol messages, applications running on your local machine (like GlassFish or Dropbox on mine), and many others. Specify "http" as the filter in the top-left. Invoke the application built in TOTD #183 and click on the "Say Hello" button once.
The output in Wireshark looks like the following. Here is a description of the messages exchanged:

Message #4: Initial HTTP request for the JSP page
Message #6: Response returning the JSP page
Message #16: HTTP Upgrade request
Message #18: Upgrade request accepted
Message #20: Request for the favicon
Message #22: Response that the favicon was not found
Message #24: Browser making a WebSocket request to the endpoint
Message #26: WebSocket endpoint responding back

You can also use Fiddler to debug your WebSocket messages. How are you viewing your WebSocket messages?

Here are some references for you:

JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
TOTD #183 - Getting Started with WebSocket in GlassFish

Subsequent blogs will discuss the following topics (not necessarily in that order):

Binary data as payload
Custom payloads using encoder/decoder
Error handling
Interface-driven WebSocket endpoint
Java client API
Client and Server configuration
Security
Subprotocols
Extensions
Other topics from the API
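The browser-side logging sketch mentioned above follows. It is a minimal, hedged example: the endpoint URL is an assumption, and it simply logs every frame the page sends or receives to the browser console, complementing the Chrome, net-internals, and Wireshark views.

// Minimal browser-side sketch: log every WebSocket frame to the console.
// The endpoint URL is an assumption; replace it with your own endpoint.
var websocket = new WebSocket("ws://localhost:8080/HelloWebSocket/websocket");

websocket.onopen = function () {
    console.log("connection opened");
};

websocket.onmessage = function (event) {
    // event.data carries the payload of the incoming frame (text here).
    console.log("frame received: " + event.data);
};

websocket.onclose = function (event) {
    console.log("connection closed, code " + event.code);
};

websocket.onerror = function (event) {
    console.log("error on connection", event);
};

// Wrap send() so outgoing frames are logged as well.
var originalSend = websocket.send.bind(websocket);
websocket.send = function (data) {
    console.log("frame sent: " + data);
    originalSend(data);
};

// Usage: once the connection is open, websocket.send("Duke") logs the
// outgoing frame, and any response frame is logged by onmessage.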

    Read the article

  • Converting an Oracle VM VirtualBox VM into an Oracle VM Server image

    - by wim.coekaerts
    As we are working on tighter, seamless movement of VMs between the two products, here are a few simple steps to convert an existing Oracle VM VirtualBox image over. Steps involved to make it easy/straightforward:

(1) When creating a VM in VirtualBox, using Oracle Linux as an example, make sure that /etc/fstab only uses labels. Do not use hardcoded device names. Instead of an entry

/dev/sda1 /u01 ext3 defaults 1 1

use

LABEL=foo /u01 ext3 defaults 1 1

(for more info on labels: man e2label), or use a logical volume

/dev/VolGroup00/LVfoo /u01 ext3 defaults 1 1

Doing so will make it easier to have the OS boot up on a different hypervisor with potentially different device names. For instance, the VirtualBox VM might expose a SCSI driver while in Oracle VM Server you might end up with an IDE disk; this then changes /dev/sda to /dev/hda.

(2) If you have a VM created that you want to convert, shut down the VM in VirtualBox and convert the image files: go to the directory that contains your hard disk image files (.VirtualBox/HardDisks/* as an example) and for each of the virtual disks run the following command (a small batch-conversion sketch also follows at the end of this post):

VBoxManage clonehd virtualdiskfilename.vdi system.img --format raw

where virtualdiskfilename.vdi is the original VBox VM file (this can also be a vmdk file) and system.img is the name of the virtual disk for Oracle VM. This can be any filename as well; I typically use system.img to specify the boot disk (as is common for Oracle VM template creation).

(3) Create a vm.cfg. To run a VM converted from VirtualBox, you have to create a vm.cfg for Oracle VM Server that creates an HVM guest. The easiest is to use a simple HVM vm.cfg and change it for your VM. I have an example here:

acpi = 1
apic = 1
builder = 'hvm'
device_model = '/usr/lib/xen/bin/qemu-dm'
disk = ['file:system.img,hda,w', 'file:oracle.img,hdb,w',',hdc:cdrom,r',]
kernel = '/usr/lib/xen/boot/hvmloader'
memory = '1024'
name = 'vmname'
on_crash = 'restart'
on_reboot = 'restart'
pae = 1
serial = 'pty'
timer_mode = '0'
usbdevice = 'tablet'
vcpus = 1
vif = ['bridge=xenbr0,type=ioemu']
vif_other_config = []
vnc = 1
vncconsole = 1
vnclisten = '0.0.0.0'
vncpasswd = ''
vncunused = 1

If you take the above vm.cfg, all you need to do is:
- modify disk = (add your virtual disks in there)
- modify memory = (amount of memory your VM needs)
- modify name = (enter a name for your VM here)
- modify vif = (you might want to replace bridge=xenbr0 with the bridge you want to use)

If you want more than 1 vcpu or other changes, of course you have to make those as well.

(4) Copy this set of files onto your Oracle VM Server, or onto a webserver in a subdirectory, and import the template through Oracle VM Manager. You can also just start the VM using xm create vm.cfg if you like.

And that's it. As I said, we are working on automation around all this, but it is relatively trivial to convert VMs over as long as you take the basic issues into account, primarily the setup of the filesystems and the use of labels in /etc/fstab. There are other potential things to look at, such as network config. If you want to make that part clean, then prior to shutting down the VM change /etc/modprobe.conf and/or add the MAC address of the VM into the vm.cfg in the vif line. The good thing, at least with Linux, is that even though the virtual hardware changes, Linux will deal with it just fine (e1000 vs 8139 realtek, ide vs scsi, etc.). Hope this helps.
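The batch-conversion sketch mentioned in step (2) follows. It is a minimal, hedged Node.js example: the directory path, the output naming, and the choice of Node.js as the scripting language are assumptions; it simply runs the same VBoxManage clonehd command shown above for every VirtualBox disk image it finds.

// Minimal sketch: convert every VirtualBox disk image in a directory
// to a raw .img file for Oracle VM Server via "VBoxManage clonehd ... --format raw".
// The directory path and output naming are assumptions.
var fs = require("fs");
var path = require("path");
var execFile = require("child_process").execFile;

var diskDir = path.join(process.env.HOME || ".", ".VirtualBox", "HardDisks");

fs.readdirSync(diskDir).forEach(function (file) {
    var ext = path.extname(file).toLowerCase();
    if (ext !== ".vdi" && ext !== ".vmdk") {
        return; // skip anything that is not a VirtualBox disk image
    }
    var source = path.join(diskDir, file);
    var target = path.join(diskDir, path.basename(file, ext) + ".img");
    execFile("VBoxManage", ["clonehd", source, target, "--format", "raw"],
        function (err) {
            if (err) {
                console.log("conversion failed for", source, err);
            }
            else {
                console.log("converted", source, "->", target);
            }
        });
});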

    Read the article

  • How To Delete Built-in Windows 7 Power Plans (and Why You Probably Shouldn’t)

    - by The Geek
    Do you actually use the Windows 7 power management features? If so, have you ever wanted to just delete one of the built-in power plans? Here's how you can do so, and why you probably should leave it alone.

Just in case you're new to the party, we're talking about the power plans that you see when you click on the battery/plug icon in the system tray. The problem is that one of the built-in plans always shows up there, even if you only use custom plans. When you go to "More power options" on the menu there, you'll be taken to a list of them, but you'll be unable to get rid of any of the built-in ones, even if you have your own. You can actually delete the power plans, but it will probably cause problems, so we highly recommend against it. If you still want to proceed, keep reading.

Delete Built-in Power Plans in Windows 7

Open up an Administrator mode command prompt by right-clicking on the command prompt and choosing "Run as Administrator", then type in the following command, which will show you the whole list of plans:

powercfg list

Do you see that really long GUID code in the middle of each listing? That's what we're going to need for the next step. To make it easier, we'll provide the codes here, just in case you don't know how to copy to the clipboard from the command prompt.

Power Scheme GUID: 381b4222-f694-41f0-9685-ff5bb260df2e  (Balanced)
Power Scheme GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c  (High performance)
Power Scheme GUID: a1841308-3541-4fab-bc81-f71556f20b4a  (Power saver)

Before you do any deleting, what you're going to want to do is export the plan to a file using the -export parameter. For some unknown reason, I used the .xml extension when I did this, though the file isn't in XML format. Moving on… here's the syntax of the command:

powercfg -export balanced.xml 381b4222-f694-41f0-9685-ff5bb260df2e

This will export the Balanced plan to the file balanced.xml. And now, we can delete the plan by using the -delete parameter and the same GUID:

powercfg -delete 381b4222-f694-41f0-9685-ff5bb260df2e

If you want to import the plan again, you can use the -import parameter, though it has one quirk: you have to specify the full path to the file, like this:

powercfg -import c:\balanced.xml

Using what you've learned, you can export each of the plans to a file, and then delete the ones you want to delete (a small backup sketch follows at the end of this post).

Why Shouldn't You Do This?

Very simple. Stuff will break. On my test machine, for example, I removed all of the built-in plans and then imported them all back in, but I'm still getting this error anytime I try to access the panel to choose what the power buttons do:

There are a lot more error messages, but I'm not going to waste your time with all of them. So if you want to delete the plans, do so at your own peril. At least you've been warned!
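If you do decide to experiment, the backup sketch mentioned above follows. It is a minimal, hedged Node.js example (the backup directory, the GUID regular expression, and the use of Node.js at all are assumptions); it parses the GUIDs out of powercfg list and exports every plan before you delete anything.

// Minimal sketch: back up every Windows power plan before deleting any.
// Parses GUIDs from "powercfg list" output and runs "powercfg -export" for each.
// The backup directory and the GUID regular expression are assumptions;
// the directory is assumed to already exist.
var exec = require("child_process").exec;
var execFile = require("child_process").execFile;
var path = require("path");

var backupDir = "C:\\powerplan-backup";

exec("powercfg list", function (err, stdout) {
    if (err) {
        console.log("could not list power plans:", err);
        return;
    }
    var guids = stdout.match(
        /[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi) || [];
    guids.forEach(function (guid) {
        var file = path.join(backupDir, guid + ".pow");
        execFile("powercfg", ["-export", file, guid], function (err) {
            if (err) {
                console.log("export failed for", guid, err);
            }
            else {
                console.log("exported", guid, "to", file);
            }
        });
    });
});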

    Read the article

< Previous Page | 143 144 145 146 147 148 149 150 151 152 153 154  | Next Page >