Search Results

Search found 69987 results on 2800 pages for 'wcf data services'.


  • WCF Ria Services For Real

    In my previous post I discussed creating the database and tables for the Silverlight HVP configuration data. All that was great, and worked just dandy until it was time to get the data from the database server to the application running on the client. But, I thought, how hard can it be? I've done a few mini-tutorials; it should be straightforward. And it was, sort of. Getting Going: The steps are pretty straightforward: create an entity data model that corresponds to the tables ...

    Read the article

  • WCF Error when using “Match Data” function in MDS Excel AddIn

    - by Davide Mauri
    If you're using MDS and DQS with the Excel integration, you may get an error when trying to use the "Match Data" feature, which uses DQS to help identify duplicate data in your data set. The error is quite obscure: you have to enable WCF error reporting to get the error details, and you'll discover that they relate to missing permissions in the MDS and DQS_STAGING_DATA databases. To fix the problem you just have to grant the needed permissions, as the following script does:

        USE MDS
        GO
        GRANT SELECT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
        GRANT INSERT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
        GRANT DELETE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
        GRANT UPDATE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]

        USE [DQS_STAGING_DATA]
        GO
        ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [VMSRV02\mdsweb]
        ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [VMSRV02\mdsweb]
        ALTER AUTHORIZATION ON SCHEMA::[db_ddladmin] TO [VMSRV02\mdsweb]
        GO

    where "VMSRV02\mdsweb" is the user you configured for MDS service execution. If you don't remember it, you can check which account has been assigned to the IIS application pool that your MDS website is using.

    Read the article

  • The Case of the Extra Page: Rendering Reporting Services as PDF

    - by smisner
    I had to troubleshoot a problem with a mysterious extra page appearing in a PDF this week. My first thought was that it was likely caused by one of the most common problems people encounter when developing reports that eventually get rendered as PDF: blank pages inserted into the document. The cause of the blank pages is usually related to sizing. You can learn more at Understanding Pagination in Reporting Services in Books Online.

    When designing a report, you have to be really careful with the layout of items in the body. As you move items around, the body will expand to accommodate the space you're using, and although you might eventually tighten everything back up again, the body doesn't automatically collapse. One of my favorite things to do in Reporting Services 2005 - which I dubbed the "vacu-pack" method - was to just erase the size property of the Body and let it auto-calculate the new size, squeezing out all the extra space. Alas, that method no longer works beginning with Reporting Services 2008. Even when you make sure the body size is as small as possible (with no unnecessary extra space along the top, bottom, left, or right side of the body), it's important to calculate body size plus header plus footer plus margins and ensure that the calculated height and width do not exceed the report's height and width (shown as the page in the illustration above). This won't matter if users always render reports online, but they'll get extra pages in a PDF document if the report's height and width are smaller than the calculated space.

    Beginning the Investigation

    In the situation I was troubleshooting, I checked the properties:

        Item          Property            Value
        Body          Height              6.25in
                      Width               10.5in
        Page Header   Height              1in
        Page Footer   Height              0.25in
        Report        Left Margin         0.1in
                      Right Margin        0.1in
                      Top Margin          0.05in
                      Bottom Margin       0.05in
                      Page Size - Height  8.5in
                      Page Size - Width   11in

    So I calculated the total width using Body Width + Left Margin + Right Margin and came up with a value of 10.7 inches. And then I calculated the total height using Body Height + Page Header Height + Page Footer Height + Top Margin + Bottom Margin and got 7.6 inches. Page sizing couldn't be the reason for the extra page in my report, because 10.7 inches is smaller than the report's width of 11 inches, and 7.6 inches is smaller than the report's height of 8.5 inches. I had to look elsewhere to find the culprit.

    Conducting the Third Degree

    My next thought was to focus on the rendering size of the items in the report. I've adapted my problem to use the Adventure Works database. At the top of the report are two charts, and below each chart is a rectangle that contains a table. In the real-life scenario, there were some graphics present as a background for the tables, which fit within the rectangles that were about 3 inches high, so the visual space of the rectangles matched the visual space of the charts - also about 3 inches high. But there was also a huge amount of white space at the bottom of the page and, as I mentioned at the beginning of this post, a second page which was blank except for the footer that appeared at the bottom. Placing a textbox beneath the rectangles to see if it would appear on the first page resulted in the textbox appearing on the second page instead. For some reason, the rectangles wanted a buffer zone beneath them. What's going on?

    Taking the Suspect into Custody

    My next step was to see what was really going on with the rectangle. The graphic appeared to be correctly sized, but the behavior in the report indicated the rectangle was growing, so I added a border to the rectangle to see what it was doing. With the borders visible, I could see that the size of each rectangle was growing to accommodate the table it contains. The rectangle on the right is slightly larger than the one on the left because the table on the right contains an extra row. The rectangle is trying to preserve the whitespace that appears in the layout.

    Closing the Case

    Now that I knew what the problem was, what could I do about it? Because of the graphic in the rectangle (not shown), I couldn't eliminate the rectangles and just show the tables. But fortunately, there is a report property that comes to the rescue: ConsumeContainerWhitespace (accessible only in the Properties window). I set the value of this property to True. Problem solved. Now the rectangles remain fixed at the configured size and don't grow vertically to preserve the whitespace. Case closed.

    Read the article

  • Ajax application: using SOAP vs REST ?

    - by coder
    I'm building an ajax-heavy application (client-side strictly html/css/js) which will be getting all its data and using server business logic via web services. I know REST seems to be the hot topic, but I can't find any good arguments; the main argument seems to be that it is "light-weight". My impression so far is that wsdl/soap based services are more expressive and allow for a more complex transfer of data. It appears that soap would be more useful in the application I'm building, where the only code consuming the services will be the js downloaded in the client browser. REST, on the other hand, seems to have a smaller entry barrier and so can be more useful for services like twitter in allowing other developers to consume those services easily. Also, REST seems to be better suited for simple data transfers. So in summary: SOAP is useful for complex data transfer and REST is useful for simple data transfer. I'm currently under the impression that using SOAP would be best due to the complexity of the messages, but perhaps there are other factors. What are your thoughts on the pros/cons of soap/rest for a heavy ajax web app? EDIT: While the wsdl is in xml, the data I'm transferring back and forth is actually in JSON. It just appears more natural to use wsdl/soap here due to the nature of the app. The verbs GET and POST may not be enough. I may want to say something like processQueue or executeTimer (see the sketch below). This is why my conclusion has been that wsdl/soap would be good for bridging a complex layer between two applications (client and server), whereas REST would be better (due to its simplicity) for allowing many developer-users to consume resources programmatically. So you could say the choice falls along two lines:

    1. Will the app be verb-oriented (completing tasks: use soap) or noun-oriented (consuming resources: use REST)?
    2. Will the api be consumed by few developers or many developers (REST is strong for many developers)?

    Since such an ajax-heavy app would potentially use many verbs and would only be used by the client developer, it appears soap/wsdl would be the best fit.
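
    A minimal sketch of the contrast drawn above, in WCF terms (all names are hypothetical, not from the original question): a verb-oriented contract names tasks, while a noun-oriented contract maps resources onto HTTP verbs.

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [DataContract]
        public class Client
        {
            [DataMember] public string Id;
            [DataMember] public string Name;
        }

        // Verb-oriented: each operation names a task (fits SOAP/WSDL).
        [ServiceContract]
        public interface IQueueService
        {
            [OperationContract]
            void ProcessQueue();

            [OperationContract]
            void ExecuteTimer(string timerId);
        }

        // Noun-oriented: operations address resources via HTTP verbs (fits REST).
        [ServiceContract]
        public interface IClientResource
        {
            [OperationContract]
            [WebGet(UriTemplate = "clients/{id}", ResponseFormat = WebMessageFormat.Json)]
            Client GetClient(string id);

            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "clients")]
            void AddClient(Client client);
        }

    Either shape can carry JSON; the difference is whether the service surface reads as a task list or as a set of resources.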

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we saw in the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The same Node.js code for consuming WAS can be used when hosting on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and set all files to "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as the "azure" module, which can be installed through NPM. Once it is downloaded and installed, we need to include it in our worker role project and set it to "Copy always" as well. You can use my "Copy all always" tool mentioned in my last post to update the worker role project file; you can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure that we have an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table, one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account name and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable storing the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As the code below shows, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records, we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and then creating the table through the table client I had just created.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in a callback pattern; in fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in a POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we placed the table creation code after the deletion code, the two would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I sent the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                            for (var i = 0; i < results.rows.length; i++) {
                                                var entity = {
                                                    "PartitionKey": results.rows[i][1],
                                                    "RowKey": results.rows[i][0],
                                                    "Value": results.rows[i][2]
                                                };
                                                client.insertEntity(tableName, entity, function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("entity inserted");
                                                    }
                                                });
                                            }
                                            // send the response
                                            console.log("all done");
                                            res.send(200, "All done!");
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Now we can publish it to the cloud and have a try. But normally we'd better test it in the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to point at my local database. The code changes as shown below.

        // windows azure sql database
        //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;";
        // sql server
        var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        // windows azure storage
        //var client = azure.createTableService(storageAccountName, storageAccountKey);
        // local storage emulator
        var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

    Now let's run the application and navigate to "localhost:12345/was/init" (I hosted it on port 12345). We can see it transformed the data from my local database to the local table service. Everything looks fine. But there is a bug in my code. If we look at the Node.js command window, we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was also executed in parallel, even though I wrote it at the end of my logic.
    The correct logic should be: when all entities have been copied to the table service with no error, send the response to the browser; otherwise, send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation to perform on each item in the array. And the third argument will be invoked when all items have been processed or any error has occurred. Here we can send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally, and now we can see the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria; for example, the code here. In the code below I queried an entity by partition key and row key, and return the proper localization value in the response.

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud, we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS, you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. That means we can have these values defined in the CSCFG and CSDEF files, and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local use. And we can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve the settings and initialize the table client.

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we have the application listen on the port retrieved from the SDK as well.

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we test the application right now, we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is defined to be the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js is launched inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem, we need to open the CSDEF file under the azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. That way, the Node.js application will have the connection to the service runtime and will be able to read the configurations. Starting Node.js in the local emulator, we can see it retrieves the connection and storage account settings for local use.
    And if we publish our application to azure, it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an EntryPoint element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's very simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And since Node.js is very good at scaling out, it's all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, and "azure" doesn't support using a storage connection string to create the storage client yet. But Microsoft is working on making them easier to use and on adding more features and functionality.

    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Unable to Sign in to the Microsoft Online Services Signin application from Windows 7 client located behind ISA firewall

    - by Ravindra Pamidi
    A while ago I helped a customer troubleshoot an authentication problem with the Microsoft Online Services Sign In application. This customer was evaluating Microsoft BPOS (Business Productivity Online Services) and was having trouble using the single sign-on application behind an ISA 2004 firewall. The network structure is fairly simple, with a single Windows 2003 Active Directory domain and Windows 7 clients. On a successful logon, the Microsoft Online Services Sign In application provides single sign-on functionality to all of the Microsoft online services in the BPOS package.

    Symptoms: When trying to sign in, it fails with the error "The service is currently unavailable. Please try again later. If problems continue, contact your service administrator". If the ISA 2004 firewall is removed from the picture, the authentication succeeds.

    Troubleshooting: We enabled ISA Server firewall logging along with the Microsoft Network Monitor tool on the Windows 7 client while reproducing the issue. Analysis of the ISA Server firewall logs and the network capture revealed that when the Microsoft Online Services Sign In application sends its request to the ISA Server, it does not send the domain credentials, and as a result ISA Server responds with an HTTP 407 Proxy authentication required error listing the supported authentication mechanisms. The application is expected to send the logged-on user's domain credentials in response to this request; however, in this case, it fails to do so. A bit of research on the Internet revealed that the Microsoft Online Services Sign In application by default does not support outbound Internet proxy authentication. In order for it to send the logged-on user's domain credentials, we had to make changes to its configuration file "SignIn.exe.config", located under the "Program Files\Microsoft Online Services\Sign In" folder. Step-by-step details for configuring the file are documented on the Microsoft TechNet website:

    Configure your outbound authenticating proxy server
    http://www.microsoft.com/online/help/en-us/helphowto/cc54100d-d149-45a9-8e96-f248ecb1b596.htm

    After the above problem was addressed, we were still not able to use the Microsoft Online Services Sign In application, and it failed with the same error. Analysis of another network capture revealed that the application was now sending the required credentials, but the connection seemed to terminate at a later stage. We enabled verbose logging for the application and reproduced the problem. Analysis of the logs revealed a time difference between the local client and the Microsoft Online Services server of around seven minutes, which is above the acceptable time skew of five minutes. Excerpt from the Microsoft Online Services Sign In application verbose log:

        1/26/2012 1:57:51 PM Verbose SingleSignOn.GetSSOGenericInterface SSO Interface URL: https://signinservice.apac.microsoftonline.com/ssoservice/UID
        1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn The security timestamp is invalid because its creation time ('2012-01-26T08:34:52.767Z') is in the future. Current time is '2012-01-26T08:27:52.987Z' and allowed clock skew is '00:05:00'.
        1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn

    Although the Windows 7 clients successfully synchronized time to the domain controller for the domain, the domain controller was not configured to synchronize time with external NTP servers. This caused a gradual drift in time on the network, resulting in the above issue. We reconfigured the domain controller holding the PDC FSMO role to synchronize time with an external time source (time.nist.gov) and edited the system policy on the ISA Server firewall to allow NTP traffic to time.nist.gov.

    Configure the time source for the forest: Windows Time Service
    http://technet.microsoft.com/en-us/library/cc794937(WS.10).aspx

    We then forced synchronization of Windows time using the command w32tm /resync on the domain controller, and later on the clients, each of which corrected the seven-minute difference. This resolved the problem with logon to Microsoft Online Services Sign In.

    Read the article

  • thick client migration to web based application

    - by user1151597
    This query is related to application design and the technology I should consider during a migration. The scenario: I have a C#.NET WinForms application which communicates with a device. One of the main features of this application is monitoring cyclic data (at a 200ms rate) sent from the device to the application. The request to start the cyclic data is sent only once, at the beginning; the application then keeps receiving data from the device until it sends a stop request. Now this same application needs to be deployed over the web, on an intranet. The application is composed of a business logic layer and a communication layer which talks to the device through UDP ports. I am trying to find a solution which will allow me to have a single instance of the application on the server, so that the device thinks it is connected as usual, while I manage the clients from the business logic layer. I want to reuse the code of the business layer and the communication layer as much as possible. Please let me know whether web services, WCF, etc. are what I should consider for this migration (one possible shape is sketched below). Thanks in advance.
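
    One hedged sketch of the idea described above (all type names are hypothetical): host the existing layers once, as a singleton WCF service, so the device still sees a single connected application while multiple intranet clients are served from the business logic layer.

        using System.ServiceModel;

        [ServiceContract]
        public interface IDeviceMonitor
        {
            // Clients poll for the latest cyclic data; a duplex contract
            // with a callback could push it to them instead.
            [OperationContract]
            byte[] GetLatestData();
        }

        // One service instance for the whole server: one communication
        // layer, one device session, shared by all connected clients.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class DeviceMonitorService : IDeviceMonitor
        {
            private readonly object gate = new object();
            private byte[] latest = new byte[0];

            // The existing UDP communication layer would call this roughly
            // every 200ms as frames arrive from the device (assumed hook).
            public void OnFrameReceived(byte[] frame)
            {
                lock (gate) { latest = frame; }
            }

            public byte[] GetLatestData()
            {
                lock (gate) { return latest; }
            }
        }

    The business and communication layers stay as they are; only the WinForms shell is replaced by the service boundary.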

    Read the article

  • Some hint to program a webservice "by subscription"

    - by Eagle
    I have built some web sites; I know how to do it with Python and PHP, basically. Normally they are simple web sites, but now I want to provide REST web services, though only for allowed users (allowed by me). I saw that a lot of services use the "KEY" and "SECRET_KEY" concepts, which seems to be what I need (if I understand it right). My suppositions are:

    1. If I only expose a GET service to retrieve, e.g., all my clients, with nothing more, anyone can retrieve my clients without limitation.
    2. I will need some KEY generator to provide keys for my allowed users, so they can use my web services.
    3. A KEY alone is not enough: someone could steal the KEY and impersonate my user (and this is the reason a SECRET_KEY exists, right?).

    If all this is right, how can I make/use a system like that in my web services? Any open source example? Or maybe there are other, easier solutions I'm not considering? My objective is to allow some users to use my web services.
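
    A minimal sketch of the KEY/SECRET_KEY idea (the names and the canonical-string layout are illustrative assumptions, not any particular provider's scheme): the client sends its public KEY plus an HMAC signature computed with the SECRET_KEY, so a stolen KEY alone is useless without the secret, which is never transmitted.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class RequestSigner
        {
            // Sign a canonical description of the request with the secret key.
            public static string Sign(string secretKey, string method, string path, string timestamp)
            {
                string canonical = method + "\n" + path + "\n" + timestamp;
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(canonical));
                    return Convert.ToBase64String(hash);
                }
            }
        }

        // Client sends: GET /clients?key=PUBLIC_KEY&ts=...&sig=Sign(secret, "GET", "/clients", ts)
        // Server looks up the secret for PUBLIC_KEY, recomputes the signature over the
        // same canonical string, and rejects the request if the two do not match
        // (or if the timestamp is too old, which blocks simple replays).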

    Read the article

  • Silverlight 4 + RIA Services - Ready for Business: Exposing Data from Entity Framework

    To continue our series, I wanted to look next at how to expose your data from the server side of your application. The interesting data in your business applications comes from a wide variety of data sources: a SQL database, an Oracle DB, SQL Azure, SharePoint, a mainframe; and you have likely already chosen a data model such as NHibernate, Linq2Sql, Entity Framework, stored procedures, or a service. The goal of RIA Services in this release is to make it easy to...

    Read the article

  • WCF listenBacklog and maxConnections can't be set higher than 10. Why not?

    - by NeedWCFPro
    My service works great under low load, but under high load I start to get connection errors. I know about the other settings, but I am trying to change the listenBacklog parameter in particular for my TCP buffered binding. If I set listenBacklog="10" I am able to telnet into the port where my WCF service is running. If I change listenBacklog to anything higher than 10, it will not let me telnet into my service while it is running. No errors seem to be thrown. What can I do? I get the same problem when I change maxConnections away from 10. All other properties of the binding I can set higher without a problem. Here is what my binding looks like:

        <bindings>
          <netTcpBinding>
            <binding name="NetTcpBinding_IMyService"
                     closeTimeout="00:01:00" openTimeout="00:01:00"
                     receiveTimeout="00:10:00" sendTimeout="00:01:00"
                     transactionFlow="false" transferMode="Buffered"
                     transactionProtocol="OleTransactions"
                     hostNameComparisonMode="StrongWildcard"
                     listenBacklog="10" maxBufferPoolSize="524288"
                     maxBufferSize="1048576" maxConnections="10"
                     maxReceivedMessageSize="1048576">
              <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                            maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                            maxNameTableCharCount="2147483647" />
              <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
              <security mode="Transport">
                <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                <message clientCredentialType="Windows" />
              </security>
            </binding>
            ...

    I really need to increase the values of maxConnections and listenBacklog.
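
    For reference, the same two knobs can be set from code when self-hosting; a hedged sketch (whether values above 10 actually take effect is exactly the open question here):

        var binding = new NetTcpBinding(SecurityMode.Transport)
        {
            ListenBacklog = 200,   // pending-connection queue handed to the TCP listener
            MaxConnections = 200,  // pooled/concurrent connection limit for this binding
            TransferMode = TransferMode.Buffered
        };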

    Read the article

  • WCF, Metadata and BIGIP - Can I force the correct url for the WSDL items?

    - by Yossi Dahan
    We have a WCF service hosted on ServerA, which is a server with no direct Internet access and a non-Internet-routable IP address. The service is fronted by BIGIP, which handles SSL encryption and decryption and forwards the unencrypted request to ServerA on a specific port (at the moment it does NOT actually do any load balancing, but that is likely to be added in the future). What that means is that our clients call the service through https://www.OurDomain.com/ServiceUrl and get to our service on http://ServerA:85/ServiceUrl through the BIGIP device. When we browse to the WSDL published on https://www.OurDomain.com/ServiceUrl, all the addresses contained in the WSDL are based on the http://ServerA:85/ServiceUrl base address. We figured out that we could use the host headers setting to set the domain, but our problem is that while this would sort out the domain, we would still be using the wrong scheme: it would use http://www.OurDomain.com/ServiceUrl while we need it to be https. Also, as we have other (asmx-based) services hosted on that server, we had some issues setting the host headers, so we thought we could get away with creating another site on the server (using, say, port 82) and setting the host header on that; but now, on top of the http/https problem, we have an issue because the WSDL contains the port number in all the urls, whereas BigIP works on port 443 (for the SSL). Is there a more flexible solution than implementing host headers? Ideally we need to retain flexibility and ease of supportability. Thanks for any help…

    Read the article

  • Why is WCF Stream response getting corrupted on write to disk?

    - by Alvin S
    I want to write a WCF web service that can send files over the wire to the client, so I have one set up that sends a Stream response. Here is my code on the client:

        private void button1_Click(object sender, EventArgs e)
        {
            string filename = System.Environment.CurrentDirectory + "\\Picture.jpg";
            if (File.Exists(filename))
                File.Delete(filename);

            StreamServiceClient client = new StreamServiceClient();

            int length = 256;
            byte[] buffer = new byte[length];

            FileStream sink = new FileStream(filename, FileMode.CreateNew, FileAccess.Write);
            Stream source = client.GetData();

            int bytesRead;
            while ((bytesRead = source.Read(buffer, 0, length)) > 0)
            {
                sink.Write(buffer, 0, length);
            }
            source.Close();
            sink.Close();

            MessageBox.Show("All done");
        }

    Everything processes fine with no errors or exceptions. The problem is that the .jpg file that gets transferred is reported as "corrupted or too large" when I open it. What am I doing wrong? On the server side, here is the method that sends the file:

        public Stream GetData()
        {
            string filename = Environment.CurrentDirectory + "\\Chrysanthemum.jpg";
            FileStream myfile = File.OpenRead(filename);
            return myfile;
        }

    I have the server configured with basicHttpBinding and TransferMode.StreamedResponse.
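
    For comparison, a typical stream-copy loop writes only the bytes actually read on each pass; note that bytesRead, not length, is passed to Write, since the final read of a file almost never fills the whole buffer (a likely source of the corruption above):

        int bytesRead;
        while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            sink.Write(buffer, 0, bytesRead); // write exactly what was read
        }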

    Read the article

  • nHibernate strategies in a web farm

    - by Pete Nelson
    Our current project at work is a new MVC web site that will use a WCF service primarily to access a 3rd-party billing system via a web service, as well as a small SQL database for user personalization. The WCF service uses nHibernate for the SQL database. We'd like to implement some sort of web farm for load balancing as well as failover and maintenance. I'm trying to decide the best way to handle nHibernate's caching and database concurrency if there are multiple WCF services running. Some scenarios I've been thinking about:

    1) Multiple IIS servers, one WCF server. With this setup, the WCF server would be a single point of failure, but there would be no issues with nHibernate caching or database concurrency.

    2) Multiple IIS servers, each with its own WCF service. This removes the single point of failure, but now nHibernate on one machine would not know about database changes made by another machine.

    One solution to number 2 would be to use an IStatelessSession so we're not doing any caching and nHibernate is always fetching directly from the database (see the sketch below). This might be the most feasible, as our personalization database has very few objects in it. I'm also considering a 2nd-level cache such as memcached or Velocity, but it may be overkill for this system. I'm putting this out there to see if anyone has experience with this sort of architecture and to get some ideas for a solution. Thanks!
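
    A minimal sketch of the IStatelessSession option (entity and method names are hypothetical): stateless sessions bypass nHibernate's first-level cache and identity map entirely, so each node in the farm reads current rows straight from the database.

        using NHibernate;

        public class UserPersonalization
        {
            public virtual int Id { get; set; }
            public virtual string Theme { get; set; }
        }

        public class PersonalizationRepository
        {
            private readonly ISessionFactory factory;

            public PersonalizationRepository(ISessionFactory factory)
            {
                this.factory = factory;
            }

            public UserPersonalization Load(int userId)
            {
                // No caching, no change tracking: every call hits the database,
                // so stale reads across farm nodes are no longer possible.
                using (IStatelessSession session = factory.OpenStatelessSession())
                {
                    return session.Get<UserPersonalization>(userId);
                }
            }
        }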

    Read the article

  • JAVA image transfer problem

    - by user579098
    Hi, I have a school assignment: send a jpg image, split it into groups of 100 bytes, corrupt it, use a CRC check to locate the errors, and re-transmit until it is eventually rebuilt into its original form. It's practically ready; however, when I check the new images, they appear with errors. I would really appreciate it if someone could look at my code below and maybe locate this logical mistake, as I can't understand what the problem is because everything looks OK :S For the file with all the data needed, including photos and error patterns, one can download it from this link: http://rapidshare.com/#!download|932tl2|443122762|Data.zip|739 Thanks in advance, Stefan. P.S. Don't forget to change the paths in the code for the image and error files.

        package networks;

        import java.io.*;              // for file reader
        import java.util.zip.CRC32;    // CRC32 IEEE (Ethernet)

        public class Main {

            /**
             * Reads a whole file into an array of bytes.
             * @param file The file in question.
             * @return Array of bytes containing file data.
             * @throws IOException Message contains why it failed.
             */
            public static byte[] readFileArray(File file) throws IOException {
                InputStream is = new FileInputStream(file);
                byte[] data = new byte[(int) file.length()];
                is.read(data);
                is.close();
                return data;
            }

            /**
             * Writes (or overwrites if exists) a file with data from an array of bytes.
             * @param file The file in question.
             * @param data Array of bytes containing the new file data.
             * @throws IOException Message contains why it failed.
             */
            public static void writeFileArray(File file, byte[] data) throws IOException {
                OutputStream os = new FileOutputStream(file, false);
                os.write(data);
                os.close();
            }

            /**
             * Converts a long value to an array of bytes.
             * @param data The target variable.
             * @return Byte array conversion of data.
             * @see http://www.daniweb.com/code/snippet216874.html
             */
            public static byte[] toByta(long data) {
                return new byte[] {
                    (byte) ((data >> 56) & 0xff),
                    (byte) ((data >> 48) & 0xff),
                    (byte) ((data >> 40) & 0xff),
                    (byte) ((data >> 32) & 0xff),
                    (byte) ((data >> 24) & 0xff),
                    (byte) ((data >> 16) & 0xff),
                    (byte) ((data >> 8) & 0xff),
                    (byte) ((data >> 0) & 0xff),
                };
            }

            /**
             * Converts an array of bytes to a long value.
             * @param data The target variable.
             * @return Long value conversion of data.
             * @see http://www.daniweb.com/code/snippet216874.html
             */
            public static long toLong(byte[] data) {
                if (data == null || data.length != 8) return 0x0;
                return (long) (
                    // (Below) convert to longs before shift because digits
                    // are lost with ints beyond the 32-bit limit
                    (long) (0xff & data[0]) << 56 |
                    (long) (0xff & data[1]) << 48 |
                    (long) (0xff & data[2]) << 40 |
                    (long) (0xff & data[3]) << 32 |
                    (long) (0xff & data[4]) << 24 |
                    (long) (0xff & data[5]) << 16 |
                    (long) (0xff & data[6]) << 8 |
                    (long) (0xff & data[7]) << 0
                );
            }

            public static byte[] nextNoise() {
                byte[] result = new byte[100];
                // copy a frame's worth of data (or remaining data if it is less than frame length)
                int read = Math.min(err_data.length - err_pstn, 100);
                System.arraycopy(err_data, err_pstn, result, 0, read);
                if (read < 100) {
                    // if read data is less than frame length, reset position and add remaining data
                    err_pstn = 100 - read;
                    System.arraycopy(err_data, 0, result, read, err_pstn);
                } else {
                    // otherwise, increase position
                    err_pstn += 100;
                }
                // return noise segment
                return result;
            }

            /**
             * Given some original data, it is purposefully corrupted according to a
             * second data array (which is read from a file). In pseudocode:
             * corrupt = original xor corruptor
             * @param data The original data.
             * @return The new (corrupted) data.
             */
            public static byte[] corruptData(byte[] data) {
                // get the next noise sequence
                byte[] noise = nextNoise();
                // finally, xor data with noise and return result
                for (int i = 0; i < 100; i++) data[i] ^= noise[i];
                return data;
            }

            /**
             * Given an array of data, a packet is created. In pseudocode:
             * frame = corrupt(data) + crc(data)
             * @param data The original frame data.
             * @return The resulting frame data.
             */
            public static byte[] buildFrame(byte[] data) {
                // pack = [data]+crc32([data])
                byte[] hash = new byte[8];
                // calculate crc32 of data and copy it to byte array
                CRC32 crc = new CRC32();
                crc.update(data);
                hash = toByta(crc.getValue());
                // create a byte array holding the final packet
                byte[] pack = new byte[data.length + hash.length];
                // create the corrupted data
                byte[] crpt = new byte[data.length];
                crpt = corruptData(data);
                // copy corrupted data into pack
                System.arraycopy(crpt, 0, pack, 0, crpt.length);
                // copy hash into pack
                System.arraycopy(hash, 0, pack, data.length, hash.length);
                // return pack
                return pack;
            }

            /**
             * Verifies frame contents.
             * @param frame The frame data (data+crc32).
             * @return True if frame is valid, false otherwise.
             */
            public static boolean verifyFrame(byte[] frame) {
                // allocate hash and data variables
                byte[] hash = new byte[8];
                byte[] data = new byte[frame.length - hash.length];
                // read frame into hash and data variables
                System.arraycopy(frame, frame.length - hash.length, hash, 0, hash.length);
                System.arraycopy(frame, 0, data, 0, frame.length - hash.length);
                // get crc32 of data
                CRC32 crc = new CRC32();
                crc.update(data);
                // compare crc32 of data with crc32 of frame
                return crc.getValue() == toLong(hash);
            }

            /**
             * Transfers a file through a channel in frames and reconstructs it into a new file.
             * @param jpg_file File name of target file to transfer.
             * @param err_file The channel noise file used to simulate corruption.
             * @param out_file The name of the newly-created file.
             * @throws IOException
             */
            public static void transferFile(String jpg_file, String err_file, String out_file) throws IOException {
                // read file data into global variables
                jpg_data = readFileArray(new File(jpg_file));
                err_data = readFileArray(new File(err_file));
                err_pstn = 0;
                // variable that will hold the final (transfered) data
                byte[] out_data = new byte[jpg_data.length];
                // holds the current frame data
                byte[] frame_orig = new byte[100];
                byte[] frame_sent = new byte[100];
                // send file in chunks (frames) of 100 bytes
                for (int i = 0; i < Math.ceil(jpg_data.length / 100); i++) {
                    // copy jpg data into frame and init first-time switch
                    System.arraycopy(jpg_data, i * 100, frame_orig, 0, 100);
                    boolean not_first = false;
                    System.out.print("Packet #" + i + ": ");
                    // repeat getting same frame until frame crc matches with frame content
                    do {
                        if (not_first) System.out.print("F");
                        frame_sent = buildFrame(frame_orig);
                        not_first = true;
                    } while (!verifyFrame(frame_sent));
                    // usually, you'd constrain this by time to prevent infinite loops (in
                    // case the channel is so wacked up it doesn't get a single packet right)
                    // copy frame to image file
                    System.out.println("S");
                    System.arraycopy(frame_sent, 0, out_data, i * 100, 100);
                }
                System.out.println("\nDone.");
                writeFileArray(new File(out_file), out_data);
            }

            // global variables for file data and pointer
            public static byte[] jpg_data;
            public static byte[] err_data;
            public static int err_pstn = 0;

            public static void main(String[] args) throws IOException {
                // list of jpg files
                String[] jpg_file = {
                    "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo1.jpg",
                    "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo2.jpg",
                    "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo3.jpg",
                    "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo4.jpg"
                };
                // list of error patterns
                String[] err_file = {
                    "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 1.DAT",
                    "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 2.DAT",
                    "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 3.DAT",
                    "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 4.DAT"
                };
                // loop through all jpg/channel combinations and run tests
                for (int x = 0; x < jpg_file.length; x++) {
                    for (int y = 0; y < err_file.length; y++) {
                        System.out.println("Transfering photo" + (x + 1) + ".jpg using Pattern " + (y + 1) + "...");
                        transferFile(jpg_file[x], err_file[y], jpg_file[x].replace("photo", "CH#" + y + "_photo"));
                    }
                }
            }
        }

    Read the article

  • WCF configuration file: why do we need clientBaseAddress in Binding section?

    - by Captain Comic
    Hi, there are three sections in the WCF configuration for a service client. Look at the bindings section => clientBaseAddress. Why do we need to specify a callback address? Is this field required? Why is .NET unable to determine the address of the client? Does it mean that I can specify a callback service that is located on some other machine?

        <configuration>
          <system.serviceModel>
            <client>
              <endpoint address= />
            </client>
            <bindings>
              <wsDualHttpBinding>
                <binding name=
                         clientBaseAddress=
                         maxBufferPoolSize="2147483647"
                         maxReceivedMessageSize="2147483647">
                </binding>
              </wsDualHttpBinding>
            </bindings>
            <behaviors>
              <endpointBehaviors>
                <behavior name=>
                </behavior>
              </endpointBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>
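
    For context, a sketch of why the setting exists (the contracts here are hypothetical): with wsDualHttpBinding the callback travels on a separate HTTP connection from the service back to the client, so the client must host a listening endpoint of its own, and clientBaseAddress is where that listener is rooted.

        [ServiceContract(CallbackContract = typeof(IMyCallback))]
        public interface IMyService
        {
            [OperationContract]
            void Subscribe();
        }

        public interface IMyCallback
        {
            // Invoked by the service over the client-hosted HTTP listener,
            // which WCF roots at the configured clientBaseAddress.
            [OperationContract(IsOneWay = true)]
            void Notify(string message);
        }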

    Read the article

  • Silverlight, WCF service, integrated security AND ssl/https not possible?

    - by Flores
    I have this setup that works perfectly when using http:

    - a Silverlight 3 client
    - a .NET 4 WCF service hosted in IIS with basicHttpBinding
    - integrated security on the site

    When setting https to required on the website, the setup stops working. Using the wcftestclient on the uri I get the message: "The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Negotiate,NTLM'. The remote server returned an error: (401) Unauthorized." Maybe this makes sense, because the wcftestclient does not pass credentials? In the web.config the security mode for the service binding is set to 'Transport'. The Silverlight client is created like this:

        BasicHttpBinding basicHttpBinding = new BasicHttpBinding();
        basicHttpBinding.Security.Mode = BasicHttpSecurityMode.Transport;
        var serviceClient = new ImportServiceClient(basicHttpBinding, serviceAddress);

    The service address of course starts with https://, yet the Silverlight client reports this error: "The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via". Remember, switching it back to http (and setting the security mode to 'TransportCredentialOnly') makes everything work again. Is the setup I want even supported? If so, how should it be configured?

    Read the article

  • singleton pattern in Windows Activation Service

    - by Joshua
    Hello, I have a few WCF services that are currently self-hosted in a very basic NT service. I want to expand my application to add provisioning of WCF services and updates, as well as isolation (I want each WCF service to be in its own AppDomain). These WCF services contain logic that needs to run on a regular basis, pinging the database and getting information from external devices, so that when a request comes in the data is readily available. I'm thinking about trying out Windows Activation Service, because I really like the provisioning and isolation that come with a managed services infrastructure; if I didn't use WAS I would essentially have to write the same code myself. From what I understand, though, WAS does not really support the model of having a service that is running before someone actually calls a method on it. The article I read (MSDN Article Link) states: "That means in essence that out-of-the-box WAS hosting is not something that is really suited for sessionful or singleton services. It is more suitable for stateless per-call services." It does say "out of the box", so I'm wondering if anyone has used WAS to host a WCF service that really behaves more like an NT service (starting and stopping independently of having a method called on it). Any other ideas would be great too. I was planning on writing this infrastructure myself: hosting WCF services in a custom ServiceHost, putting their execution in a separate AppDomain, and allowing for provisioning of these services after initial installation, along with updates. However, I would MUCH MUCH MUCH rather not own that code if I don't have to. Thanks, Joshua

    Read the article

  • How can I use nested Async (WCF) calls within foreach loops in Silverlight?

    - by Peter St Angelo
    The following code contains a few nested async calls within some foreach loops. I know the Silverlight/WCF calls are made asynchronously, but how can I ensure that my wcfPhotographers, wcfCategories and wcfSubCategories objects are ready before the foreach loops start? I'm sure I am going about this all the wrong way, and would appreciate any help you could give.

        private void PopulateControl()
        {
            List<CustomPhotographer> PhotographerList = new List<CustomPhotographer>();

            proxy.GetPhotographerNamesCompleted += proxy_GetPhotographerNamesCompleted;
            proxy.GetPhotographerNamesAsync();

            // for each photographer
            foreach (var eachPhotographer in wcfPhotographers)
            {
                CustomPhotographer thisPhotographer = new CustomPhotographer();
                thisPhotographer.PhotographerName = eachPhotographer.ContactName;
                thisPhotographer.PhotographerId = eachPhotographer.PhotographerID;
                thisPhotographer.Categories = new List<CustomCategory>();

                proxy.GetCategoryNamesFilteredByPhotographerCompleted += proxy_GetCategoryNamesFilteredByPhotographerCompleted;
                proxy.GetCategoryNamesFilteredByPhotographerAsync(thisPhotographer.PhotographerId);

                // for each category
                foreach (var eachCatergory in wcfCategories)
                {
                    CustomCategory thisCategory = new CustomCategory();
                    thisCategory.CategoryName = eachCatergory.CategoryName;
                    thisCategory.CategoryId = eachCatergory.CategoryID;
                    thisCategory.SubCategories = new List<CustomSubCategory>();

                    proxy.GetSubCategoryNamesFilteredByCategoryCompleted += proxy_GetSubCategoryNamesFilteredByCategoryCompleted;
                    proxy.GetSubCategoryNamesFilteredByCategoryAsync(thisPhotographer.PhotographerId, thisCategory.CategoryId);

                    // for each subcategory
                    foreach (var eachSubCatergory in wcfSubCategories)
                    {
                        CustomSubCategory thisSubCatergory = new CustomSubCategory();
                        thisSubCatergory.SubCategoryName = eachSubCatergory.SubCategoryName;
                        thisSubCatergory.SubCategoryId = eachSubCatergory.SubCategoryID;
                    }

                    thisPhotographer.Categories.Add(thisCategory);
                }

                PhotographerList.Add(thisPhotographer);
            }

            PhotographerNames.ItemsSource = PhotographerList;
        }

        void proxy_GetPhotographerNamesCompleted(object sender, GetPhotographerNamesCompletedEventArgs e)
        {
            wcfPhotographers = e.Result.ToList();
        }

        void proxy_GetCategoryNamesFilteredByPhotographerCompleted(object sender, GetCategoryNamesFilteredByPhotographerCompletedEventArgs e)
        {
            wcfCategories = e.Result.ToList();
        }

        void proxy_GetSubCategoryNamesFilteredByCategoryCompleted(object sender, GetSubCategoryNamesFilteredByCategoryCompletedEventArgs e)
        {
            wcfSubCategories = e.Result.ToList();
        }
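    One common way out (a sketch only, assuming the generated proxy exposes the usual userState overloads) is to drive each level from the previous level's Completed handler, and to use the userState argument to correlate each reply with its parent object, instead of looping over collections that have not been filled in yet. The subcategory level follows the same pattern as the category level shown here:

        private void PopulateControl()
        {
            var photographerList = new List<CustomPhotographer>();

            // Subscribe each handler once, up front.
            proxy.GetCategoryNamesFilteredByPhotographerCompleted += (s, e) =>
            {
                // The photographer this reply belongs to, passed as userState below.
                var owner = (CustomPhotographer)e.UserState;
                foreach (var category in e.Result)
                {
                    owner.Categories.Add(new CustomCategory
                    {
                        CategoryName = category.CategoryName,
                        CategoryId = category.CategoryID,
                        SubCategories = new List<CustomSubCategory>()
                    });
                }
            };

            proxy.GetPhotographerNamesCompleted += (s, e) =>
            {
                foreach (var photographer in e.Result)
                {
                    var thisPhotographer = new CustomPhotographer
                    {
                        PhotographerName = photographer.ContactName,
                        PhotographerId = photographer.PhotographerID,
                        Categories = new List<CustomCategory>()
                    };
                    photographerList.Add(thisPhotographer);

                    // Issue the child query only after the parent data has arrived,
                    // passing the parent as userState so the callback can find it.
                    proxy.GetCategoryNamesFilteredByPhotographerAsync(
                        thisPhotographer.PhotographerId, thisPhotographer);
                }
                // Categories arrive later; bind a collection that supports change
                // notification if the UI must update as they come in.
                PhotographerNames.ItemsSource = photographerList;
            };

            proxy.GetPhotographerNamesAsync();
        }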

    Read the article

  • How to restart a WCF server from within a client?

    - by djerry
    Hey guys, I'm using WCF for server-client communication. The server will need to run as a service, so there's no GUI. The admin can change settings using the client program, and for those changes to take effect on the server, it needs to restart. This is my server setup:

        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
        Uri address = new Uri("net.tcp://localhost:8000");
        //_svc = new ServiceHost(typeof(MonitoringSystemService), address);
        _monSysService = new MonitoringSystemService();
        _svc = new ServiceHost(_monSysService, address);
        publishMetaData(_svc, "http://localhost:8001");
        _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "Monitoring Server");
        _svc.Open();

    MonitoringSystemService is a class I'm using to handle client-server communication. It looks like this:

        [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, MaxItemsInObjectGraph = 2147483647)]
        public class MonitoringSystemService : IMonitoringSystemService {}

    So I need the client to trigger a restart method on the server, but I don't know how to restart (or even stop and start) the server. I hope I'm not missing any vital information. Thanks in advance.
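    One approach, sketched here rather than offered as a tested recipe: expose a Restart operation on the contract and have the service close and reopen its own host on a worker thread, so the reply to the client can complete before the channel goes down. _svc is the ServiceHost field from the question; RecreateHost is a hypothetical helper that rebuilds the host with the new settings.

        // using System.Threading;
        public void Restart()
        {
            // Queued so the Restart reply can be sent before the host shuts down;
            // Close() waits for in-flight operations to drain.
            ThreadPool.QueueUserWorkItem(_ =>
            {
                _svc.Close();
                _svc = RecreateHost();  // hypothetical: re-read settings, rebuild endpoints
                _svc.Open();
            });
        }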

    Read the article

  • Can a stateless WCF service benefit from built-in database connection pooling?

    - by vladimir
    I understand that a typical .NET application that accesses a (SQL Server) database doesn't have to do anything in particular in order to benefit from connection pooling. Even if an application repeatedly opens and closes database connections, they do get pooled by the framework (assuming that things such as credentials do not change from call to call). My usage scenario seems to be a bit different. When my service gets instantiated, it opens a database connection once, does some work, closes the connection and returns the result. Then it gets torn down by WCF, and the next incoming call creates a new instance of the service. In other words, my service gets instantiated per client call, as in [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]. The service accesses a SQL Server 2008 database. I'm using .NET Framework 3.5 SP1. Does connection pooling still work in this scenario, or do I need to roll my own connection pool in the form of a singleton or by some other means (IInstanceContextProvider?)? I would rather avoid reinventing the wheel, if possible.
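    For what it's worth, ADO.NET pooling is keyed on the exact connection string and lives at the process level, not the service-instance level, so the usual open-late/close-early pattern keeps benefiting from the pool across per-call instances. A minimal sketch (the table name is a hypothetical example):

        // using System.Data.SqlClient;
        private int CountOrders(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
            {
                // Open() draws from the process-wide pool; Dispose() returns the
                // connection to it, even though this service instance is per-call.
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }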

    Read the article

  • How to call a service operation at a REST-style WCF endpoint URI?

    - by Dieter Domanski
    Hi, is it possible to call a service operation at a WCF endpoint URI with a self-hosted service? I want to call some default service operation when the client enters the endpoint URI of the service. In the following sample these URIs correctly call the declared operations (SayHello, SayHi):
    - http://localhost:4711/clerk/hello
    - http://localhost:4711/clerk/hi
    But the URI
    - http://localhost:4711/clerk
    does not call the declared SayWelcome operation. Instead it leads to the well-known 'Metadata publishing disabled' page. Enabling mex does not help; in this case the mex page is shown at the endpoint URI.

        private void StartSampleServiceHost()
        {
            ServiceHost serviceHost = new ServiceHost(typeof(Clerk), new Uri("http://localhost:4711/clerk/"));
            ServiceEndpoint endpoint = serviceHost.AddServiceEndpoint(typeof(IClerk), new WebHttpBinding(), "");
            endpoint.Behaviors.Add(new WebHttpBehavior());
            serviceHost.Open();
        }

        [ServiceContract]
        public interface IClerk
        {
            [OperationContract, WebGet(UriTemplate = "")]
            Stream SayWelcome();

            [OperationContract, WebGet(UriTemplate = "/hello/")]
            Stream SayHello();

            [OperationContract, WebGet(UriTemplate = "/hi/")]
            Stream SayHi();
        }

        public class Clerk : IClerk
        {
            public Stream SayWelcome() { return Say("welcome"); }
            public Stream SayHello() { return Say("hello"); }
            public Stream SayHi() { return Say("hi"); }

            private Stream Say(string what)
            {
                string page = @"<html><body>" + what + "</body></html>";
                return new MemoryStream(Encoding.UTF8.GetBytes(page));
            }
        }

    Is there any way to disable the mex handling and enable a declared operation instead? Thanks in advance, Dieter
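    One thing worth trying (a sketch, not verified against this exact setup): the page shown at the base address is WCF's HTTP help page, and switching it off before opening the host gives the empty UriTemplate a chance to match.

        // using System.ServiceModel.Description;
        // Call before serviceHost.Open(): disable the default help page served
        // at the base address, which can shadow an operation with UriTemplate = "".
        ServiceDebugBehavior debug = serviceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
        if (debug == null)
        {
            debug = new ServiceDebugBehavior();
            serviceHost.Description.Behaviors.Add(debug);
        }
        debug.HttpHelpPageEnabled = false;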

    Read the article

  • How to detect a WCF session crash before calling a contract method?

    - by brain_pusher
    I am using session mode for my WCF service. The problem is the following: if the session is broken and no longer exists, the client can't know it before calling a contract method. For example, if the service has been restarted, the client's session id is invalid, because that session has been closed on the server side. I check the channel state before calling the contract and its value is CommunicationState.Opened even if the session is already broken. So, when I call the contract after this check I get a CommunicationException with this message: The remote endpoint no longer recognizes this sequence. This is most likely due to an abort on the remote endpoint. The value of wsrm:Identifier is not a known Sequence identifier. The reliable session was faulted. Is there any workaround? I need a way to get the actual session state before calling a contract so that I can restore it without getting an exception. P.S. The CommunicationException type is general, so I can't detect a session crash just by catching this exception. P.P.S. I have asked a similar question here, but in that case I didn't know the reason; now I don't know how to evade it.
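    Absent a reliable pre-call check, a common pattern is to treat the first failed call as the detection point: catch the exception, abort the now-unusable proxy, recreate it, and retry once. This only helps if recreating the proxy on any communication failure is acceptable; MyServiceClient is a hypothetical generated proxy type.

        try
        {
            proxy.DoWork();
        }
        catch (CommunicationException)
        {
            // The old session is gone; the faulted channel cannot be reused.
            proxy.Abort();
            proxy = new MyServiceClient();  // fresh channel, fresh session
            proxy.DoWork();                 // retry once
        }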

    Read the article

  • Visual Studio 2010 randomly unable to debug WCF service.

    - by rossisdead
    I'm running Visual Studio 2010 on a Windows 7 x64 machine, and occasionally VS gives me the good old "The remote procedure could not be debugged. This usually indicates that debugging has not been enabled on the server" error that a lot of people ask about. My problem, though, is that it seems to happen randomly (it can be anywhere from a few minutes to a few hours in), and after I've made plenty of successful calls to the service already. It doesn't prevent the service from working. It still returns values and doesn't throw any errors. The only difference is that the annoying dialog pops up every time I start to debug my application. I should mention that I'm connecting to the WCF service from a WPF application. If I launch the web site the service is part of, I don't get the dialog. A few of the things I've tried that do not work:
    - Killing and restarting the server.
    - Compiling the web server in x86.
    - Enabling tracing, but I couldn't find any problems.
    Is this just a bug in Visual Studio 2010, or is there something I'm missing?

    Read the article

  • Best way to catch a WCF exception in Silverlight?

    - by wahrhaft
    I have a Silverlight 2 application that is consuming a WCF service. As such, it uses asynchronous callbacks for all the calls to the methods of the service. If the service is not running, or it crashes, or the network goes down, etc. before or during one of these calls, an exception is generated as you would expect. The problem is, I don't know how to catch this exception.

    Because it is an asynchronous call, I can't wrap my begin call with a try/catch block and have it pick up an exception that happens after the program has moved on from that point. Because the service proxy is automatically generated, I can't put a try/catch block on each and every generated function that calls EndInvoke (where the exception actually shows up). These generated functions are also surrounded by External Code in the call stack, so there's nowhere else in the stack to put a try/catch either. I can't put the try/catch in my callback functions, because the exception occurs before they would get called.

    There is an Application_UnhandledException function in my App.xaml.cs, which captures all unhandled exceptions. I could use this, but it seems like a messy way to do it. I'd rather reserve this function for the truly unexpected errors (aka bugs) and not end up with code in this function for every circumstance I'd like to deal with in a specific way. Am I missing an obvious solution? Or am I stuck using Application_UnhandledException?

    [Edit] As mentioned below, the Error property is exactly what I was looking for. What is throwing me for a loop is the fact that the exception is thrown and appears to be uncaught, yet execution is able to continue. It triggers the Application_UnhandledException event and causes VS2008 to break execution, but continuing in the debugger allows execution to continue. It's not really a problem, it just seems odd.
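    For anyone landing here from search: as the edit above notes, the generated Completed event args carry an Error property, so failures can be handled per call without touching Application_UnhandledException. A minimal sketch; the proxy type, operation, and helper names are hypothetical:

        var client = new MyServiceClient();          // hypothetical generated proxy
        client.GetDataCompleted += (sender, e) =>
        {
            if (e.Error != null)
            {
                // Network failures, server crashes, etc. surface here;
                // e.Result must not be read when Error is non-null.
                ShowError(e.Error.Message);          // hypothetical UI helper
                return;
            }
            ProcessData(e.Result);                   // hypothetical success path
        };
        client.GetDataAsync();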

    Read the article
