Search Results

Search found 12101 results on 485 pages for 'objective c runtime'.


  • Setting useLegacyV2RuntimeActivationPolicy At Runtime

    - by Reed
    Version 4.0 of the .NET Framework included a new CLR which is almost entirely backwards compatible with the 2.0 version of the CLR. However, by default, mixed-mode assemblies targeting .NET 3.5sp1 and earlier will fail to load in a .NET 4 application. Fixing this requires setting useLegacyV2RuntimeActivationPolicy in the app.config for the application. While there are many good reasons for this decision, there are times when it is extremely frustrating, especially when writing a library. As such, there are (rare) times when it would be beneficial to set this in code, at runtime, and to verify that it took effect prior to receiving a FileLoadException. Typically, loading a pre-.NET 4 mixed-mode assembly is handled simply by changing your app.config file and including the relevant attribute in the startup element:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0"/>
          </startup>
        </configuration>

    This causes your application to run correctly and load the older, mixed-mode assembly without issues. For full details on what's happening here and why, I recommend reading Mark Miller's detailed explanation of this attribute and the reasoning behind it. Before I show any code, let me say: I strongly recommend the official approach of using app.config to set this policy. That being said, there are (rare) times when, for one reason or another, changing the application configuration file is less than ideal. While app.config is the supported way to handle this issue, the CLR Hosting API includes a means of setting the policy programmatically via the ICLRRuntimeInfo interface. Normally this is used when hosting the CLR in a native application, in order to set the policy at runtime prior to loading the assemblies. However, the F# Samples include a nice trick showing how to load this API and bind the policy at runtime. This was required in order to host the Managed DirectX API, which is built against an older version of the CLR. It is fairly easy to port to C#.
    Instead of a direct port, I also added a little addition: by trapping the COM exception received if the policy cannot be bound (which will occur if the 2.0 CLR is already bound), I also allow a runtime check of whether this policy was set up properly:

        public static class RuntimePolicyHelper
        {
            public static bool LegacyV2RuntimeEnabledSuccessfully { get; private set; }

            static RuntimePolicyHelper()
            {
                ICLRRuntimeInfo clrRuntimeInfo =
                    (ICLRRuntimeInfo)RuntimeEnvironment.GetRuntimeInterfaceAsObject(
                        Guid.Empty,
                        typeof(ICLRRuntimeInfo).GUID);
                try
                {
                    clrRuntimeInfo.BindAsLegacyV2Runtime();
                    LegacyV2RuntimeEnabledSuccessfully = true;
                }
                catch (COMException)
                {
                    // This occurs with an HRESULT meaning
                    // "A different runtime was already bound to the legacy CLR version 2 activation policy."
                    LegacyV2RuntimeEnabledSuccessfully = false;
                }
            }

            [ComImport]
            [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
            [Guid("BD39D1D2-BA2F-486A-89B0-B4B0CB466891")]
            private interface ICLRRuntimeInfo
            {
                void xGetVersionString();
                void xGetRuntimeDirectory();
                void xIsLoaded();
                void xIsLoadable();
                void xLoadErrorString();
                void xLoadLibrary();
                void xGetProcAddress();
                void xGetInterface();
                void xSetDefaultStartupFlags();
                void xGetDefaultStartupFlags();

                [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
                void BindAsLegacyV2Runtime();
            }
        }

    Using this, it's possible not only to set the policy at runtime, but also to verify, prior to loading your mixed-mode assembly, whether doing so will succeed. In my case, this was quite useful. I am working on a library purely for internal use which uses a numerical package that is supplied with both a completely managed and a native solver. The native solver uses a CLR 2 mixed-mode assembly, but is dramatically faster than the pure managed approach. By checking RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully at runtime, I can decide whether to enable the native solver, and only do so if I successfully bound this policy. There are some tricks required here. To enable this sort of fallback behavior, you must make these checks in a type that doesn't cause the mixed-mode assembly to be loaded; in my case, this forced me to encapsulate the library I was using entirely in a separate class, perform the check, then pass the required calls through to that class. Otherwise, the library will load before the hosting policy gets enabled, which in turn will fail. This code will also, of course, try to enable the runtime policy the first time you use the class, which typically means just before the first time you check the boolean value; as a result, checking it early in the application is more likely to allow it to work. Finally, if you're using a library, this has to be called prior to the 2.0 CLR loading. It will therefore fail if you try to use it to enable this policy in a plugin for most third-party applications that don't have their app.config set up properly, as they will likely have already loaded the 2.0 runtime. As an example, take a simple audio player.
    The code below shows how this can be used to decide properly, at runtime, to use the "native" API only if binding succeeded, and to fall back (or raise a nicer exception) if it failed:

        public class AudioPlayer
        {
            private IAudioEngine audioEngine;

            public AudioPlayer()
            {
                if (RuntimePolicyHelper.LegacyV2RuntimeEnabledSuccessfully)
                {
                    // This will load a CLR 2 mixed mode assembly
                    this.audioEngine = new AudioEngineNative();
                }
                else
                {
                    this.audioEngine = new AudioEngineManaged();
                }
            }

            public void Play(string filename)
            {
                this.audioEngine.Play(filename);
            }
        }

    Now, the warning: this approach works, but I would be very hesitant to use it in public-facing production code, especially for anything other than initializing your own application. While it should work in a library, using it has a very nasty side effect: you change the runtime policy of the executing application in a way that is very hidden and non-obvious.

    Read the article

  • Using runtime checking of code contracts in Visual Studio 2010

    - by DigiMortal
    In my last posting about code contracts I introduced how to check the input parameters of a randomizer using static contract checking. But you can also compile code contracts into your assemblies and use them at runtime. In this posting I will show you a simple example of runtime checking of code contracts. NB! If you want to play with the code and try out the things described here, feel free to download the example solution. If you are a speaker and want to use this solution as part of your sessions, feel free to do so, but don't forget to credit me and this blog as the source of the solution. And please let me know about your session; as a speaker I am very interested in it. :) To see how code contracts are checked at runtime, we have to enable runtime checking in the project properties. Make sure you have checked the box "Perform Runtime Contract Checking" and make sure you select "Full" from the dropdown. (See the screenshot in the original post: Visual Studio 2010 settings for code contracts, with runtime checking turned on.) Save the project settings, then compile the code and run it. As soon as code execution hits the call to GetRandomFromRangeContracted(), an exception is thrown. (If you are not currently playing with the solution referred to above, see the second screenshot in the original post: an exception of type ContractException is thrown when a contract is violated.) The exact type of the exception is ContractException and it is defined in the System.Diagnostics.Contracts.__ContractsRuntime namespace. In our example the message of the exception is the following: "Precondition failed: min < max  Min must be less than max". Besides the description we provided for the contract violation, the message also contains the violated contract type; in this case the type of contract is Precondition. Conclusion: runtime checking of code contracts enables you to ship code contracts with your code and have them checked every time your methods are called. This way you can ensure that all conditions are met to run the method, or an exception is thrown and the calling system has to handle the situation.
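    As a hedged illustration (the class name and method body are assumptions, not from the original post), a contracted randomizer matching the exception message above might look like this:

        using System;
        using System.Diagnostics.Contracts;

        public static class Randomizer
        {
            private static readonly Random Rnd = new Random();

            public static int GetRandomFromRangeContracted(int min, int max)
            {
                // Checked at runtime when "Perform Runtime Contract Checking" is enabled;
                // on violation a ContractException carries this description.
                Contract.Requires(min < max, "Min must be less than max");
                Contract.Ensures(Contract.Result<int>() >= min && Contract.Result<int>() <= max);

                return Rnd.Next(min, max + 1);
            }
        }

    Calling GetRandomFromRangeContracted(10, 5) with full runtime checking enabled then throws the ContractException described above.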

    Read the article

  • Size already defined

    - by John Smith
    I was messing with my Objective-C++ namespace today. I found that Handle, Size and Duration are already defined in ObjC++. What are they defined to be and where are they defined? I have only #imported Foundation/Foundation.h

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS; but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file; you can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this:
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service. It will:

    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure that we start with an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. Insert each record loaded from WASD into the table, one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account name and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" using "createTableService", specifying my storage account name and key:

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I created previously:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in a callback pattern; in fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in a POSIX async thread pool, and once it's done the callback function is performed. This is the reason we need to nest the table creation code inside the deletion callback: if we placed the table creation code after the deletion code, the two would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find the bug in the code below? I will describe it later in this post.
        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                            for (var i = 0; i < results.rows.length; i++) {
                                                var entity = {
                                                    "PartitionKey": results.rows[i][1],
                                                    "RowKey": results.rows[i][0],
                                                    "Value": results.rows[i][2]
                                                };
                                                client.insertEntity(tableName, entity, function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("entity inserted");
                                                    }
                                                });
                                            }
                                            // send the response
                                            console.log("all done");
                                            res.send(200, "All done!");
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Now we could publish this to the cloud and try it, but normally we'd better test it in the local emulator first. The Node.js SDK has three built-in properties which provide the account name, key and host address for the local storage emulator, and we can use them to initialize our table service client. We also need to change the SQL connection string to point at my local database. The code changes as below:

        // windows azure sql database
        //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;";
        // sql server
        var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        // windows azure storage
        //var client = azure.createTableService(storageAccountName, storageAccountKey);
        // local storage emulator
        var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

    Now let's run the application and navigate to "localhost:12345/was/init" (I hosted it on port 12345). We can see it transformed the data from my local database to the local table service. Everything looks fine, but there is a bug in my code. If we look at the Node.js command window we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we insert the records, the table service insert methods execute in parallel, and the operation of sending the response also executes in parallel, even though I wrote it at the end of my logic.
    The correct logic should be: only when all entities have been copied to the table service with no error do I send the response to the browser; otherwise I send an error message. To do so I need to import another module named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to iterate over; the second is the operation to perform on each item; and the third is invoked when all items have been processed or any error has occurred, which is where we send our response to the browser:

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally, and now the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key. (We can also provide more complex query criteria; see, for example, the code here.) In the code below I query an entity by partition key and row key, and return the proper localization value in the response:

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    I then tested it on the local emulator. Finally, if we want to publish this application to the cloud, we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you probably know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. That means we can define these values in the CSCFG and CSDEF files and let the runtime retrieve the proper values. For example, we can add role settings through the property window of the role, specifying the connection string and storage account for the cloud and local environments, and we can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve configuration settings, endpoints, etc. In the code below I declared the connection string variables and then used the SDK to retrieve the settings and initialize the table client:

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    This way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK:

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we tested the application right now we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js is launched inside the role, the named pipe is established between the worker role class and the service runtime, so our Node.js application cannot use it. To fix this, we need to open the CSDEF file under the azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. The Node.js application will then have the connection to the service runtime and be able to read the configuration. Starting Node.js in the local emulator, we can see it retrieved the connection string and storage account for the local environment.
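    As an illustrative sketch (the role name and command line are assumptions, not taken from the original post), the CSDEF fragment described above might look like this:

        <WorkerRole name="WorkerRole1" vmsize="Small">
          <Runtime>
            <EntryPoint>
              <!-- Make node.exe the role's entry point so the service runtime
                   named pipe is established with the Node.js process itself. -->
              <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
            </EntryPoint>
          </Runtime>
        </WorkerRole>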
    And if we publish our application to azure, it works with WASD and the storage service through the configuration for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information, and showed that in order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is simple to learn and deploy, and utilizes a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, though "azure" does support using a storage connection string to create the storage client. Microsoft is working on making them easier to use, and on adding more features and functionality. PS: you can download the source code here. You can download the source code of my "Copy all always" tool here. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Unable to find standard libraries when compiling Objective-C using GNUstep on Windows

    - by Jason Roberts
    I just installed GNUstep on my Windows XP machine and I'm attempting to compile the following Objective-C Hello World program from the command line:

        #import <Foundation/Foundation.h>

        int main(int argc, const char *argv[])
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSLog(@"Hello world\n");
            [pool drain];
            return 0;
        }

    When I try to compile the program from the command line like so:

        gcc hello.m -o hello

    I end up getting the following error:

        hello.m:1:34: Foundation/Foundation.h: No such file or directory

    Is there something I need to do in order to inform the compiler of where the standard Objective-C libraries are located?
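    A hedged note, since the question invites it: with a typical GNUstep installation (paths and shell are assumptions; on Windows this is usually run from the GNUstep MSYS shell), the common fix is to let gnustep-config supply the compiler and linker flags:

        gcc `gnustep-config --objc-flags` hello.m -o hello `gnustep-config --base-libs`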

    Read the article

  • Objective-C: how to prevent abstraction leaks

    - by iter
    I gather that in Objective-C I must declare instance variables as part of the interface of my class even if these variables are implementation details and have private access. In "subjective" C, I can declare a variable in my .c file and it is not visible outside of that compilation unit. I can declare it in the corresponding .h file, and then anyone who links in that compilation unit can see the variable. I wonder if there is an equivalent choice in Objective-C, or if I must indeed declare every ivar in the .h for my class. Ari.
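    One hedged sketch of a way around this (names illustrative; this assumes the modern, non-fragile Objective-C runtime, where class extensions may declare ivars): keep the ivar in a class extension inside the .m file, so the header exposes only the public interface:

        // Foo.h -- public interface, no implementation details
        #import <Foundation/Foundation.h>

        @interface Foo : NSObject
        - (void)doWork;
        @end

        // Foo.m -- the class extension hides the ivar from header consumers
        @interface Foo () {
            NSUInteger _internalCounter;   // not visible outside this file
        }
        @end

        @implementation Foo
        - (void)doWork {
            _internalCounter++;
        }
        @end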

    Read the article

  • Handler invocation speed: Objective-C vs virtual functions

    - by Kerido
    I heard that calling a handler (delegate, etc.) in Objective-C can be even faster than calling a virtual function in C++. Is that really correct? If so, how can that be? AFAIK, virtual functions are not that slow to call. At least, this is my understanding of what happens when a virtual function is called:

    1. Compute the index of the function pointer location in the vtbl.
    2. Obtain the pointer to the vtbl.
    3. Dereference the pointer and obtain the beginning of the array of function pointers.
    4. Offset (in pointer scale) the beginning of the array by the index value obtained in step 1.
    5. Issue a call instruction.

    Unfortunately, I don't know Objective-C, so it's hard for me to compare performance. But at least the mechanism of a virtual function call doesn't look that slow, right? How can something other than a static function call be faster?
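    One mechanism worth knowing in this comparison (a hedged sketch with invented names, not from the question): Objective-C lets you look up the IMP, the plain C function pointer behind a selector, once, and then call it directly, so repeated handler invocations can cost about the same as a direct C call:

        #import <Foundation/Foundation.h>

        @interface Handler : NSObject
        - (void)handleEvent;
        @end

        @implementation Handler
        - (void)handleEvent { /* react to the event */ }
        @end

        typedef void (*HandleEventIMP)(id, SEL);

        static void pumpEvents(Handler *handler) {
            SEL sel = @selector(handleEvent);
            // One dynamic lookup, then plain function-pointer calls in the loop.
            HandleEventIMP imp = (HandleEventIMP)[handler methodForSelector:sel];
            for (int i = 0; i < 1000000; i++) {
                imp(handler, sel);
            }
        }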

    Read the article

  • Objective-C Pointer to class that implements a protocol

    - by Winder
    I have three classes which implement the same protocol and have the same parent class, which doesn't implement the protocol. Normally I would make the protocol methods pure virtual functions in the parent class, but I couldn't find an Objective-C way to do that. How can I use polymorphism on these subclasses and call the functions declared in the protocol without warnings? Some pseudocode, if that didn't make sense:

        @interface superclass : NSObject {}
        @end

        @interface child1 : superclass <MyProtocol> {}
        @end

        @interface child2 : superclass <MyProtocol> {}
        @end

    The consumer of these classes:

        @class child1;
        @class child2;
        @class superclass;

        @interface SomeViewController : UIViewController {
            child1 *oneView;
            child2 *otherView;
            superclass *currentView;
        }
        @end

        - (void)someMethod {
            [currentView protocolFunction];
        }

    The only nice way I've found to do pure virtual functions in Objective-C is a hack: putting [self doesNotRecognizeSelector:_cmd]; in the parent class. But it isn't ideal.
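    A hedged sketch of one common resolution (reusing the question's names): give the shared pointer a protocol-qualified type, so the compiler knows the object conforms even though its static type is the non-conforming parent:

        // In SomeViewController's ivar block, either qualify the pointer:
        superclass<MyProtocol> *currentView;

        // ...or drop the class and keep only the protocol:
        // id<MyProtocol> currentView;

        // Either way, this call now compiles without warnings:
        [currentView protocolFunction];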

    Read the article

  • Adding C++ Object to Objective-C Class

    - by Winder
    I'm trying to mix C++ and Objective-C. I've made it most of the way, but would like to have a single interface class between the Objective-C and C++ code, and therefore a persistent C++ object in the view controller's interface. This fails with an error forbidding the declaration of 'myCppFile' with no type:

        #import <UIKit/UIKit.h>
        #import "GLView.h"
        #import "myCppFile.h"

        @interface GLViewController : UIViewController <GLViewDelegate>
        {
            myCppFile cppobject;
        }
        @end

    However, this works just fine in the .mm implementation file (it doesn't solve my problem, because I want cppobject to persist between calls):

        #import "myCppFile.h"

        @implementation GLViewController

        - (void)drawView:(UIView *)theView
        {
            myCppFile cppobject;
            cppobject.draw();
        }

        @end
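    A hedged sketch of one common workaround (file and class names follow the question; the guard is an assumption about how the header is consumed): hold the C++ object through a pointer and only name the type in the header, so the header stays legal for plain Objective-C translation units while the .mm file owns the object's lifetime:

        // GLViewController.h
        #import <UIKit/UIKit.h>

        #ifdef __cplusplus
        class myCppFile;                      // forward declaration for C++/ObjC++ consumers
        #else
        typedef struct myCppFile myCppFile;   // opaque type for plain Objective-C consumers
        #endif

        @interface GLViewController : UIViewController {
            myCppFile *cppobject;             // persists between calls
        }
        @end

        // GLViewController.mm (Objective-C++): allocate in init with
        // cppobject = new myCppFile(); and free it in dealloc with delete cppobject;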

    Read the article

  • Wishlist for Objective-C IDE/Xcode ?

    - by Null Pointer
    My company is evaluating the possibility of developing a set of tools for Objective-C that extend Xcode's default functionality (basically, we are thinking about providing better navigation, semantic search, more refactorings, quick fixes, and improved code completion, inspired by Visual Assist). So we would like to ask Xcode/Objective-C developers: Do you feel that you are missing some features in Xcode? What is your wish list? Are you considering the possibility of using add-ons to Xcode which are not created by Apple? Would you be willing to pay for these add-ons, or would you only consider free solutions?

    Read the article

  • Changing database structure at runtime with Entity Framework?

    - by Teddy
    Hi. I have to write a solution that uses different databases, with different structures, from the same code. So when a user logs in to the application, I determine at runtime which database he/she is connected to. The user can create tables and columns at any time and has to see the change on the fly. The reason I use one and the same code is that the information is manipulated the same way across the different databases. How can I accomplish this at runtime? Is the Entity Framework actually a good solution for my problem? Thanks in advance.

    Read the article

  • Debugging Objective C JNI code

    - by thatidiotguy
    Here is the situation: I have a client's Java project open in Eclipse. It uses a JNI library created by an Xcode Objective-C project. Is there any good way for me to debug the Objective-C code from Eclipse when I execute the Java code? Obviously Eclipse's default debugger cannot step into the JNI library file, and we lose the thread (thread meaning investigative thread here, not programming thread). Any advice or input is appreciated, as the code base is large enough that following the client's code will be radically faster than the other options. Thanks. EDIT: It should be noted that the JNI library is written in Objective-C because it integrates with Mac OS X: it uses the Cocoa framework to reach the Apple speech API.

    Read the article

  • double checked locking - objective c

    - by bandejapaisa
    I realise double-checked locking is flawed in Java due to the memory model, but that is usually discussed in connection with the singleton pattern, to optimize creation of the singleton. What about this case in Objective-C: I have a boolean flag that records whether my application is streaming data or not. I have three methods, startStreaming, stopStreaming and streamingDataReceived, and I protect them from multiple threads like this:

        - (void)streamingDataReceived:(StreamingData *)streamingData {
            if (self.isStreaming) {
                @synchronized(self) {
                    if (self.isStreaming) {
                        // ...

        - (void)stopStreaming {
            if (self.isStreaming) {
                @synchronized(self) {
                    if (self.isStreaming) {
                        // ...

        - (void)startStreaming:(NSArray *)watchlistInstrumentData {
            if (!self.isStreaming) {
                @synchronized(self) {
                    if (!self.isStreaming) {
                        // ...

    Is this double check unnecessary? Does the double check have similar problems in Objective-C as in Java? What are the alternatives to this pattern (anti-pattern)? Thanks
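    For the singleton case the question alludes to, a hedged sketch of the usual Objective-C alternative to double-checked locking (class name invented): GCD's dispatch_once guarantees one-time, thread-safe initialization without the flawed double check:

        #import <Foundation/Foundation.h>

        @interface StreamManager : NSObject
        + (instancetype)sharedManager;
        @end

        @implementation StreamManager

        + (instancetype)sharedManager {
            static StreamManager *shared = nil;
            static dispatch_once_t onceToken;
            dispatch_once(&onceToken, ^{
                // Runs exactly once, even with many concurrent callers.
                shared = [[StreamManager alloc] init];
            });
            return shared;
        }

        @end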

    Read the article

  • Use C++ with Objective-C in XCode

    - by prosseek
    I want to use/reuse a C++ object from Objective-C. I have a hello.h that contains the class definition, and hello.cpp for the class implementation:

        class Hello {
            int getX() ...
        };

    And I use this class in an Objective-C method:

        #import "hello.h"
        ...
        - (IBAction)adderTwo:(id)sender
        {
            Hello *hi = new Hello();
            int value = hi->getX();
            NSLog(@"Hello %d", value);
            [textField setIntValue:value];
        }

    When I compile the code in Xcode, I get this error message:

        class Hello *XXXXX Users/smcho/Desktop/cocoa/adderTwo/hello.h:9:0
        /Users/smcho/Desktop/cocoa/adderTwo/hello.h:9: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'Hello'

    Q: What went wrong? Am I missing something?

    Read the article

  • XCode 3.2 - can't create Objective-C class

    - by Urizen
    My installation of Xcode 3.2 has begun to exhibit some strange behaviour. When I try to add a basic Objective-C class (inheriting from NSObject) to my iPhone project, I get the following popup: "Alert. Couldn't load the next page". The above-mentioned error happens at the point of trying to create the file (i.e. the creation process doesn't get to the stage where you are asked to input the file name). It might also be worth mentioning that where there previously used to be a description for the file (in the file creation window), it now states "No description available". I can create other files (e.g. a UIViewController subclass, etc.) but not the basic Objective-C class. I have updated the documentation and restarted the machine and Xcode several times without success. I would be extremely grateful for any assistance, as I am relatively new to the Xcode environment.

    Read the article

  • how to extract only the 1st 2 bytes of NSString in Objective-C for iPhone programming

    - by suse
    Hello,
    1) How do I read the data from a read stream in Objective-C? The code below tells me how many bytes were read from the stream, but how do I know what data was read?

        CFIndex cf = CFReadStreamRead(Stream, buffer, length);

    2) How do I extract only the first 2 bytes of an NSString in Objective-C for iPhone programming? For example, with this string:

        NSString *str = @"017MacApp";

    the 1st byte has 0 in it, and the 2nd byte has 17 in it. How do I extract the 0 and the 17 into a byte array? I know that the code below would give me back an int value from the byte array:

        ((b[0] & 0xFF) << 8) + (b[1] & 0xFF);

    but how do I put 0 into b[0] and 17 into b[1]? Please help me solve this.
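    A hedged sketch of one way to do the split (assuming, as the example implies, that "0" and "17" are numeric characters at fixed positions in the string):

        NSString *str = @"017MacApp";
        unsigned char b[2];
        b[0] = (unsigned char)[[str substringWithRange:NSMakeRange(0, 1)] intValue]; // "0"  -> 0
        b[1] = (unsigned char)[[str substringWithRange:NSMakeRange(1, 2)] intValue]; // "17" -> 17
        int value = ((b[0] & 0xFF) << 8) + (b[1] & 0xFF); // recombine as in the question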

    Read the article

  • Frustrated with Objective-c code...

    - by Moshe
    Well, I've started iPod/iPhone programming with Head First iPhone Development (O'Reilly) and I'm typing code out of the book. There are two problems; one is programming related and the other is not. First, I don't understand the format of Objective-C methods, and I'm now getting a few errors based on source code from the book. Which leads to my next issue: some of the code seems buggy. I think so because I couldn't get the code to run without modifying it. The book has some typos in the text since it's a first edition and whatnot, but could my "fixing" the code have something to do with it? So: where can I learn more about Objective-C methods and how they work, in terms of structure and where the return type and arguments go? For those with the book, I'm in the middle of the InstaTweet app towards the beginning. Thanks.
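    For the method-format question, a hedged illustration of Objective-C method anatomy (names invented for the example):

        #import <Foundation/Foundation.h>

        @interface Greeter : NSObject
        - (NSString *)greetingForName:(NSString *)name punctuation:(NSString *)mark;
        @end

        @implementation Greeter
        // '-' marks an instance method ('+' would be a class method).
        // The return type sits in parentheses up front; each argument
        // follows a labeled piece of the selector, with its own type in parentheses.
        - (NSString *)greetingForName:(NSString *)name punctuation:(NSString *)mark
        {
            return [NSString stringWithFormat:@"Hello, %@%@", name, mark];
        }
        @end

        // Invoked as:
        // NSString *s = [greeter greetingForName:@"Moshe" punctuation:@"!"];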

    Read the article

  • Importing Swift classes within a Objective-C Framework

    - by theMonster
    I have a custom framework that contains a number of Objective-C classes, and within the framework I'd like to add more classes using Swift. However, when I try to expose the Swift classes to the Objective-C code by importing MyProduct-Swift.h, I get "MyProduct-Swift.h file not found". I've tried this in a single view template and it works fine. Is it not possible to import Swift within a framework? I've also verified that I have set the Defines Module setting and the Module Name; I've tried it with and without these settings.
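    A hedged pointer (the product name follows the question): inside a framework target, the generated Swift header is imported with the framework-style path rather than the bare file name, and only Swift declarations visible to Objective-C appear in it:

        // In an Objective-C file inside the same framework target:
        #import <MyProduct/MyProduct-Swift.h>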

    Read the article

  • Sorting an array of Objective-c objects

    - by davbryn
    So I have a custom class Foo that has a number of members:

        @interface Foo : NSObject {
            NSString *title;
            BOOL taken;
            NSDate *dateCreated;
        }

    And in another class I have an NSMutableArray containing a list of these objects. I would very much like to sort this array based on the dateCreated property. I understand I could write my own sorter for this (iterate the array and rearrange based on the date), but I was wondering if there is a proper Objective-C way of achieving this? Some sort of sorting mechanism where I can provide the member variable to sort by would be great. In C++ I used to overload the comparison operators, and this allowed me to sort by object, but I have a funny feeling Objective-C might offer a nicer alternative. Many thanks
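    A hedged sketch of the idiomatic route (assuming dateCreated is KVC-accessible, e.g. exposed as a property): NSSortDescriptor lets you name the member to sort by:

        NSSortDescriptor *byDate =
            [NSSortDescriptor sortDescriptorWithKey:@"dateCreated" ascending:YES];

        // Sort the mutable array in place...
        [myMutableArray sortUsingDescriptors:[NSArray arrayWithObject:byDate]];

        // ...or produce a sorted copy without mutating the original.
        NSArray *sorted = [myMutableArray sortedArrayUsingDescriptors:
                              [NSArray arrayWithObject:byDate]];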

    Read the article
