Search Results

Search found 16311 results on 653 pages for 'environment variables'.


  • Silverlight Cream for November 25, 2011 -- #1174

    - by Dave Campbell
    In this Issue: Michael Collier, Samidip Basu, Jesse Liberty, Dhananjay Kumar, and Michael Crump.
    Above the Fold: WP7: "31 Days of Mango | Day #16: Isolated Storage Explorer" (Samidip Basu); Metro/WinRT/W8: "1360x768x32 Resolution in Windows 8 in VirtualBox" (Michael Crump).
    Shoutouts: Michael Palermo's latest Desert Mountain Developers is up; Michael Washington's latest Visual Studio #LightSwitch Daily is up; Alex Golesh releases a Silverlight 5-friendly version of his external map manifest file tool: Utility: Extmap Maker v1.1.
    From SilverlightCream.com:
    - 31 Days of Mango | Day #17: Using Windows Azure: Michael Collier has Jeff Blankenburg's Day 17 and is talking about Azure services for your Phone apps... a great discussion with good diagrams, code, and an entire project to download.
    - 31 Days of Mango | Day #16: Isolated Storage Explorer: Samidip Basu has Jeff Blankenburg's 31 Days for Day 16, discussing ISO and the Isolated Storage Explorer, which helps peruse ISO either in the emulator or on your device.
    - Test Driven Development - Testing Private Values: Jesse Liberty has a post up discussing TDD in his latest Full Stack excerpt, wherein he and Jon Galloway are building a Pomodoro timer app. He has a solution for dealing with private member variables and is looking for feedback.
    - Video on How to work with System Tray Progress Indicator in Windows Phone 7: Dhananjay Kumar's latest video tutorial is up, covering working with the System Tray Progress Indicator in WP7, as the title says.
    - 1360x768x32 Resolution in Windows 8 in VirtualBox: Michael Crump is using a non-standard resolution with the Windows 8 preview and demonstrates how to make that all work with VirtualBox.

    Read the article

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the number of tools that hosting companies provide to track and quantify traffic data and statistics. I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every web site known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward in terms of implementation and application, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at the error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data within the database. My barbaric and time-consuming plan of defense is just to monitor visitor statistics for those obvious patterns of abuse and ban the offending IPs or ranges of IPs accordingly. Aside from that, I don't know what else I could do to prevent all of the ping pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web & db server) and proactively deal with these sources of mayhem?
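    As an aside to the monitoring question: the log entries described above are classic SQL injection probes against values taken straight from the URL, and whichever tool is chosen, parameterizing every query removes most of their payoff. A minimal sketch in C#/ADO.NET follows; the connection string, table, and column names are illustrative, not taken from the question.
    using System.Data.SqlClient;

    static class SafeQueries
    {
        // Illustrative only: the table and column names are not from the question.
        public static string GetTitle(string connectionString, string idFromUrl)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Title FROM Resource WHERE Id = @id", conn))
            {
                // The request value travels as data, so a payload like
                // "1; DROP TABLE Resource--" can no longer change the query's shape.
                cmd.Parameters.AddWithValue("@id", idFromUrl);
                conn.Open();
                return cmd.ExecuteScalar() as string;
            }
        }
    }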

    Read the article

  • Output = MAXDOP 1

    - by Dave Ballantyne
    It is widely known that data modifications on table variables do not support parallelism; Peter Larsson has a good example of that here. Whilst tracking down a performance issue, I saw that using the OUTPUT clause also causes parallelism not to be used. By way of example, first let's create two tables with a simple parent and child (one to one) relationship, and then populate them with 100,000 rows.
    Drop table Parent
    Drop table Child
    go
    create table Parent(id integer identity Primary Key, data1 char(255))
    Create Table Child(id integer Primary Key)
    go
    insert into Parent(data1)
    Select top 1000000 NULL from sys.columns a cross join sys.columns b
    insert into Child
    Select id from Parent
    go
    If we then execute
    update Parent set data1 = ''
    from Parent
    join Child on Parent.Id = Child.Id
    where Parent.Id % 100 = 1 and Child.id % 100 = 1
    we should see an execution plan that uses parallelism. However, if the OUTPUT clause is now used
    update Parent set data1 = ''
    output inserted.id
    from Parent
    join Child on Parent.Id = Child.Id
    where Parent.Id % 100 = 1 and Child.id % 100 = 1
    the execution plan shows that parallelism was not used. Make of that what you will, but I thought that this was a pretty unexpected outcome. Update: Laurence Hoff has mailed me to note that when the OUTPUT results are captured to a temporary table using the INTO clause, then parallelism is used. Naturally, if you use a table variable then there is still no parallelism.

    Read the article

  • Designing javascript chart library

    - by coolscitist
    I started coding a chart library on top of d3js: My chart library. I read Javascript API reusability and Towards reusable charts. However, I am NOT really following the suggestions because I am not really convinced about them. This is how my library can be used to create a bubble chart: var chart = new XYBubbleChart(); chart.data = [{"xValue":200,"yValue":300},{"xValue":400,"yValue":200},{"xValue":100,"yValue":310}]; //set data chart.dataKey.x = "xValue"; chart.dataKey.y = "yValue"; chart.elementId = "#chart"; chart.createChart(); Here are my questions: It does not use chaining. Is it a big issue? Every property and function is exposed publicly. (Example: width, height are exposed in Chart.js). OOP is all about abstraction and hiding, but I don't really see the point right now. I think exposing everything gives flexibility to change property and functionality inside subclasses and objects without writing a lot of code. What could be pitfalls of such exposure? I have implemented functions like: zooming, "showing info boxes when data point is clicked" as "abilities". (example: XYZoomingAbility.js). Basically, such "abilities" accept "chart" object, play around with public variables of "chart" to add functionality. What this allows me to do is to add an ability by writing: activateZoomAbility(chartObject); My goal is to separate "visualization" from "interactivity". I want "interactivity" like: zooming to be plugged into the chart rather than built inside the chart. Like, I don't want my bubble chart to know anything about "zooming". However, I do want zoomable bubble chart. What is the best way to do this? How to test and what to test? I have written mixed tests: jasmine and actual html files so that I can test manually on browser.

    Read the article

  • MySQL Policy-Based Auditing Webinar Recording Now Available

    - by Rob Young
    For those who missed the live event, the recording of the "How to Add Policy-Based Auditing to your MySQL Applications" webinar is now available. You can view it here. This presentation builds on my earlier blog post on MySQL Enterprise Audit that was announced at MySQL Connect in late September. The web presentation expands on the introductory blog and covers:
    - The regulatory problem to be solved (internal audit, PCI, Sarbanes-Oxley, HIPAA, others)
    - MySQL Audit solutions for both Community and Enterprise users: the General Log (use the basic features of the MySQL server), the MySQL 5.5 open audit API (use your time and talent to build your own solution), or MySQL Enterprise Audit (use the out of the box, ready for production solution from MySQL)
    - Simple, step-by-step process for installing, enabling and configuring the MySQL Enterprise Audit plugin for use with existing apps
    - New variables and options for tuning the MySQL Enterprise Audit plugin for your specific use case
    - Best practices for securing and managing audit log files and archived images
    - Roadmap for adding an integrated solution around MySQL Enterprise Audit for MySQL-only and Oracle/MySQL shops
    You can learn all the technical details on MySQL Enterprise Audit in the MySQL docs and learn all about MySQL Enterprise Edition and Auditing here. As always, thanks for your support of MySQL!

    Read the article

  • Scaling background without scaling foreground in platformer?

    - by David Xu
    I'm currently developing a platform game and I've run into a problem with scaling resolutions. I want a different resolution of the game to still display the foreground unscaled (characters, tiles, etc) but I want the background to be scaled to fit into the window. To explain this better, my viewport has 4 variables: (x, y, width, height) where x and y are the top left corner and width and height are the dimensions. These can be either 800x600, 1024x768 or 1280x960. When I design my levels, I design everything for the highest resolution (1280x960) and expect the game engine to scale it down if a user is running in a lower resolution. I have tried the following to make it work but nothing I've come up with solves it so far:
    scale = view->width / 1280;
    drawX = x * scale;
    drawY = y * scale;
    (this makes the translation too small for low resolution) and
    scale = view->width / 1280;
    bgWidth = background->width * scale;
    bgHeight = background->height * scale;
    drawX = x + background->width / 2 - bgWidth / 2;
    drawY = y + background->height / 2 - bgHeight / 2;
    (this makes the translation completely wrong at the edges of the map)
    The thing is, no matter what resolution the game is run at, the map remains the same size, and the foreground is unscaled. (With a lower resolution you just see less of the foreground in the viewport.) I was wondering if anyone had any idea how to solve this problem? Thank you in advance!
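    One way to read the requirement is that the background simply fills whatever window is in use while the level geometry stays at its native scale. Below is a minimal sketch of that idea, assuming an XNA-style SpriteBatch; the texture and camera names are illustrative and not taken from the question.
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class SceneRenderer
    {
        public static void DrawScene(SpriteBatch spriteBatch, Texture2D background, Texture2D tile,
                                     Rectangle view, Vector2 tilePosition)
        {
            spriteBatch.Begin();
            // Background: stretch to the current window size, so 800x600 and 1280x960
            // both show the whole image.
            spriteBatch.Draw(background, new Rectangle(0, 0, view.Width, view.Height), Color.White);
            // Foreground: drawn at native size and only offset by the camera (view.X, view.Y),
            // so a smaller window simply shows less of the level.
            spriteBatch.Draw(tile, new Vector2(tilePosition.X - view.X, tilePosition.Y - view.Y), Color.White);
            spriteBatch.End();
        }
    }
    If the background is also meant to scroll with the camera, the offset applied to it would be scaled by view.Width / 1280 so its travel matches the smaller window, but that is a design choice the question leaves open.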

    Read the article

  • What do you call an obfuscator that isn't an obfuscator?

    - by Alex.Davies
    SmartAssembly, formerly {smartassembly}, version 5 is now available as an Early Access Build. You can get it here: http://www.red-gate.com/MessageBoard/viewforum.php?f=116 We're having second thoughts about the name change though. It isn't that we like the curly brackets, far from it. The trouble is that the first rule of product naming is to name a product by what it does. SmartAssembly may make an assembly smarter, but that's not something people really google for. The trouble is, I can't think of a better name for it. That's because SmartAssembly really does two completely separate things: Obfuscates Sets up your assembly for the awesome exception reports which get sent to you whenever your application crashes. You may have been (un?)lucky enough to see one in reflector if you use it. This is what those exception reports look like when they arrive back with the developer: Look at all those local variables! If you ask me, this is much cooler than the obfuscation. So obviously we don't want to call it just "Red Gate Obfuscator" or something, because it doesn't do justice to the exception reporting. What would you call it?

    Read the article

  • Using ASP.NET C# and Javascript

    - by ctck
    I'm looking for the most efficient / standardized way of passing data between client-side JavaScript code and C# code-behind in an ASP.NET application. Currently I've been using the following methods to achieve this, but they all feel a bit like a fudge. The way I pass data from JavaScript to the C# code-behind is by setting hidden ASP variables and triggering a postback:
    <asp:HiddenField ID="RandomList" runat="server" />
    function SetDataField(data) {
        document.getElementById('<%=RandomList.ClientID%>').value = data;
    }
    Then in the C# code I collect the list:
    protected void GetData(object sender, EventArgs e) {
        var _list = RandomList.Value;
    }
    Going back the other way, I often use either ScriptManager to register a function and pass it data during Page_Load:
    ScriptManager.RegisterStartupScript(this, this.GetType(), "Set", "Test();", true);
    or I add attributes to controls before a postback or during the initialization / pre-rendering stages:
    Btn.Attributes.Add("onclick", "DisplayMessage('Hello');");
    These methods have served me well and do the job. However, they just don't feel complete. Is there a more standardized way of passing data between client-side markup / JavaScript and backend code? I've seen some posts like this one: Injecting JavaScript : StackOverflow that describe the HtmlElement class. Is this something I should look into? Thanks everyone for your time.
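    One option the post does not mention, sketched here as a hedged illustration rather than a recommendation from the author: ASP.NET page methods let client script exchange JSON with a static method in the code-behind without hidden fields or a full postback. The method and field names below are invented for the example.
    using System.Web.Services;
    using System.Web.Script.Serialization;

    public partial class _Default : System.Web.UI.Page
    {
        // With <asp:ScriptManager EnablePageMethods="true" /> on the page, client script
        // can call PageMethods.SaveList("[\"a\",\"b\"]", onSuccess) and receive the
        // return value in the onSuccess callback, serialized as JSON.
        [WebMethod]
        public static string SaveList(string data)
        {
            var items = new JavaScriptSerializer().Deserialize<string[]>(data);
            return string.Format("Received {0} items", items.Length);
        }
    }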

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one of the questions raised was how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used to consume WAS when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
And if we publish our application to azure then it works with WASD and storage service through the configurations for cloud.   Summary In this post I demonstrated how to use Windows Azure SDK for Node.js to interact with storage service, especially the table service. I also demonstrated on how to use WACS service runtime, how to retrieve the configuration settings and the endpoint information. And in order to make the service runtime available to my Node.js application I need to create an entry point element in CSDEF file and set “node.exe” as the entry point. I used five posts to introduce and demonstrate on how to run a Node.js application on Windows platform, how to use Windows Azure Web Site and Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by Windows Azure platform through Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it’s very simple and easy to learn and deploy, as well as, it utilizes single thread non-blocking IO model, Node.js became more and more popular on web application and web service development especially for those IO sensitive projects. And as Node.js is very good at scaling-out, it’s more useful on cloud computing platform. Use Node.js on Windows platform is new, too. The modules for SQL database and Windows Azure SDK are still under development and enhancement. It doesn’t support SQL parameter in “node-sqlserver”. It does support using storage connection string to create the storage client in “azure”. But Microsoft is working on make them easier to use, working on add more features and functionalities.   PS, you can download the source code here. You can download the source code of my “Copy all always” tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Why do you hate Java? Is it the language or the framework? [closed]

    - by zneak
    According to you all, Java is the third most-hated language here. The two other most hated languages are PHP and VBScript. (It's quite funny how they stand together on the podium.) I'd like to make it known that the question mostly addresses people who don't like Java. I assume here a number of subjective opinions as facts because they're usually considered true among people who don't like Java, and I don't want to be convinced otherwise here. If you're a Java enthusiast, you might find this question frustrating. It's never been made clear if people hate Java itself, or if they hate it because of the framework, or if it's a mixture of the two. On a side you have the language, where you have: the "everything should be an object" philosophy, even in instances where it should obviously be something else (event handlers I'm pointing you); checked exceptions; the idea that all logic should be presented as methods and properties is a big no-no; the fact that "closures" created by anonymous types only include final variables and arguments, but will allow write access to any member of the parent class; a few more. On the other side, you have the JDK, with... its load of inconsistencies and overengineering; monolithic class hierarchies; meaningless base exceptions like IOException (though other frameworks have similar exception hierarchies); sluggish responsiveness even with Swing; a few more. My question is, do you think that, if either one (Java or the JDK) was taken alone, and the other was dropped in favor of something else, the new combination would be better? For instance, if you could use the C# syntax with the JDK (adapting get*/set* methods into properties, and interfaces with only one method into delegates), or the Java syntax with the .NET Framework (doing the inverse transformations), would things get better in your opinion?

    Read the article

  • Teach Your Kid to Code (&hellip;and Vote early!)

    - by Steve Michelotti
    Next Tuesday I will be at the CMAP main meeting presenting Teach Your Kid to Code. Next Tuesday is of course Election Day so you have to make sure you vote early in order to get over to CMAP for the 7:00PM presentation. I will be co-presenting this talk with my 5th grade son. Here is the abstract: Have you ever wanted a way to teach your kid to code? For that matter, have you ever wanted to simply be able to explain to your kid what you do for a living? Putting things in a context that a kid can understand is not as easy as it sounds. If you are someone curious about these concepts, this is a “can’t miss” presentation that will be co-presented by Justin Michelotti (5th grader) and his father. Bring your kid with you to CMAP for this fun and educational session. We will show tools you may not have been aware of like SmallBasic and Kodu – we’ll even throw in a little Visual Studio and Windows 8! Concepts such as variables, conditionals, loops, and functions will be covered while we introduce object oriented concepts without any of the confusing words. Kids are not required for entry! I promise this will be an entertaining presentation! We hope to see you (and your kids) there. Click here for details.

    Read the article

  • One page using querystring or many folders and pages?

    - by ClarkeyBoy
    I have an application where I have the 'core' code in one folder for which there is a virtual directory in the root, such that I can include any core files using /myApp/core/bla.asp. I then have two folders outside of this with a default.asp which currently use the querystring to define what page should be displayed. One page is for general users, the other will only be accessible to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example of this as it is now is default.asp?action=view&viewtype=list&objectid=server. I am not worried about SEO as this is an internal app and uses Windows Auth. My question is, is it better the way it is now or would it be better to have something like the following: /server/view/list/ /server/view/?id=123 /server/create/ /server/edit/?id=123 /server/remove/?id=123 In the above folders I would have a home page which defines all the variables which are currently determined by the querystring - in /server/create/ for example, I would define the action as 'create', object name as 'server' and so on. In terms of future development, I really have no idea which method would be best. I think the 2nd method would be best in terms of following what page does what but this is such a huge change to make at this stage that I would really like some opinions, preferably based on experience. PS Sorry if the tags are wrong - I am new to this forum and thought this was a bit too much of a discussion for StackOverflow as that is very much right / wrong answer based. I got the idea SE is more discussion based.

    Read the article

  • Is there a name for a testing method where you compare a set of very different designs?

    - by DVK
    "A/B testing" is defined as "a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates". The point here, of course, is to know which small single-variable changes are more optimal, with the goal of finding the local optimum. However, one can also envision a somewhat related but different scenario for testing the response rate of major re-designs: take a baseline control design, take one or more completely different designs, and run test samples on those redesigns to compare response rates. As a practical but contrived example, imagine testing a set of designs for the same website, one being minimalist "googly" design, one being cluttered "Amazony" design, and one being an artsy "designy" design (e.g. maximum use of design elements unlike Google but minimal simultaneously presented information, like Google but unlike Amazon) Is there an official name for such testing? It's definitely not A/B testing, since the main component of it (finding local optimum by testing single-variable small changes that can be attributed to response shift) is not present. This is more about trying to compare a set of local optimums, and compare to see which one works better as a global optimum. It's not a multivriable, A/B/N or any other such testing since you don't really have specific variables that can be attributed, just different designs.

    Read the article

  • Transient VO : Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use case wherein communication has to happen between regions residing under different taskflows. Essentially, they had a common set of parameters to be used. Initially we resorted to the use of pageFlowScope variables, but they are tightly coupled with the individual task flows. So, how should the communication happen? Some of the alternatives that we brainstormed are:
    1. Usage of an ADF contextual event - This is a powerful feature indeed for such use cases, but there is a considerable cost involved with it, so before resorting to it you have to make sure that you have a good enough reason to use it. It actually does a server roundtrip, and the issuing of an event and the listening part also require your attention!
    2. Use a transient VO with shared data control scope - With shared data control scope, the transient VO rows are persistent across the task flows in your application. All you have to do is create the attributes in the transient VO (preferably with the same names, for ease of conversion) and create some utility methods in the VOImpl for creating a row, updating a row and deleting a row. You also have to make sure that the VO row is initialized per HTTP request (this you can do in a bookmark method of your index.jspx, residing in adfc-config.xml), else the UI fields bound to the transient VO attributes won't render in the UI.
    Hope this helps; this should be a common use case across apps.

    Read the article

  • kubuntu 12.10 will not boot on mac 2.93Ghz intel core 2 duo

    - by Jake Sweet
    I feel like I've tried it all and nothing is changing. I've tried booting from a liveUSB and a liveDVD, and I've checked the md5 - everything matches up. I've even tried different distros, with the same result on all of them. Just for reference: Linux Mint 13 KDE and Fedora 17. I've also tried changing my liveUSB building software just in case; I've tried unetbootin and Linux USB builder. Both have the same results, so my opinion is that it is a hardware issue since I'm having nearly the same result with all of these variables. So now, what is actually happening? I can boot up to a screen. I say A screen because the ways that DVDs and USBs boot differ. On the liveUSB I'm reaching a black screen with white text. It says booting: done, then below it says loading ramdrive: done, then below that it says preparing to boot kernel, this may take a while, buckle in, or something to that effect. Then nothing. That's it; the computer freezes. I've waited up to 8 hrs and still nothing. OK, for the liveDVD: everything goes according to the instructions in the pdf files on every distro, until Linux starts. I can only run in compatibility mode. When any other option is tried, the computer seems to freeze/stall/be a pain in my butt... OK, well, that seems to wrap it up. Also, if I'm not explaining something well, I'm sorry; I can try to clear anything up. I'm not the best at descriptions. I'll leave you with the tech specs of my Mac: 2.93GHz Intel Core 2 Duo, 4 GB RAM, NVIDIA GeForce GT 120 graphics, bought in late '09, the 24" model. Let me know if any more information will help. Also, thanks in advance.

    Read the article

  • Input handling between game loops

    - by user48023
    This may be obvious and trivial for you, but as I am a newbie in programming I come with a specific question. I have three loops in my game engine: an input loop, an update loop and a render loop. The update loop is set to 10 ticks per second with a fixed timestep, the render loop is capped at around 60 fps and the input loop runs as fast as possible. I am using one of the JavaScript frameworks which provide such things, but it doesn't really matter. Let's say I am rendering a tile map, and which elements are in view depends on camera-like movement variables which are modified while keys are pressed. This is only about the camera/viewport and rendering; no game physics is involved here. And now, how can I handle input events among these loops to keep the engine's reaction consistent? Am I supposed to read the current variable modified by input, do the needed calculations in the update loop and share the result so it can be interpolated in the render loop? Or read the input's effect directly inside the render loop and put the needed calculations there? I thought interpreting user input inside an update loop with a low tick rate would be inaccurate and kind of unresponsive while rendering with interpolation in the final view. How is it done properly in games overall?
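    A common pattern, sketched generically here (in C# rather than the poster's JavaScript framework, with invented names): sample input every frame, apply it to the camera only in the fixed-rate update while remembering the previous state, and let the renderer blend between the previous and current state by the fraction of the update interval that has elapsed.
    // A generic fixed-timestep sketch with render interpolation (hypothetical names).
    class CameraLoop
    {
        const double Step = 1.0 / 10.0;        // 10 logic updates per second
        double accumulator;
        float prevCamX, camX;

        // Called once per frame by the host loop with the elapsed real time.
        public void Frame(double elapsedSeconds)
        {
            float inputDirection = ReadKeyboardDirection();  // sampled every frame
            accumulator += elapsedSeconds;
            while (accumulator >= Step)                      // fixed-rate update
            {
                prevCamX = camX;                             // remember the old state
                camX += inputDirection * 200f * (float)Step; // 200 px/s camera speed
                accumulator -= Step;
            }
            float alpha = (float)(accumulator / Step);       // 0..1 toward the next tick
            Render(prevCamX + (camX - prevCamX) * alpha);    // draw the blended camera
        }

        float ReadKeyboardDirection() { return 0f; }         // stub: -1, 0 or +1 from key state
        void Render(float cameraX) { /* draw the tile map offset by cameraX */ }
    }
    With this split, input feels responsive (it is read every frame) while the low-rate update stays deterministic, and the interpolation hides the 10 Hz steps in the final view.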

    Read the article

  • Translation and Localization Resources for UX Designers

    - by ultan o'broin
    Here is a handy list of translation and localization-related resources for user experience professionals. Following these will help you design an easily translatable user experience. Most of the references here are for web pages or software. Fundamentally, remember your designs will be consumed globally, and never divorce the design process from the development or deployment effort that goes into bringing your designs to life in code. Ask yourself today: do you know how the text you are using in your designs is delivered to the customer, even in English? Key areas that UX designers always seem to fall foul of, in my space anyway, are:
    - Terminology that is impossible to translate (jargon, multiple modifiers, gerunds) or is used inconsistently
    - Poorly written, verbose text (really, just write well in English, no special considerations)
    - String construction (concatenation of parts assembled dynamically)
    - Composite widget positioning (my favourite)
    - Hard-coded fonts, small font sizes, or character formatting or casing that doesn't work globally
    - Format that is not separate from content
    - Restricted real estate not allowing for text expansion in translation
    - Forcing formatting with breaks, and hard-coding alphabetical sorting
    - Graphics that do not work in Bi-Di languages (because they indicate directionality and can't flip) or contain embedded text. The problems of culturally offensive icons are well known by now in the enterprise applications space, though there are some dangers, such as the use of flags to indicate language, for example.
    Resources: Internationalization Techniques: Authoring HTML & CSS; Global By Design; Insert Title Here : Variables in Interface Language; Prose: Internationalisation. Doc and help considerations I can deal with later.
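    The string-construction point deserves a concrete illustration. Below is a hedged C# sketch (the keys and strings are invented, and a real app would load them from a .resx file): concatenated fragments freeze English word order and pluralization in code, while one complete sentence per resource key with a placeholder lets the translator reorder it freely.
    using System;
    using System.Collections.Generic;
    using System.Globalization;

    static class MessageText
    {
        // Stand-in for a .resx resource lookup; keys and strings are invented for illustration.
        static readonly Dictionary<string, string> Resources = new Dictionary<string, string>
        {
            { "InboxSingleMessage", "You have {0} new message." },
            { "InboxManyMessages",  "You have {0} new messages." }
        };

        static string Fragile(int count)
        {
            // Word order and plural rules are frozen in code; many languages cannot be
            // translated correctly from these fragments.
            return "You have " + count + " new " + (count == 1 ? "message" : "messages") + ".";
        }

        static string Translatable(int count)
        {
            // One complete sentence per key: the translator controls word order, and each
            // plural form has its own string.
            string pattern = Resources[count == 1 ? "InboxSingleMessage" : "InboxManyMessages"];
            return string.Format(CultureInfo.CurrentCulture, pattern, count);
        }
    }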

    Read the article

  • creating bounding box list

    - by Christian Frantz
    I'm trying to create a list of bounding boxes for each cube drawn, so I can use the boxes to intersect with a ray that my mouse position is casting, but I have no idea how. I've created a list that stores the boxes, but how am I getting the values from each box? for (int x = 0; x < mapHeight; x++) { for (int z = 0; z < mapWidth; z++) { cubes.Add(new Vector3(x, map[x, z], z), Matrix.Identity, grass); boxList.Add(something here); } } public Cube(GraphicsDevice graphicsDevice) { device = graphicsDevice; var vertices = new List<VertexPositionTexture>(); BuildFace(vertices, new Vector3(0, 0, 0), new Vector3(0, 1, 1)); BuildFace(vertices, new Vector3(0, 0, 1), new Vector3(1, 1, 1)); BuildFace(vertices, new Vector3(1, 0, 1), new Vector3(1, 1, 0)); BuildFace(vertices, new Vector3(1, 0, 0), new Vector3(0, 1, 0)); BuildFaceHorizontal(vertices, new Vector3(0, 1, 0), new Vector3(1, 1, 1)); BuildFaceHorizontal(vertices, new Vector3(0, 0, 1), new Vector3(1, 0, 0)); cubeVertexBuffer = new VertexBuffer(device, VertexPositionTexture.VertexDeclaration, vertices.Count, BufferUsage.WriteOnly); cubeVertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray()); } There aren't any clearly defined variables for the bounds of each cube created, so where do I create the bounding box from?
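    Since each cube occupies one cell of the grid, a box can be built from the same position used to create the cube. A minimal sketch assuming XNA's BoundingBox and the loop shown above (the 1x1x1 cube size is an assumption):
    // Inside the existing loop, right after cubes.Add(...): one axis-aligned box per cube.
    Vector3 min = new Vector3(x, map[x, z], z);
    Vector3 max = min + Vector3.One;                  // assumes each cube is 1x1x1
    boxList.Add(new BoundingBox(min, max));

    // Later, picking with the mouse ray:
    // foreach (BoundingBox box in boxList)
    // {
    //     float? hit = ray.Intersects(box);          // null means no intersection
    //     if (hit.HasValue) { /* this cube is under the cursor */ }
    // }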

    Read the article

  • Affect movieclip scale from a .as doc to another

    - by Madcowe
    I've been working on a game following a tutorial on the internet, the game is an avoider where you have the Avatar, that has to avoid the objects that fall. The way it is made is: I have a DocumentClass which addChild's the screen you should be seeing and removeChild's the screen that you were. For example: first it loads the menuScreen, then when you press play unloads menu and loads playscreen. When you die it loads the gameoverScreen and loads the playscreen. And from the gameOverScreen you can press the SHOP button to go to the shop. From here on I'm on my own and not following any tutorials. The shop has a button that is supposed to alter the Avatar's X and Y scale to 0.5, but the problem is: how do I make that work? I tried creating a sharedObject.data.avatarSize, on the store's size button the code would be something like: sharedObject.data.avatarSize *= 0.5; And on the AvoiderGame.as, which is the most of the actual game, on the part where the avatar is created I tried putting this after it's creation: scaleX.avatar = sharedObject.data.avatarSize; scaleY.avatar = sharedObject.data.avatarSize; This did not work since it gives me the error 1009 saying can't access something that is null. I tried this before "using" the sharedObject: if( sharedObject.data.avatarSize == null ) { sharedObject.data.avatarSize = 1; } But it did not work... So now I'm not sure on what to do. I know we should reduce global variables as much as we can but how do I do it? Also, if it helps, I'm using Flash CS5 and working with AS3.0

    Read the article

  • Bad style programming, am I pretending too much?

    - by Luca
    I've realized that I work in an office with a quite bad code base. The base library, implemented over years and years, is quite limited, and most of that code is, honestly, horrible. Projects developed in the office are very large. Fine. I could call myself a "perfectionist" (but often I'm not), and I thought about refactoring an application (really a portion of one) which needs a new (complex) feature. But today I realized that it's not possible to refactor those application modules in a reasonable time (say, 24/26 hours, compared with the available time for the task, which is 160 hours). I'm talking about (I am a bit ashamed to say) name collisions, large and frequent cut & paste code, horrible and misleading naming, makefiles without dependencies (!), application logic spread randomly across many different sources, dead code, variable aliasing, no assertions, no documentation, very long source files, bad/incomplete include file definitions, (this is emblematic!) very frequent extern declarations of variables and functions, ... and I could continue ... buffer overflows because of sprintf, indentation (!), spacing, non-existent const modifier usage. I would say that every source line was written quite randomly when needed, without keeping any design in mind (at least, not an obvious one). (Am I in hell?) The problem arises because the application was developed by a colleague of mine. I felt very frustrated, so I decided to raise the "situation" with my colleague; in the end, that was a bad idea. He justified himself by saying that "the application was developed in haste, so it is natural that it is written vaguely; you are wasting time thinking about and implementing an elegant implementation"... Am I asking too much of my colleague to write readable code that is maintained and documented? Do I expect too much in not wanting to read thousands of lines of code to understand how a particular piece of logic works?

    Read the article

  • After upgrading to 12.04 from 10.10 my mythbuntu standard MCEUSB remote no longer works

    - by keepitsimpleengineer
    I had no problems using my Windows Media Center Remote with 10.10 Mythbuntu, but after upgrading, it no longer affects Mythbuntu. I have verified and re-installed it in Mythbuntu Control Centre. I have used irw to verify the ir buttons actions are properly received by the HTPC. How do I go about fixing this? 3.2.0-26-generic (#41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012) Xorg version: 1.11.3 (16 July 2012 08:06:31PM) GCC: 4.6 (x86_64-linux-gnu) Current updates as of 2012?07?21 $cat /etc/lirc/hardware.con #Chosen Remote Control REMOTE="Windows Media Center Transceivers/Remotes (all)" REMOTE_MODULES="lirc_dev mceusb" REMOTE_DRIVER="" REMOTE_DEVICE="/dev/lirc0" REMOTE_SOCKET="" REMOTE_LIRCD_CONF="mceusb/lircd.conf.mceusb" REMOTE_LIRCD_ARGS="" #Chosen IR Transmitter TRANSMITTER="None" TRANSMITTER_MODULES="" TRANSMITTER_DRIVER="" TRANSMITTER_DEVICE="" TRANSMITTER_SOCKET="" TRANSMITTER_LIRCD_CONF="" TRANSMITTER_LIRCD_ARGS="" #Enable lircd START_LIRCD="true" #Don't start lircmd even if there seems to be a good config file #START_LIRCMD="false" #Try to load appropriate kernel modules LOAD_MODULES="true" # Default configuration files for your hardware if any LIRCMD_CONF="" #Forcing noninteractive reconfiguration #If lirc is to be reconfigured by an external application #that doesn't have a debconf frontend available, the noninteractive #frontend can be invoked and set to parse REMOTE and TRANSMITTER #It will then populate all other variables without any user input #If you would like to configure lirc via standard methods, be sure #to leave this set to "false" FORCE_NONINTERACTIVE_RECONFIGURATION="false" START_LIRCMD="" # lsusb | grep -i infrared Bus 003 Device 002: ID 0471:0815 Philips (or NXP) eHome Infrared Receiver

    Read the article

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion in regard to general ETL best practises the subject of checkpoint files as a means for package restartability came up and I stated that I was dead against using them. For anyone that may care, here is why: Configuring them is distinctly unintuitive (that's a matter of opinion but if you follow the link I'll wager that you will agree) they don't make any allowance for loop iterations they cannot store variables of type Object they are limited in ability. There are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file but the current usage model does not allow for this. they are ignored by eventhandlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios they dont work properly I'll expand on the last bullet point. I have encountered situations where the behaviour for tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file and sometimes it won't. This is near-impossible to reproduce but it does happen as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet

    Read the article

  • CodePlex Daily Summary for Wednesday, September 12, 2012

    CodePlex Daily Summary for Wednesday, September 12, 2012Popular ReleasesActive Forums for DotNetNuke CMS: Active Forums 05.00.00 RC2: Active Forums 05.00.00 RC2SSIS Compressed File Source and Destination Components: Compressed File Souce and Destination Components: Initial Beta ReleaseArduino for Visual Studio: Arduino 1.x for Visual Studio 2012, 2010 and 2008: Register for the visualmicro.com forum for more news and updates Version 1209.10 includes support for VS2012 and minor fixes for the Arduino debugger beta test team. Version 1208.19 is considered stable for visual studio 2010 and 2008. If you are upgrading from an older release of Visual Micro and encounter a problem then uninstall "Visual Micro for Arduino" using "Control Panel>Add and Remove Programs" and then run the install again. Key Features of 1209.10 Support for Visual Studio 2...Bookmark Collector: 01.01.00: This release has the follow new features and updates: Enhanced the ContentItem integration Changed the format of how ContentItem content is saved Implemented core JSON methods from the API Fully documented the source code Please Note: This module was originally written as a proof of concept for how to create a simple module using the Christoc module templates, and using the ContentItems API instead of a DAL. Minimum Requirements DotNetNuke v06.02.03 or newer .Net Framework v3.5 SP1...Microsoft Script Explorer for Windows PowerShell: Script Explorer Reference Implementation(s): This download contains Source Code and Documentation for Script Explorer DB Reference Implementation. You can create your own provider and use it in Script Explorer. Refer to the documentation for more information. The source code is provided "as is" without any warranty. Read the Readme.txt file in the SourceCode.Social Network Importer for NodeXL: SocialNetImporter(v.1.5): This new version includes: - Fixed the "resource limit" bug caused by Facebook - Bug fixes To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns") Open NodeXL template and you can access the new importer from the "Import" menuAcDown????? - AcDown Downloader Framework: AcDown????? v4.1: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??...Move Mouse: Move Mouse 2.5.2: FIXED - Minor fixes and improvements.MVC Controls Toolkit: Mvc Controls Toolkit 2.3: Added The new release is compatible with Mvc4 RTM. Support for handling Time Zones in dates. Specifically added helper methods to convert to UTC or local time all DateTimes contained in a model received by a controller, and helper methods to handle date only fileds. This together with a detailed documentation on how TimeZones are handled in all situations by the Asp.net Mvc framework, will contribute to mitigate the nightmare of dates and timezones. 
Multiple Templates, and more options to...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.00: Maintenance Release Changes on Metro7 06.02.00 Fixed width and height on the jQuery popup for the Editor. Navigation Provider changed to DDR menu Added menu files and scripts Changed skins to Doctype HTML Changed manifest to dnn6 manifest file Changed License to HTML view Fixed issue on Metro7/PinkTitle.ascx with double registering of the Actions Changed source folder structure and start folder, so the project works with the default DNN structure on developing Added VS 20...Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.9.0: Release Notes Imporved framework architecture Improved the framework security More import/export formats and operations New WebPortal application which includes forum, new, blog, catalog, etc. UIs Improved WebAdmin app. Reports, navigation and search Perfomance optimization Improve Xenta.Catalog domain More plugin interfaces and plugin implementations Refactoring Windows Azure support and much more... Package Guide Source Code - package contains the source code Binaries...Json.NET: Json.NET 4.5 Release 9: New feature - Added JsonValueConverter New feature - Set a property's DefaultValueHandling to Ignore when EmitDefaultValue from DataMemberAttribute is false Fix - Fixed DefaultValueHandling.Ignore not igoring default values of non-nullable properties Fix - Fixed DefaultValueHandling.Populate error with non-nullable properties Fix - Fixed error when writing JSON for a JProperty with no value Fix - Fixed error when calling ToList on empty JObjects and JArrays Fix - Fixed losing deci...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.66: Just going to bite the bullet and rip off the band-aid... SEMI-BREAKING CHANGE! Well, it's a BREAKING change to those who already adjusted their projects to use the previous breaking change's ill-conceived renamed DLLs (versions 4.61-4.65). For those who had not adapted and were still stuck in this-doesn't-work-please-fix-me mode, this is more like a fixing change. The previous breaking change just broke too many people, I'm sorry to say. Renaming the DLL from AjaxMin.dll to AjaxMinLibrary.dl...DotNetNuke® Community Edition CMS: 07.00.00 CTP (Not for Production Use): NOTE: New Minimum Requirementshttp://www.dotnetnuke.com/Portals/25/Blog/Files/1/3418/Windows-Live-Writer-1426fd8a58ef_902C-MinimumVersionSupport_2.png Simplified InstallerThe first thing you will notice is that the installer has been updated. Not only have we updated the look and feel, but we also simplified the overall install process. You shouldn’t have to click through a series of screens in order to just get your website running. With the 7.0 installer we have taken an approach that a...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.2: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. 
You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BIDS Helper: BIDS Helper 1.6.1: In addition to fixing a number of bugs that beta testers reported, this release includes the following new features for Tabular models in SQL 2012: New Features: Tabular Display Folders Tabular Translations Editor Tabular Sync Descriptions Fixed Issues: Biml issues 32849 fixing bug in Tabular Actions Editor Form where you type in an invalid action name which is a reserved word like CON or which is a duplicate name to another action 32695 - fixing bug in SSAS Sync Descriptions whe...Code Snippets for Windows Store Apps: Code Snippets for Windows Store Apps: First release of our snippets! For more information: Installation List of SnippetsUmbraco CMS: Umbraco 4.9.0: Whats newThe media section has been overhauled to support HTML5 uploads, just drag and drop files in, even multiple files are supported on any HTML5 capable browser. The folder content overview is also much improved allowing you to filter it and perform common actions on your media items. The Rich Text Editor’s “Media” button now uses an embedder based on the open oEmbed standard (if you’re upgrading, enable the media button in the Rich Text Editor datatype settings and set TidyEditorConten...WordMat: WordMat v. 1.02: This version was used for the 2012 exam.menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.New Projects[ITFA GROUP] CODE GENER: Code Gener is a tool to help programmers and system builders in building applications. ANPR MX: ANPR MX is a simple Automatic Plate Recognition (ANPR) library for the North American average plate size based on C# and OpenCV. BatteryStatus: show battery level on status bar on an android deviceCode Snippets for Windows Store Apps: Code Snippets for Windows Store apps is a collection of around 60 IntelliSense Code Snippets for Visual Basic, C#, C++, and JavaScript developers.Cube2d: cube2dDiscover_U_Server: Discover_U_ServerExeLauncher: Make PATH recursiveExpression Evaluator + aggregate functions support: Simple library for expressions evaluation that supports variables in expression and aggregative functions to parse and evaluate expression against tabular data.FancyGrid: A custom datagrid for WPF with support for real-time filtering, multisort, and themes. Compatible with MVVM and normal WPF binding.langben: ??????????????,?????????????,??,???????,?????,???????,?????????????????????????。 ???? •????????(SOA) •????????????????? •?????????????? 
•??IE 6、IE 8+?FirefMakeTracks Gadgeteer GPS Module Driver: This project is the driver for the Gadgeteer compatible MakeTracks GPS module developed by Eric Hall and other members of the tinyclr.com community.MyCloud: heheOVS: OVS est un projet d'analyse et de traitement de signaux Vidéo sur IP avec remontées d'informations sur consultables sur des terminaux mobilesPMS: Project Management System for HSUScenario4: testSharePoint 2010 Syntax Highlighting: This project allows users to apply syntax highlighting to code snippits via the SharePoint 2010 Ribbon UI.SharePoint CRM: CRM/Project Management Site template for both SharePoint 2010 Enterprise and Office 365 Enterprise tennantsSharePoint PowerShell Wizard: The SharePoint PowerShell Wizards provides a tool to help generate and support some of the PowerShell scripts needed to recreate aspects of your farm.Shindo's Race System: Shindo's Race System is a plugin for SA:MP Dedicated Server 0.3e.Test MVC application: Test project - please ignore!weber: Weber is a private browser, it tries to prevent the user from being tracked by the advertisers and traffic monitoring sites. Hence it is severely impaired. Try!xuebin: web site

    Read the article

  • Entity component system -> handling components that depend on one another

    - by jtedit
    I really like the idea of an entity component system and feel it has great flexibility, but have a question. How should dependent components be handled? I'm not talking about how components should communicate with other components they depend on, I have that sorted, but rather how to ensure components are present. For example, an entity cannot have a "velocity" component if it doesn't have a "position" component, in the same way it can't have an "acceleration" component if it doesn't have a "velocity" component. My first idea was that every component class overrides an "onAddedToEntity(Entity ent)" function. Then in that function it checks that prerequisite components are also added to the entity, eg:

        struct EntCompVelocity : public EntityComponent {
            // member variables here
            void onAddedToEntity(Entity ent){
                if(!ent.hasComponent(EntCompPosition::Id)){
                    ent.addComponent(new EntCompPosition());
                }
            }
        };

    This has the nice property that if the acceleration component adds the velocity component, the velocity component will itself add the position component to the entity, so dependency "trees" will sort themselves out. However, my concern is that if I do this, components will silently be added with default values and, in the example of adding position, many entities will appear at the origin. Another idea was to simply have the "Entity.addComponent()" function return false if the component's prerequisite components aren't already on the entity (sketched below); this would force you to manually add the position component and set its value before adding the velocity component. Finally, I could simply not ensure that a component's prerequisite components are added; the "UpdatePosition" system only deals with entities that have both a position and a velocity component, so adding a velocity component without a position component won't be a problem (it won't cause crashes due to null pointers etc.), but it does mean entities will carry useless unused data if you add components but not their prerequisite components. Does anyone have experience with this problem and/or any of these methods to solve it? How did you solve the problem?
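    A rough sketch of the second idea follows (this is not from any particular engine; Entity, EntityComponent and the prerequisite list are hypothetical names used only for illustration). Each component type exposes the IDs it depends on, and addComponent refuses the add when one of them is missing:

        #include <memory>
        #include <set>
        #include <vector>

        struct EntityComponent {
            virtual ~EntityComponent() = default;
            virtual int id() const = 0;
            // IDs of components that must already be present on the entity.
            virtual std::vector<int> prerequisites() const { return {}; }
        };

        class Entity {
        public:
            bool hasComponent(int id) const { return ids.count(id) != 0; }

            // Returns false (and adds nothing) if any prerequisite is missing,
            // forcing the caller to add and initialise prerequisites first.
            bool addComponent(std::unique_ptr<EntityComponent> comp) {
                for (int req : comp->prerequisites())
                    if (!hasComponent(req)) return false;
                ids.insert(comp->id());
                components.push_back(std::move(comp));
                return true;
            }

        private:
            std::set<int> ids;
            std::vector<std::unique_ptr<EntityComponent>> components;
        };

        struct EntCompPosition : EntityComponent {
            static constexpr int Id = 1;
            int id() const override { return Id; }
            float x = 0, y = 0;
        };

        struct EntCompVelocity : EntityComponent {
            static constexpr int Id = 2;
            int id() const override { return Id; }
            std::vector<int> prerequisites() const override { return { EntCompPosition::Id }; }
            float dx = 0, dy = 0;
        };

    With something like this, addComponent(std::make_unique<EntCompVelocity>()) keeps returning false until a position component has been added and given a meaningful value, which avoids entities silently appearing at the origin.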

    Read the article

  • Implementing a post-notification function to perform custom validation

    - by Alejandro Sosa
    Introduction
    The Oracle Workflow Notification System can be extended to perform extra validation or processing via PL/SQL procedures when a notification is being responded to. These PL/SQL procedures are called post-notification functions since they are executed after a notification action such as Approve, Reject, Reassign or Request Information is performed. The standard signature for the post-notification function is:

        procedure <procedure_name> (itemtype  in varchar2,
                                    itemkey   in varchar2,
                                    actid     in varchar2,
                                    funcmode  in varchar2,
                                    resultout in out nocopy varchar2);

    Modes
    The post-notification function receives the parameter 'funcmode', which will have the following values:
    - 'RESPOND', 'VALIDATE' and 'RUN' when a notification is responded to (Approve, Reject, etc.)
    - 'FORWARD' for a notification being forwarded to another user
    - 'TRANSFER' for a notification being transferred to another user
    - 'QUESTION' for a request for more information from one user to another
    - 'ANSWER' for a response to a request for more information
    - 'TIMEOUT' for a timed-out notification
    - 'CANCEL' when the notification is being re-executed in a loop

    Context Variables
    Oracle Workflow makes context information about the notification currently being acted upon available to the post-notification function:
    - WF_ENGINE.context_nid - the notification ID
    - WF_ENGINE.context_new_role - the new role to which the action on the notification is directed
    - WF_ENGINE.context_user_comment - comments appended to the notification
    - WF_ENGINE.context_user - the user who is responsible for taking the action that updated the notification's state
    - WF_ENGINE.context_recipient_role - the role currently designated as the recipient of the notification. This value may be the same as the value of WF_ENGINE.context_user, or it may be a group role of which the context user is a member.
    - WF_ENGINE.context_original_recipient - the role that has ownership of and responsibility for the notification. This value may differ from WF_ENGINE.context_recipient_role if the notification has previously been reassigned.

    Example
    Let us assume there is an EBS transaction that can only be approved by certain people, so any attempt to transfer or delegate such a notification should be allowed only to users SPIERSON or CBAKER. The way to implement this functionality would be as follows:
    1. Edit the corresponding workflow definition in Workflow Builder and open the notification.
    2. In the Function Name field, enter the name of the procedure where the custom code is handled, for instance TEST_PACKAGE.Post_Notification.
    3. In PL/SQL, create the corresponding package TEST_PACKAGE with a procedure named Post_Notification, as follows:

        procedure Post_Notification (itemtype  in varchar2,
                                     itemkey   in varchar2,
                                     actid     in varchar2,
                                     funcmode  in varchar2,
                                     resultout in out nocopy varchar2) is
          l_count number;
        begin
          if funcmode in ('TRANSFER','FORWARD') then
            select count(1) into l_count
            from WF_ROLES
            where WF_ENGINE.context_new_role in ('SPIERSON','CBAKER');
            --and/or any other conditions
            if l_count < 1 then
              WF_CORE.TOKEN('ROLE', WF_ENGINE.context_new_role);
              WF_CORE.RAISE('WFNTF_TRANSFER_FAIL');
            end if;
          end if;
        end Post_Notification;

    4. Launch the workflow process with the changed notification and attempt to reassign or transfer it. When trying to reassign the notification to user CBROWN, the transfer is blocked and the WFNTF_TRANSFER_FAIL error raised above is displayed.
    Check the Workflow API Reference Guide, section Post-Notification Functions, to see all the standard, seeded WF_ENGINE variables available for extending notification processing.

    Read the article
