Search Results

Search found 30912 results on 1237 pages for 'load path'.


  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features, so in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS Node.js application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as the module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool, mentioned in my last post, to update the worker role project file; you can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here.

    Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function that copies the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure that we start with an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. Insert each record loaded from WASD into the table, one by one.
    5. Notify the user when finished.

    In order to use the table service we need the storage account name and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the "azure" module. I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    Once we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I created previously.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern; in fact, almost all modules in Node.js use this pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be executed once the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in a POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we placed the table creation code after the deletion call, the two operations would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I sent the response to the browser. Can you find the bug in the code below? I will describe it later in this post.
        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                            for (var i = 0; i < results.rows.length; i++) {
                                                var entity = {
                                                    "PartitionKey": results.rows[i][1],
                                                    "RowKey": results.rows[i][0],
                                                    "Value": results.rows[i][2]
                                                };
                                                client.insertEntity(tableName, entity, function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("entity inserted");
                                                    }
                                                });
                                            }
                                            // send the response
                                            console.log("all done");
                                            res.send(200, "All done!");
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Now we could publish it to the cloud and give it a try, but normally we'd better test it in the local emulator first. The Node.js SDK has three built-in properties which provide the account name, key and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to use my local database. The code changes as shown below.

        // windows azure sql database
        //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;";
        // sql server
        var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        // windows azure storage
        //var client = azure.createTableService(storageAccountName, storageAccountKey);
        // local storage emulator
        var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

    Now let's run the application and navigate to "localhost:12345/was/init" (I hosted it on port 12345). We can see that it transformed the data from my local database to the local table service. Everything looks fine, but there is a bug in my code. If we have a look at the Node.js command window we will find that the response was sent before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records, the table service insert calls executed in parallel, and the operation of sending the response also executed in parallel, even though I wrote it at the end of my logic.
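    To see that pitfall in isolation, here is a minimal sketch (my illustration, not code from the post) where setTimeout stands in for any asynchronous call such as insertEntity; the final line prints before any of the callbacks run:

        // Minimal sketch of the pitfall: the loop only *schedules* the async
        // work, so "all done" is printed before any "inserted" message.
        for (var i = 0; i < 3; i++) {
            setTimeout(function () {
                console.log("inserted");
            }, 100);
        }
        console.log("all done"); // executes first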
    The correct logic should be: only when all entities have been copied to the table service without error do I send the response to the browser; otherwise I send the error message. To do so I need to import another module named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform for each item in the array. And the third argument will be invoked once all items have been processed or any error has occurred; here we can send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally and we can see that the response is now sent after all entities have been inserted.

    Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria; see, for example, the code here. In the code below I queried an entity by partition key and row key, and returned the proper localization value in the response.

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud, we have to change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you will know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. That means we can define these values in the CSCFG and CSDEF files and let the runtime retrieve the proper values for us. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for both cloud and local environments. And we can also use the endpoint defined in the role environment in our Node.js application.

    In the Node.js SDK we can get an object from "azure.RoleEnvironment" that provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    This way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we listen on the port retrieved from the SDK as well.

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js runs inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project and add a new element named Runtime, with a child element named EntryPoint that specifies the Node.js command line. That way the Node.js application will have the connection to the service runtime and will be able to read the configurations. Start Node.js in the local emulator and we can see it retrieve the local connection string and storage account.
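    For reference, the CSDEF change described above might look like the sketch below; the role name and the exact command line are assumptions, not values taken from the original project:

        <!-- ServiceDefinition.csdef (sketch): make node.exe the role's entry
             point so the named pipe to the service runtime belongs to the
             Node.js process instead of the worker role class -->
        <WorkerRole name="WorkerRole1" vmsize="Small">
          <Runtime>
            <EntryPoint>
              <ProgramEntryPoint commandLine="node.exe .\index.js" setReadyOnProcessStart="true" />
            </EntryPoint>
          </Runtime>
        </WorkerRole>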
    And if we publish our application to Azure, it works against WASD and the storage service through the cloud configurations.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and endpoint information, and showed that in order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

    I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js.

    Node.js is a very new and young network application platform. But since it is simple, easy to learn and deploy, and uses a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not support SQL parameters yet, and "azure" does not yet support creating the storage client from a storage connection string. But Microsoft is working on making them easier to use and on adding more features and functionality.

    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Connecting tomcat6 to apache2

    - by StudentKen
    Disclaimer: not a server admin.

    I've been scratching my head over this for weeks now (not consistently, mind you, as that would be maddening). I've been trying to connect my Apache2 server to my Tomcat server so that if someone hits a *.jsp file or any servlet while navigating my web directory, the request is handed over to Tomcat. I have both Apache 2.0 (port 9099) and Tomcat 6 (9089) running on Debian lenny on the same box.

    Currently, mod_jk is enabled with mod_jk.conf in $apacheHOME/mods-enabled/ with this content:

        # Where to find workers.properties
        JkWorkersFile /etc/apache2/workers.properties
        # Where to put jk shared memory
        JkShmFile /var/log/at_jk/mod_jk.shm
        # Where to put jk logs
        JkLogFile /var/log/at_jk/mod_jk.log
        # Set the jk log level [debug/error/info]
        JkLogLevel info
        # Select the timestamp log format
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        # Send servlet for context /examples to worker named worker1
        JkMount /*/servlet/* worker1
        # Send JSPs for context /examples to worker named worker1
        JkMount /*.jsp worker1

    My workers.properties, located in $apacheHOME/, contains:

        workers.tomcat_home=/var/lib/tomcat6
        workers.java_home=/usr/lib/jdk1.6.0_23/db/
        worker.list=worker1
        ps=/
        worker.worker1.port=9071
        worker.worker1.host=localhost
        worker.worker1.type=ajp13

    My web.xml in $tomcatHOME/conf has the following servlets enabled:

        <servlet>
            <servlet-name>default</servlet-name>
            <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
            <init-param>
                <param-name>debug</param-name>
                <param-value>0</param-value>
            </init-param>
            <init-param>
                <param-name>listings</param-name>
                <param-value>false</param-value>
            </init-param>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet>
            <servlet-name>jsp</servlet-name>
            <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
            <init-param>
                <param-name>fork</param-name>
                <param-value>false</param-value>
            </init-param>
            <init-param>
                <param-name>xpoweredBy</param-name>
                <param-value>false</param-value>
            </init-param>
            <load-on-startup>3</load-on-startup>
        </servlet>
        <servlet-mapping>
            <servlet-name>jsp</servlet-name>
            <url-pattern>*.jsp</url-pattern>
        </servlet-mapping>
        <session-config>
            <session-timeout>30</session-timeout>
        </session-config>

    From what I can tell there's no funny business, as the apache2, tomcat, and mod_jk logs all show green; yet whenever I navigate to a JSP it simply displays the JavaScript. I'm unsure what the problem is exactly, despite poring over the logs and documentation for aid. I'm quite a greenhorn in the servlet world.
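    For context (an editorial note, not part of the original question): mod_jk talks to Tomcat over the AJP protocol, so worker.worker1.port must point at an AJP connector that is actually enabled in Tomcat's server.xml. A minimal sketch of such a connector, with the port chosen to match the workers.properties above, would be:

        <!-- server.xml (sketch): the AJP connector mod_jk connects to;
             its port must equal worker.worker1.port in workers.properties -->
        <Connector port="9071" protocol="AJP/1.3" redirectPort="8443" />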


  • Creating a new instance, C#

    - by Dave Voyles
    This sounds like a very n00b question, but bear with me here: I'm trying to access the position of my bat (paddle) in my pong game and use it in my ball class. I'm doing this because I want a particle effect to go off at the point of contact where the ball hits the bat. Each time the ball hits the bat, I receive an error stating that I haven't created an instance of the bat. I understand that I have to (or can use a static class), but I'm not sure how to do so in this example. I've included both my Bat and Ball classes.

        namespace Pong
        {
            #region Using Statements
            using System;
            using System.Collections.Generic;
            using Microsoft.Xna.Framework;
            using Microsoft.Xna.Framework.Audio;
            using Microsoft.Xna.Framework.Content;
            using Microsoft.Xna.Framework.Graphics;
            using Microsoft.Xna.Framework.Input;
            #endregion

            public class Ball
            {
                #region Fields
                private readonly Random rand;
                private readonly Texture2D texture;
                private readonly SoundEffect warp;
                private double direction;
                private bool isVisible;
                private float moveSpeed;
                private Vector2 position;
                private Vector2 resetPos;
                private Rectangle size;
                private float speed;
                private bool isResetting;
                private bool collided;
                private Vector2 oldPos;
                private ParticleEngine particleEngine;
                private ContentManager contentManager;
                private SpriteBatch spriteBatch;
                private bool hasHitBat;
                private AIBat aiBat;
                private Bat bat;
                #endregion

                #region Constructors and Destructors

                /// <summary>Constructor for the ball</summary>
                public Ball(ContentManager contentManager, Vector2 ScreenSize)
                {
                    moveSpeed = 15f;
                    speed = 0;
                    texture = contentManager.Load<Texture2D>(@"gfx/balls/redBall");
                    direction = 0;
                    size = new Rectangle(0, 0, texture.Width, texture.Height);
                    resetPos = new Vector2(ScreenSize.X / 2, ScreenSize.Y / 2);
                    position = resetPos;
                    rand = new Random();
                    isVisible = true;
                    hasHitBat = false;

                    // Everything to do with particles
                    List<Texture2D> textures = new List<Texture2D>();
                    textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/circle"));
                    textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/star"));
                    textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/diamond"));
                    particleEngine = new ParticleEngine(textures, new Vector2());
                }

                #endregion

                #region Public Methods and Operators

                /// <summary>Checks for the collision between the bat and the ball. Sends the ball in the appropriate direction.</summary>
                public void BatHit(int block)
                {
                    if (direction > Math.PI * 1.5f || direction < Math.PI * 0.5f)
                    {
                        hasHitBat = true;
                        particleEngine.EmitterLocation = new Vector2(aiBat.Position.X, aiBat.Position.Y);
                        switch (block)
                        {
                            case 1: direction = MathHelper.ToRadians(200); break;
                            case 2: direction = MathHelper.ToRadians(195); break;
                            case 3: direction = MathHelper.ToRadians(180); break;
                            case 4: direction = MathHelper.ToRadians(180); break;
                            case 5: direction = MathHelper.ToRadians(165); break;
                        }
                    }
                    else
                    {
                        hasHitBat = true;
                        particleEngine.EmitterLocation = new Vector2(bat.Position.X, bat.Position.Y);
                        switch (block)
                        {
                            case 1: direction = MathHelper.ToRadians(310); break;
                            case 2: direction = MathHelper.ToRadians(345); break;
                            case 3: direction = MathHelper.ToRadians(0); break;
                            case 4: direction = MathHelper.ToRadians(15); break;
                            case 5: direction = MathHelper.ToRadians(50); break;
                        }
                    }
                    if (rand.Next(2) == 0)
                    {
                        direction += MathHelper.ToRadians(rand.Next(3));
                    }
                    else
                    {
                        direction -= MathHelper.ToRadians(rand.Next(3));
                    }
                    AudioManager.Instance.PlaySoundEffect("hit");
                }

                /// <summary>JEP - added method to slow down ball after powerup deactivates</summary>
                public void DecreaseSpeed() { moveSpeed -= 0.6f; }

                /// <summary>Draws the ball on the screen</summary>
                public void Draw(SpriteBatch spriteBatch)
                {
                    if (isVisible)
                    {
                        spriteBatch.Begin();
                        spriteBatch.Draw(texture, size, Color.White);
                        spriteBatch.End();
                        // Draws sprites for particles when contact is made
                        particleEngine.Draw(spriteBatch);
                    }
                }

                /// <summary>Checks for the current direction of the ball</summary>
                public double GetDirection() { return direction; }

                /// <summary>Checks for the current position of the ball</summary>
                public Vector2 GetPosition() { return position; }

                /// <summary>Checks for the current size of the ball (for the powerups)</summary>
                public Rectangle GetSize() { return size; }

                /// <summary>Grows the size of the ball when the GrowBall powerup is used.</summary>
                public void GrowBall() { size = new Rectangle(0, 0, texture.Width * 2, texture.Height * 2); }

                /// <summary>Was used to increase the speed of the ball after each point is scored. No longer used, but am considering implementing again.</summary>
                public void IncreaseSpeed() { moveSpeed += 0.6f; }

                /// <summary>Check for the ball to return to normal size after the powerup has expired</summary>
                public void NormalBallSize() { size = new Rectangle(0, 0, texture.Width, texture.Height); }

                /// <summary>Check for the ball to return to normal speed after the powerup has expired</summary>
                public void NormalSpeed() { moveSpeed += 15f; }

                /// <summary>Checks to see if ball went out of bounds, and triggers warp sfx</summary>
                public void OutOfBounds()
                {
                    // Checks if the player is still alive or not
                    if (isResetting)
                    {
                        AudioManager.Instance.PlaySoundEffect("warp");
                        {
                            // Used to stop the issue where the ball hit sfx kept going off when detecting collision
                            isResetting = false;
                            AudioManager.Instance.Dispose();
                        }
                    }
                }

                /// <summary>Speed for the ball when Speedball powerup is activated</summary>
                public void PowerupSpeed() { moveSpeed += 20.0f; }

                /// <summary>Check for where to reset the ball after each point is scored</summary>
                public void Reset(bool left)
                {
                    if (left)
                    {
                        direction = 0;
                    }
                    else
                    {
                        direction = Math.PI;
                    }

                    // Used to stop the issue where the ball hit sfx kept going off when detecting collision
                    isResetting = true;
                    position = resetPos; // Resets the ball to the center of the screen
                    isVisible = true;
                    speed = 15f; // Returns the ball back to the default speed, in case the speedBall was active
                    if (rand.Next(2) == 0)
                    {
                        direction += MathHelper.ToRadians(rand.Next(30));
                    }
                    else
                    {
                        direction -= MathHelper.ToRadians(rand.Next(30));
                    }
                }

                /// <summary>Shrinks the ball when the ShrinkBall powerup is activated</summary>
                public void ShrinkBall() { size = new Rectangle(0, 0, texture.Width / 2, texture.Height / 2); }

                /// <summary>Stops the ball each time it is reset. Ex: between points / rounds</summary>
                public void Stop()
                {
                    isVisible = true;
                    speed = 0;
                }

                /// <summary>Updates position of the ball</summary>
                public void UpdatePosition()
                {
                    size.X = (int)position.X;
                    size.Y = (int)position.Y;
                    oldPos.X = position.X;
                    oldPos.Y = position.Y;
                    position.X += speed * (float)Math.Cos(direction);
                    position.Y += speed * (float)Math.Sin(direction);
                    bool collided = CheckWallHit();
                    particleEngine.Update();

                    // Stops the issue where ball was oscillating on the ceiling or floor
                    if (collided)
                    {
                        position.X = oldPos.X + speed * (float)Math.Cos(direction);
                        position.Y = oldPos.Y + speed * (float)Math.Sin(direction);
                    }
                }

                #endregion

                #region Methods

                /// <summary>Checks for collision with the ceiling or floor. 2*Math.PI = 360 degrees</summary>
                private bool CheckWallHit()
                {
                    while (direction > 2 * Math.PI)
                    {
                        direction -= 2 * Math.PI;
                        return true;
                    }
                    while (direction < 0)
                    {
                        direction += 2 * Math.PI;
                        return true;
                    }
                    if (position.Y <= 0 || (position.Y > resetPos.Y * 2 - size.Height))
                    {
                        direction = 2 * Math.PI - direction;
                        return true;
                    }
                    return true;
                }

                #endregion
            }
        }

        namespace Pong
        {
            using Microsoft.Xna.Framework;
            using Microsoft.Xna.Framework.Content;
            using Microsoft.Xna.Framework.Graphics;
            using System;

            public class Bat
            {
                public Vector2 Position;
                public float moveSpeed;
                public Rectangle size;
                private int points;
                private int yHeight;
                private Texture2D leftBat;
                public float turbo;
                public float recharge;
                public float interval;
                public bool isTurbo;

                /// <summary>Constructor for the bat</summary>
                public Bat(ContentManager contentManager, Vector2 screenSize, bool side)
                {
                    moveSpeed = 7f;
                    turbo = 15f;
                    recharge = 100f;
                    points = 0;
                    interval = 5f;
                    leftBat = contentManager.Load<Texture2D>(@"gfx/bats/batGrey");
                    size = new Rectangle(0, 0, leftBat.Width, leftBat.Height);

                    // True means left bat, false means right bat.
                    if (side)
                        Position = new Vector2(30, screenSize.Y / 2 - size.Height / 2);
                    else
                        Position = new Vector2(screenSize.X - 30, screenSize.Y / 2 - size.Height / 2);
                    yHeight = (int)screenSize.Y;
                }

                public void IncreaseSpeed() { moveSpeed += .5f; }

                /// <summary>The speed of the bat when Turbo is activated</summary>
                public void Turbo() { moveSpeed += 8.0f; }

                /// <summary>Returns the speed of the bat back to normal after Turbo is deactivated</summary>
                public void DisableTurbo()
                {
                    moveSpeed = 7.0f;
                    isTurbo = false;
                }

                /// <summary>Returns the bat to the normal size after the Grow/Shrink powerup has expired</summary>
                public void NormalSize() { size = new Rectangle(0, 0, leftBat.Width, leftBat.Height); }

                /// <summary>Checks for the size of the bat</summary>
                public Rectangle GetSize() { return size; }

                /// <summary>Adds a point to the player or the AI after scoring. Currently disabled.</summary>
                public void IncrementPoints() { points++; }

                /// <summary>Checks for the number of points at the moment</summary>
                public int GetPoints() { return points; }

                /// <summary>Sets the default starting position for the bats</summary>
                public void SetPosition(Vector2 position)
                {
                    if (position.Y < 0)
                    {
                        position.Y = 0;
                    }
                    if (position.Y > yHeight - size.Height)
                    {
                        position.Y = yHeight - size.Height;
                    }
                    this.Position = position;
                }

                /// <summary>Checks for the current position of the bat</summary>
                public Vector2 GetPosition() { return Position; }

                /// <summary>Controls the bat moving up the screen</summary>
                public void MoveUp() { SetPosition(Position + new Vector2(0, -moveSpeed)); }

                /// <summary>Controls the bat moving down the screen</summary>
                public void MoveDown() { SetPosition(Position + new Vector2(0, moveSpeed)); }

                /// <summary>Updates the position of the AI bat, in order to track the ball</summary>
                public virtual void UpdatePosition(Ball ball)
                {
                    size.X = (int)Position.X;
                    size.Y = (int)Position.Y;
                }

                /// <summary>Resets the bat to the center location after a new game starts</summary>
                public void ResetPosition() { SetPosition(new Vector2(GetPosition().X, yHeight / 2 - size.Height)); }

                /// <summary>Used for the Growbat powerup</summary>
                public void GrowBat()
                {
                    // Doubles the size of the bat collision
                    size = new Rectangle(0, 0, leftBat.Width * 2, leftBat.Height * 2);
                }

                /// <summary>Used for the Shrinkbat powerup</summary>
                public void ShrinkBat()
                {
                    // 1/2 the size of the bat collision
                    size = new Rectangle(0, 0, leftBat.Width / 2, leftBat.Height / 2);
                }

                /// <summary>Draws the bats</summary>
                public virtual void Draw(SpriteBatch batch)
                {
                    batch.Draw(leftBat, size, Color.White);
                }
            }
        }
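    For illustration, one minimal way to give the ball usable bat references, assuming the game class already constructs both bats, is to pass those existing instances into the Ball constructor rather than leaving the ball's bat fields unassigned. The wiring below is a sketch, not code from the question:

        // Sketch: the ball receives the same Bat/AIBat instances the game
        // updates and draws, so bat.Position is always valid inside BatHit().
        public class Ball
        {
            private readonly Bat bat;
            private readonly AIBat aiBat;

            public Ball(ContentManager contentManager, Vector2 screenSize, Bat bat, AIBat aiBat)
            {
                this.bat = bat;
                this.aiBat = aiBat;
                // ... rest of the existing constructor ...
            }
        }

        // Hypothetical wiring in the game's initialization code:
        // bat   = new Bat(Content, screenSize, true);
        // aiBat = new AIBat(Content, screenSize, false);
        // ball  = new Ball(Content, screenSize, bat, aiBat);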


  • Working with Tile Notifications in Windows 8 Store Apps – Part I

    - by dwahlin
    One of the features that really makes Windows 8 apps stand out is the tile functionality on the start screen. While icons merely allow a user to start an application, tiles provide a more engaging way to draw the user into an application. Examples of "live" tiles on part of my current start screen are shown next:

    I'll admit that if you get enough of these tiles going, the start screen can actually be a bit distracting. Fortunately, a user can easily disable a live tile by right-clicking on it (or pressing and holding it on a touch device) and then selecting Turn live tile off from the AppBar:

    They can also make a wide tile smaller (into a square tile) or make a square tile bigger, assuming the application supports both squares and rectangles. In this post I'll walk through how to add tile notification functionality to an application. Both XAML/C# and HTML/JavaScript apps support live tiles, and I'll show the code for both options.

    Understanding Tile Templates

    The first thing you need to know if you want to add custom tile functionality (live tiles) to your application is that there is a collection of tile templates available out of the box. Each tile template has XML associated with it that you need to load, update with your custom data, and then feed into a tile update manager. By doing that you can control what shows in your app's tile on the Windows 8 start screen. So how do you learn more about the different tile templates and their respective XML? Fortunately, Microsoft has a nice documentation page in the Windows 8 Store SDK. Visit http://msdn.microsoft.com/en-us/library/windows/apps/hh761491.aspx to see a complete list of square and wide/rectangular tile templates that you can use. Looking through the templates you'll find, for example, the square TileSquareBlock template. It has the following XML template associated with it:

        <tile>
          <visual>
            <binding template="TileSquareBlock">
              <text id="1">Text Field 1</text>
              <text id="2">Text Field 2</text>
            </binding>
          </visual>
        </tile>

    An example of a wide/rectangular tile template is shown next:

        <tile>
          <visual>
            <binding template="TileWideImageAndText01">
              <image id="1" src="image1.png" alt="alt text"/>
              <text id="1">Text Field 1</text>
            </binding>
          </visual>
        </tile>

    To use these tile templates (or others you find interesting), update their content, and get them to show up on your app's tile on the Windows 8 start screen, you'll need to perform the following steps:

    1. Define the tile template to use in your app
    2. Load the tile template's XML into memory
    3. Modify the children of the <binding> tag
    4. Feed the modified tile XML into a new TileNotification instance
    5. Feed the TileNotification instance into the Update() method of the TileUpdateManager

    In the remainder of the post I'll walk through each of the steps listed above to provide wide and square tile notifications for an application. The wide tile will show an image and text, while the square tile will only show text. If you're going to provide custom tile notifications it's recommended that you provide both wide and square tiles, since users can switch between the two of them directly on the start screen.

    Note: When working with tile notifications it's possible to manipulate and update a tile's XML template without having to know XML parsing techniques. This can be accomplished using some C# notification extension classes that are available.
    In this post I'm going to focus on working with tile notifications using an XML parser, so that the focus stays on the steps required to add notifications to the Windows 8 start screen rather than on external extension classes. You can access the extension classes in the Windows 8 samples gallery if you're interested.

    Steps to Create Custom App Tile Notifications

    Step 1: Define the tile template to use in your app

    Although you can cut-and-paste a tile template's XML directly into your C# or HTML/JavaScript Windows Store app and then parse it using an XML parser, it's easier to use the built-in TileTemplateType enumeration from the Windows.UI.Notifications namespace. It provides direct access to the XML for the various templates, so once you locate a template you like in the documentation (mentioned above), simply reference it:

    HTML/JavaScript

        var notifications = Windows.UI.Notifications;
        var template = notifications.TileTemplateType.tileWideImageAndText01;

    XAML/C#

        var template = TileTemplateType.TileWideImageAndText01;

    Step 2: Load the tile template's XML into memory

    Once the target template's XML is identified, load it into memory using the TileUpdateManager's GetTemplateContent() method. This method parses the template XML and returns an XmlDocument object:

    HTML/JavaScript

        var tileXml = notifications.TileUpdateManager.getTemplateContent(template);

    XAML/C#

        var tileXml = TileUpdateManager.GetTemplateContent(template);

    Step 3: Modify the children of the <binding> tag

    Once the XML for a given template is loaded into memory, you need to locate the appropriate <image> and/or <text> elements in the XML and update them with your app data. This can be done using standard XML DOM manipulation techniques. The example code below locates the image element and loads the path of an image file located in the project into its src attribute. The code also creates a square tile that consists of text, updates its <text> element, and then imports and appends it into the wide tile's XML.

    HTML/JavaScript

        var image = tileXml.selectSingleNode('//image[@id="1"]');
        image.setAttribute('src', 'ms-appx:///images/' + imageFile);
        image.setAttribute('alt', 'Live Tile');

        var squareTemplate = notifications.TileTemplateType.tileSquareText04;
        var squareTileXml = notifications.TileUpdateManager.getTemplateContent(squareTemplate);
        var squareTileTextAttributes = squareTileXml.selectSingleNode('//text[@id="1"]');
        squareTileTextAttributes.appendChild(squareTileXml.createTextNode(content));
        var node = tileXml.importNode(squareTileXml.selectSingleNode('//binding'), true);
        tileXml.selectSingleNode('//visual').appendChild(node);

    XAML/C#

        var tileXml = TileUpdateManager.GetTemplateContent(template);
        var text = tileXml.SelectSingleNode("//text[@id='1']");
        text.AppendChild(tileXml.CreateTextNode(content));
        var image = (XmlElement)tileXml.SelectSingleNode("//image[@id='1']");
        image.SetAttribute("src", "ms-appx:///Assets/" + imageFile);
        image.SetAttribute("alt", "Live Tile");
        Debug.WriteLine(image.GetXml());

        var squareTemplate = TileTemplateType.TileSquareText04;
        var squareTileXml = TileUpdateManager.GetTemplateContent(squareTemplate);
        var squareTileTextAttributes = squareTileXml.SelectSingleNode("//text[@id='1']");
        squareTileTextAttributes.AppendChild(squareTileXml.CreateTextNode(content));
        var node = tileXml.ImportNode(squareTileXml.SelectSingleNode("//binding"), true);
        tileXml.SelectSingleNode("//visual").AppendChild(node);

    Step 4: Feed the modified tile XML into a new TileNotification instance

    Now that the XML data has been updated with the desired text and images, it's time to load the XmlDocument object into a new TileNotification instance:

    HTML/JavaScript

        var tileNotification = new notifications.TileNotification(tileXml);

    XAML/C#

        var tileNotification = new TileNotification(tileXml);

    Step 5: Feed the TileNotification instance into the Update() method of the TileUpdateManager

    Once the TileNotification instance has been created and the XmlDocument has been passed to its constructor, it needs to be passed to the Update() method of a TileUpdater in order to be shown on the Windows 8 start screen:

    HTML/JavaScript

        notifications.TileUpdateManager.createTileUpdaterForApplication().update(tileNotification);

    XAML/C#

        TileUpdateManager.CreateTileUpdaterForApplication().Update(tileNotification);

    Once the tile notification is updated it'll show up on the start screen. An example of the wide and square tiles created with the included demo code is shown next:

    Download the HTML/JavaScript and XAML/C# sample application here. In the next post in this series I'll walk through how to queue multiple tiles and clear a queue.


  • Email sent from Centos end up in user spam folder

    - by oObe
    I am facing this issue: I use the default Postfix MTA on CentOS, but the mail ends up in the user's spam folder. This does not seem to be a problem on Debian using exim4. Both hosts have a hostname and domain name configured, and both relay mail through an external SMTP host. Both configurations and the received email headers are attached. The difference seems to be that Debian adds an additional envelope-from tag and a From tag, apart from some minor syntax differences. Any help in resolving this is appreciated.

    The IP addresses and DNS names are masked as follows:

        1.2.3.4 = My IP address
        smtp.host.com = external SMTP host for my company
        [email protected] = account at the SMTP host
        centos.abc.com = local CentOS server
        debian.abc.com = local Debian server

    Thanks.

    CentOS main.cf config with the following params configured:

        myhostname = centos.abc.com
        mydomain = abc.com
        myorigin = centos.abc.com
        relayhost = smtp.host.com

    CentOS - user receiving mail header:

        Return-Path: <[email protected]>
        Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP; Thu, 27 Sep 2012 13:36:49 +0800
        Received: by centos.abc.com (Postfix, from userid 0) id 1E0637B89; Fri, 28 Sep 2012 13:36:39 +0800 (SGT)
        Date: Fri, 28 Sep 2012 13:36:39 +0800
        To: [email protected]
        Subject: Test mail from centos
        User-Agent: Heirloom mailx 12.4 7/29/08
        MIME-Version: 1.0
        Content-Type: text/plain; charset=us-ascii
        Content-Transfer-Encoding: 7bit
        Message-Id: <[email protected]>
        From: [email protected] (root)
        X-SmarterMail-TotalSpamWeight: 0
        X-Antivirus: avast! (VPS 120926-1, 27/09/2012), Inbound message
        X-Antivirus-Status: Clean

    http://i.imgur.com/7WAYX.jpg

    Debian exim4 config:

        # This is a Debian specific file
        dc_eximconfig_configtype='smarthost'
        dc_other_hostnames='debian.abc.com'
        dc_local_interfaces='127.0.0.1 ; ::1'
        dc_readhost='debian.abc.com'
        dc_relay_domains='smtp.host.com'
        dc_minimaldns='false'
        dc_relay_nets='127.0.0.1'
        dc_smarthost='smtp.host.com'
        CFILEMODE='644'
        dc_use_split_config='false'
        dc_hide_mailname='true'
        dc_mailname_in_oh='true'
        dc_localdelivery='mail_spool'

    Debian - user receiving mail header:

        Return-Path: <[email protected]>
        Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP; Thu, 27 Sep 2012 15:02:53 +0800
        Received: from root by debian.abc.com with local (Exim 4.72) (envelope-from <[email protected]>) id 1TH86d-00010v-G9 for [email protected]; Thu, 27 Sep 2012 15:01:55 +0800
        Date: Thu, 27 Sep 2012 15:01:55 +0800
        Message-Id: <[email protected]>
        To: [email protected]
        Subject: Test from debian
        From: root <[email protected]>
        X-SmarterMail-TotalSpamWeight: 0
        X-Antivirus: avast! (VPS 120926-1, 27/09/2012), Inbound message
        X-Antivirus-Status: Clean

    http://imgur.com/nMsMA.jpg
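    For context (an editorial aside, not a confirmed fix for this question): one Postfix knob often compared in smarthost setups like this is generic address rewriting, which rewrites local-only sender addresses on outbound mail; the mapping below is a sketch using the masked addresses from above:

        # /etc/postfix/main.cf (sketch)
        smtp_generic_maps = hash:/etc/postfix/generic

        # /etc/postfix/generic (sketch; maps the local root address to a routable one)
        [email protected]    [email protected]

        # rebuild the lookup table and reload Postfix afterwards:
        #   postmap /etc/postfix/generic && postfix reload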


  • Connecting SceneBuilder edited FXML to Java code

    - by daniel
    Recently I had to answer several questions regarding how to connect a UI built with the JavaFX SceneBuilder 1.0 Developer Preview to Java code. So I figured that a short overview might be helpful. But first, let me state the obvious.

    What is FXML?

    To make it short, FXML is an XML-based declaration format for JavaFX. JavaFX provides an FXML loader which will parse FXML files and construct a graph of Java objects from them. It may sound complex when stated like that, but it is actually quite simple. Here is an example of an FXML file, which instantiates a StackPane and puts a Button inside it:

        <?xml version="1.0" encoding="UTF-8"?>

        <?import java.lang.*?>
        <?import java.util.*?>
        <?import javafx.scene.control.*?>
        <?import javafx.scene.layout.*?>
        <?import javafx.scene.paint.*?>

        <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml">
          <children>
            <Button mnemonicParsing="false" text="Button" />
          </children>
        </StackPane>

    ...and here is the code I would have had to write if I had chosen to do the same thing programmatically:

        import javafx.scene.control.*;
        import javafx.scene.layout.*;
        ...
        final Button button = new Button("Button");
        button.setMnemonicParsing(false);

        final StackPane stackPane = new StackPane();
        stackPane.setPrefWidth(200.0);
        stackPane.setPrefHeight(150.0);
        stackPane.getChildren().add(button);

    As you can see, FXML is rather simple to understand, as it is quite close to the JavaFX API. So OK, FXML is simple, but why would I use it? Well, there are several answers to that, but my own favorite is: because you can make it with SceneBuilder.

    What is SceneBuilder?

    In short, SceneBuilder is a layout tool that lets you graphically build JavaFX user interfaces by dragging and dropping JavaFX components from a library, and save the result as an FXML file. SceneBuilder can also be used to load and modify JavaFX scene graphs declared in FXML. Here is how I made the small FXML file above:

    1. Start the JavaFX SceneBuilder 1.0 Developer Preview.
    2. In the Library on the left hand side, click on 'StackPane' and drag it onto the content view (the white rectangle).
    3. In the Library, select a Button and drag it onto the StackPane on the content view.
    4. In the Hierarchy panel on the left hand side, select the StackPane component, then invoke 'Edit > Trim To Selected' from the menu bar.

    That's it - you can now save, and you will obtain the small FXML file shown above. Of course this is only a trivial sample, made for the sake of the example, and SceneBuilder will let you create much more complex UIs.

    So, I now have an FXML file. But what do I do with it? How do I include it in my program? How do I write my main class?

    Loading an FXML file with JavaFX

    Well, that's the easy part, because the piece of code you need to write never changes. You can download and look at the SceneBuilder samples if you need to be convinced, but here is the short version:

    1. Create a Java class (let's call it 'Main.java') which extends javafx.application.Application.
    2. In the same directory, copy/save the FXML file you just created using SceneBuilder. Let's name it "simple.fxml".

    Now here is the Java code for the Main class, which simply loads the FXML file and puts it as the root of a stage's scene.

        /*
         * Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
         */
        package simple;

        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javafx.application.Application;
        import javafx.fxml.FXMLLoader;
        import javafx.scene.Scene;
        import javafx.scene.layout.StackPane;
        import javafx.stage.Stage;

        public class Main extends Application {

            /**
             * @param args the command line arguments
             */
            public static void main(String[] args) {
                Application.launch(Main.class, (java.lang.String[]) null);
            }

            @Override
            public void start(Stage primaryStage) {
                try {
                    StackPane page = (StackPane) FXMLLoader.load(Main.class.getResource("simple.fxml"));
                    Scene scene = new Scene(page);
                    primaryStage.setScene(scene);
                    primaryStage.setTitle("FXML is Simple");
                    primaryStage.show();
                } catch (Exception ex) {
                    Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    Great! Now I only have to use my favorite IDE to compile the class and run it. But... wait... what does it do? Well, nothing. It just displays a button in the middle of a window. There's no logic attached to it. So how do we do that? How can I connect this button to my application logic? Here is how:

    Connection to code

    First let's define our application logic. Since this post is only intended to give a very brief overview, let's keep things simple. Let's say that the only thing I want to do is print a message on System.out when the user clicks on my button. To do that, I'll need to register an action handler with my button. And to do that, I'll need to somehow get a handle on my button: I'll need some kind of controller logic that will get my button and add my action handler to it.

    So how do I get a handle on my button and pass it to my controller? Once again, this is easy: I just need to write a controller class for my FXML. With each FXML file it is possible to associate a controller class defined for that FXML. That controller class makes the link between the UI (the objects defined in the FXML) and the application logic. To each object defined in FXML we can associate an fx:id. The value of the id must be unique within the scope of the FXML, and is the name of an instance variable inside the controller class in which the object will be injected. Since I want to have access to my button, I will need to add an fx:id to my button in FXML, and declare an @FXML variable in my controller class with the same name. In other words, I will need to add fx:id="myButton" to my button in FXML:

        <Button fx:id="myButton" mnemonicParsing="false" text="Button" />

    and declare @FXML private Button myButton in my controller class:

        @FXML private Button myButton; // value will be injected by the FXMLLoader

    Let's see how to do this.

    Add an fx:id to the Button object

    1. Load "simple.fxml" in SceneBuilder, if not already done.
    2. In the Hierarchy panel (bottom left), or directly on the content view, select the Button object.
    3. Open the Properties section of the inspector (right panel) for the Button object.
    4. At the top of the section you will see a text field labelled fx:id. Enter myButton in that field and validate.

    Associate a controller class with the FXML file

    1. Still in SceneBuilder, select the top root object (in our case, that's the StackPane), and open the Code section of the inspector (right hand side).
    2. At the top of the section you should see a text field labelled Controller Class. In the field, type simple.SimpleController. This is the name of the class we're going to create manually.
If you save at this point, the FXML will look like this: -- <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.paint.*?> <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml" fx:controller="simple.SimpleController"> <children> <Button fx:id="myButton" mnemonicParsing="false" text="Button" /> </children> </StackPane> As you can see, the name of the controller class has been added to the root object: fx:controller="simple.SimpleController" Coding the controller class In your favorite IDE, create an empty SimpleController.java class. Now what does a controller class look like? What should we put inside? Well - SceneBuilder will help you there: it will show you an example of a controller skeleton tailored for your FXML. In the menu bar, invoke View > Show Sample Controller Skeleton. A popup appears, displaying a suggestion for the controller skeleton: copy the code displayed there, and paste it into your SimpleController.java: /** * Sample Skeleton for "simple.fxml" Controller Class * Use copy/paste to copy paste this code into your favorite IDE **/ package simple; import java.net.URL; import java.util.ResourceBundle; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.control.Button; public class SimpleController implements Initializable { @FXML // fx:id="myButton" private Button myButton; // Value injected by FXMLLoader @Override // This method is called by the FXMLLoader when initialization is complete public void initialize(URL fxmlFileLocation, ResourceBundle resources) { assert myButton != null : "fx:id=\"myButton\" was not injected: check your FXML file 'simple.fxml'."; // initialize your logic here: all @FXML variables will have been injected } } Note that the code displayed by SceneBuilder is there only for educational purposes: SceneBuilder does not create and does not modify Java files. This is simply a hint of what you can use, given the fx:id present in your FXML file. You are free to copy all or part of the displayed code and paste it into your own Java class. Now at this point, all that remains is to add our logic to the controller class. Quite easy: in the initialize method, I will register an action handler with my button: -- ... // initialize your logic here: all @FXML variables will have been injected myButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { System.out.println("That was easy, wasn't it?"); } }); ... (you will also need to import javafx.event.ActionEvent and javafx.event.EventHandler in SimpleController.java) That's it - if you now compile everything in your IDE, and run your application, clicking on the button should print a message on the console! Summary What happens is that in Main.java, the FXMLLoader will load simple.fxml from the jar/classpath, as specified by 'FXMLLoader.load(Main.class.getResource("simple.fxml"))'. When loading simple.fxml, the loader will find the name of the controller class, as specified by 'fx:controller="simple.SimpleController"' in the FXML. Upon finding the name of the controller class, the loader will create an instance of that class, in which it will try to inject all the objects that have an fx:id in the FXML. Thus, after having created '<Button fx:id="myButton" ...
/>', the FXMLLoader will inject the button instance into the '@FXML private Button myButton;' instance variable found on the controller instance. This is because the instance variable has an @FXML annotation, and the name of the variable exactly matches the value of the fx:id. Finally, when the whole FXML has been loaded, the FXMLLoader will call the controller's initialize method, and our code that registers an action handler with the button will be executed. For a complete example, take a look at the HelloWorld SceneBuilder sample. Also make sure to follow the SceneBuilder Get Started guide, which will guide you through a much more complete example. Of course, there are more elegant ways to set up an Event Handler using FXML and SceneBuilder. There are also many different ways to work with the FXMLLoader. But since it's starting to be very late here, I think it will have to wait for another post. I hope you have enjoyed the tour! --daniel
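As a small teaser for that future post, here is a sketch of one of those more elegant ways (my own addition - the handler method name is a hypothetical choice): instead of calling setOnAction in initialize, you can reference a controller method directly from the FXML via the onAction attribute, and the FXMLLoader will wire it up for you:

<Button fx:id="myButton" onAction="#handleButtonAction" mnemonicParsing="false" text="Button" />

and in SimpleController:

// Referenced from the FXML by "#handleButtonAction"
@FXML
private void handleButtonAction(javafx.event.ActionEvent event) {
    System.out.println("That was easy, wasn't it?");
}

The name after the '#' must match an @FXML-annotated method in the controller class taking an ActionEvent (or no argument at all).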

    Read the article

  • Can not access network computers anymore

    - by Johny Skovdal
Last Thursday (03/05/12) I got a new computer to be able to work from home. I plugged it, by cable, into the company network and installed most of the software needed for me to do so, by accessing a share on my stationary computer at work. I had no issues here whatsoever, and everything just worked. Yesterday evening I tried accessing the company network through Windows VPN, and while I was able to connect to the network, I was unable to connect to any computers on the network. I did, however, get an error when connecting, but I can't seem to reproduce it to get the details of the error message. Today I am sitting on the company network again, and now I cannot access anything on the network like I could last Thursday, though I can ping all the computers I am attempting to access. Here is a list of details that might help in troubleshooting this issue (updated): List of observations / actions My computer is identical to another computer that has no issues. It is not on the domain but rather on the default workgroup, but this was not an issue last Thursday, so I am assuming it still is not. I am able to access my e-mail on the exchange server. I can connect to our TFS server from Visual Studio but not from Explorer. I can also connect to Database Servers and Remote Desktop. I can see several computers when browsing network computers, but I am unable to connect to any of them. When trying to connect to a computer I am consistently met with the error code "0x80070035" (network path not found). I also get the 0x80070035 error when double clicking the target computer from the Network UI. I am not met with a login dialog when trying to access a computer, as I should be, since I am not on the domain. (I did log in to Exchange, Remote Desktop and TFS though) Between Thursday where it worked and Sunday evening where it did not, I have installed quite a few security updates, plus various tools etc. that I need for programming. I have tried accessing by computer name and IP and neither of them works. I can ping by computer name. I have deleted all (1 entry) stored network credentials. I am able to access my computer from the target computer. Client and Server can see each other on the network = Network Discovery is enabled. I am using the network profile "Work". When accessing the network through VPN, I am unable to get anything to work using computer names, but all of the above applies when using IP addresses instead of computer names. I run Windows 7 Home Premium on my computer. Using PowerShell to attempt to access a share I get the following error (ComputerName and ShareName being correct values of course): PS C:\Users\MyUser> cd \\ComputerName\ShareName Set-Location : Cannot find path '\\ComputerName\ShareName' because it does not exist. At line:1 char:3 + cd <<<< \\ComputerName\ShareName + CategoryInfo : ObjectNotFound: (\\ComputerName\ShareName:String) [Set-Location], ItemNotFoundException + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand However, pinging the same machine (ping ComputerName) from PowerShell I get a response immediately.
(As mentioned in the list of observations/actions, I tried the above with the IP address again on VPN, and got the same result.) Conclusion So to sum up, pretty much the only thing I cannot do is access the other computers through browsing (explorer.exe, PowerShell, map network drive, etc.). It seems the path cannot be resolved when connecting to other computers through browsing, though the path gets resolved perfectly using all kinds of other services. Any recommendations as to what I can try next to resolve the issue? :)
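For anyone who wants to reproduce the two behaviors from code, here is a minimal Java sketch (the host and share names are placeholders for the real ones); on a machine showing these symptoms the reachability check prints true while the share check prints false:

import java.io.File;
import java.net.InetAddress;

public class ShareCheck {
    public static void main(String[] args) throws Exception {
        String host = "ComputerName";   // placeholder
        String share = "ShareName";     // placeholder

        // Ping-style reachability (ICMP or TCP echo, depending on privileges)
        boolean reachable = InetAddress.getByName(host).isReachable(2000);
        System.out.println("reachable: " + reachable);

        // UNC path resolution - this is what fails with 0x80070035
        boolean shareVisible = new File("\\\\" + host + "\\" + share).exists();
        System.out.println("share visible: " + shareVisible);
    }
}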

    Read the article

  • Wi-Fi Stick with ZD1211 chip refuses to work on Ubuntu >8.10. No clue.

    - by Benjamin Maus
    I have a machine running Ubuntu 9.10 (Karmic *x86_64*). Everything is running smooth so far, except for the Wi-Fi USB Stick. The same device worked perfectly in 8.10. The wireless device is a GW-US54GXS using the Zydas Zd1211 chipset. Dmesg output after plugging in: [ 196.303436] phy0: Selected rate control algorithm 'minstrel' [ 196.304209] zd1211rw 2-1:1.0: phy0 [ 196.304227] usbcore: registered new interface driver zd1211rw [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725 [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N- [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Syslog output: Nov 5 11:20:24 somesystem kernel: [ 196.303436] phy0: Selected rate control algorithm 'minstrel' Nov 5 11:20:24 kierkegaard NetworkManager: <info> Found radio killswitch rfkill0 (at /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/ieee80211/phy0/rfkill0) (driver <unknown>) Nov 5 11:20:24 somesystem kernel: [ 196.304209] zd1211rw 2-1:1.0: phy0 Nov 5 11:20:24 somesystem kernel: [ 196.304227] usbcore: registered new interface driver zd1211rw Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0) Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0): no ifupdown configuration found. Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0) Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0): no ifupdown configuration found. Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): driver supports SSID scans (scan_capa 0x01). Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): new 802.11 WiFi device (driver: 'zd1211rw') Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/2 Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): now managed Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): device state change: 1 -> 2 (reason 2) Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): bringing up device. Nov 5 11:20:24 somesystem kernel: [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub Nov 5 11:20:24 somesystem kernel: [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Nov 5 11:20:24 somesystem kernel: [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725 Nov 5 11:20:24 somesystem kernel: [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N- Nov 5 11:20:24 somesystem NetworkManager: <WARN> nm_device_hw_bring_up(): (wlan0): device not up after timeout! Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): deactivating device (reason: 2). 
Nov 5 11:20:24 somesystem kernel: [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub Nov 5 11:20:24 somesystem kernel: [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Nov 5 11:20:29 somesystem wpa_supplicant[978]: Could not set interface 'wlan0' UP Nov 5 11:20:29 somesystem wpa_supplicant[978]: Failed to initialize driver interface Nov 5 11:20:29 somesystem NetworkManager: <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface. Gnome tells me in the network menu that the device was "not ready". It appears in iwconfig but not in ifconfig. The same symptoms appear when I boot from the live CD. How can I solve this dilemma?

    Read the article

  • Nashorn in the Twitterverse, Continued

    - by jlaskey
After doing the Twitter example, it seemed reasonable to try graphing the result with JavaFX.  At this time the Nashorn project doesn't have a JavaFX shell, so we have to go through some hoops to create a JavaFX application.  I thought showing you some of those hoops might give you some idea about what you can do mixing Nashorn and Java (we'll add a JavaFX shell to the todo list.) First, let's look at the meat of the application.  Here is the repackaged version of the original twitter example. var twitter4j      = Packages.twitter4j; var TwitterFactory = twitter4j.TwitterFactory; var Query          = twitter4j.Query; function getTrendingData() {     var twitter = new TwitterFactory().instance;     var query   = new Query("nashorn OR nashornjs");     query.since("2012-11-21");     query.count = 100;     var data = {};     do {         var result = twitter.search(query);         var tweets = result.tweets;         for each (tweet in tweets) {             var date = tweet.createdAt;             var key = (1900 + date.year) + "/" +                       (1 + date.month) + "/" +                       date.date;             data[key] = (data[key] || 0) + 1;         }     } while (query = result.nextQuery());     return data; } Instead of just printing out tweets, getTrendingData tallies "tweets per date" during the sample period (since "2012-11-21", the date "New Project: Nashorn" was posted.)   getTrendingData then returns the resulting tally object. Next, use JavaFX BarChart to display that data. var javafx         = Packages.javafx; var Stage          = javafx.stage.Stage var Scene          = javafx.scene.Scene; var Group          = javafx.scene.Group; var Chart          = javafx.scene.chart.Chart; var FXCollections  = javafx.collections.FXCollections; var ObservableList = javafx.collections.ObservableList; var CategoryAxis   = javafx.scene.chart.CategoryAxis; var NumberAxis     = javafx.scene.chart.NumberAxis; var BarChart       = javafx.scene.chart.BarChart; var XYChart        = javafx.scene.chart.XYChart; var Series         = XYChart.Series; var Data           = XYChart.Data; function graph(stage, data) {     var root = new Group();     stage.scene = new Scene(root);     var dates = Object.keys(data);     var xAxis = new CategoryAxis();     xAxis.categories = FXCollections.observableArrayList(dates);     var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0);     var series = FXCollections.observableArrayList();     for (var date in data) {         series.add(new Data(date, data[date]));     }     var tweets = new Series("Tweets", series);     var barChartData = FXCollections.observableArrayList(tweets);     var chart = new BarChart(xAxis, yAxis, barChartData, 25.0);     root.children.add(chart); } I should point out that there is a lot of subtlety going on in the background.  For example; stage.scene = new Scene(root) is equivalent to stage.setScene(new Scene(root)). If Nashorn can't find a property (scene), then it searches (via Dynalink) for the Java Beans equivalent (setScene). Also note that Nashorn is magically handling the generic class FXCollections.  Finally, with the call to observableArrayList(dates), Nashorn is automatically converting the JavaScript array dates to a Java collection.  It really is hard to identify which objects are JavaScript and which are Java.  Does it really matter? Okay, with the meat out of the way, let's talk about the hoops. When working with JavaFX, you start with a main subclass of javafx.application.Application.
This class handles the initialization of the JavaFX libraries and the event processing.  This is what I used for this example; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import javafx.application.Application; import javafx.stage.Stage; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; import javax.script.ScriptException; public class TrendingMain extends Application { private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); private Trending trending; public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) throws Exception { trending = (Trending) load("Trending.js"); trending.start(stage); } @Override public void stop() throws Exception { trending.stop(); } private Object load(String script) throws IOException, ScriptException { try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) { return engine.eval(new InputStreamReader(is, "utf-8")); } } } To initialize Nashorn, we use JSR-223's javax.script.  private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); This code sets up an instance of the Nashorn engine for evaluating scripts. The load method reads a script into memory and then gets the engine to eval that script.  Note that load also returns the result of the eval. Now for the fun part.  There are several different approaches we could use to communicate between the Java main and the script.  In this example we'll use a Java interface.  The JavaFX main needs to do at least start and stop, so the following will suffice as an interface; public interface Trending {     public void start(Stage stage) throws Exception;     public void stop() throws Exception; } At the end of the example's script we add; (function newTrending() {     return new Packages.Trending() {         start: function(stage) {             var data = getTrendingData();             graph(stage, data);             stage.show();         },         stop: function() {         }     } })(); which instantiates an anonymous implementation of Trending and overrides the start and stop methods.  The result of this function call is what is returned to main via the eval. trending = (Trending) load("Trending.js"); To recap, the script Trending.js contains functions getTrendingData, graph and newTrending, plus the call at the end to newTrending.  Back in the Java code, we cast the result of the eval (call to newTrending) to Trending, thus, we end up with an object that we can then use to call back into the script.  trending.start(stage); Voila!
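A variation worth sketching (my own, not from the original post): if Trending.js defined top-level start(stage) and stop() functions instead of returning an object from newTrending(), JSR-223's Invocable could build the interface proxy for us and the cast would go away:

import java.io.InputStream;
import java.io.InputStreamReader;
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class TrendingLoader {
    // Evaluates the script, then asks the engine to implement Trending by
    // mapping the interface methods onto the script's top-level functions.
    public static Trending load(String script) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        try (InputStream is = TrendingLoader.class.getResourceAsStream(script)) {
            engine.eval(new InputStreamReader(is, "utf-8"));
        }
        return ((Invocable) engine).getInterface(Trending.class);
    }
}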

    Read the article

  • VPS 512 MB RAM with WordPressMU consumes lots of memory

    - by CAPitalZ
    I have googled for days and gathered all optimization suggestions and tried. My sites are not getting any high hits. May be like 100 hits per day [all my sites combined]. Here are my specs I have 512 MB RAM VPS with burstable 1024 MB. Centos 5 32-bit & cPanel/WHM Apache 2.2 MySQL 5.0 PHP 5.3.2 Here is my Configs I have 2 WordPressMU production sites, and 1 test site my.cnf # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/lib/mysql/mysql.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking skip-bdb skip-innodb key_buffer = 16M max_allowed_packet = 1M table_cache = 64 sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M #CAPitalZ thread_cache_size=8 thread_concurrency=4 #query_cache_type=1 #query_cache_limit=1M query_cache_size=16M concurrent_insert=2 low_priority_updates=1 max_connections=50 tmp_table_size=16M max_heap_table_size=16M join_buffer_size=1M interactive_timeout=25 wait_timeout=1000 #connect_timout=10 not able to restart mysql max_connect_errors=10 # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (via the "enable-named-pipe" option) will render mysqld useless! # skip-networking # Disable Federated by default skip-federated # Replication Master Server (default) # binary logging is required for replication log-bin=mysql-bin # required unique id between 1 and 2^32 - 1 # defaults to 1 if master-host is not set # but will not function as a master if omitted server-id = 1 [mysqld_safe] open_files_limit=8192 [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [myisamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [mysqlhotcopy] interactive-timeout httpd.conf I have unselected many modules and recompiled using EasyApache in WHM. Only have the following modules built Deflate Expires Fileprotect Imagemap MPM Prefork Version [default] EAccelerator for PHP Bcmath Calendar CurlSSL [I'm using Curl. But I don't have any https sites] Expat GD [for image cropping] Gettext Imap Mbregex [default] Mbstring [need both Mbregex and Mbstring for utf-8] Mysql of the system MySQL "Improved" extension. Sockets TTF (FreeType) [I'm using custom font] Zlib Under Global Configuration I only have FollowSymLinks enabled I Have TraceEnable, ServerSignature, FileETag OFF ServerTokens ProductOnly DirectoryIndex Priority has index.php as the first one I have removed Clamd [Clam Anti-virus] SpamAssasin is Off Under Tweak Settings Default catch-all/default address behavior for new accounts. 
This is set to "fail" All stats programs turned off I have eAccelerator installed and checked in phpinfo and it's working [Pre VirtualHost Include under WHM] Timeout 20 KeepAlive On MaxKeepAliveRequests 200 KeepAliveTimeout 3 MinSpareServers 1 MaxSpareServers 3 StartServers 1 ServerLimit 50 MaxClients 50 MaxRequestsPerChild 4000 ExtendedStatus Off #ServerType standalone this throws error HostnameLookups Off <Directory "/"> AllowOverride None </Directory> My sites will take ages to load and WHM/cPanel will not even load. adadaa.com/ http://adadaa.net/ kadais.ca/ My average memory consumption is like 1000 MB! [yes, always bursting] The process that consumes the most CPU and also the most memory is mysql. But I also get like 15 httpd processes [when it's bursting]. I already got a warning from cpuwatchcheck saying "While processing, the cpu has been maxed out for more than a 6 hour period. The current load/uptime line on the server at the time of this email is 07:00:37 up 11:30, 0 users, load average: 14.64, 16.79, 20.07" I don't know, I have tried switching these config values many different times, but nothing seems to work. Please shed some light... Thanks
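A rough memory budget makes the problem visible (assuming, as a ballpark only, 20-30 MB of resident memory per Apache prefork child running PHP/WordPress - the real number depends on the loaded modules): MaxClients 50 × ~25 MB ≈ 1250 MB for Apache alone in the worst case, before MySQL's buffers and everything else are counted. That is well past the 512 MB guarantee and even the 1024 MB burst, so any concurrency spike forces the VPS to burst. Under the same assumptions, a MaxClients/ServerLimit around 10-15 would cap Apache near 250-375 MB and leave headroom for MySQL.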

    Read the article

  • Error on 64 Bit Install of IIS – LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up I was left with the following error message from IIS: Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed This is on Windows 7 64 bit and running on an ASP.NET 4.0 Application configured for running 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box. The problem here is that IIS is trying to load this ISAPI filter from the 32 bit folder – it should be loading it from the Framework64 folder; note the plain Framework folder in the error above. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. It’s not terribly important because of this limited focus, but it’s a default loaded component. After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks): 1. Switch IIS to run in 32 bit mode 2. Fix the filter listing in ApplicationHost.config Switching IIS to allow 32 Bit Code This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter and enabling 32 bit code gets you around it. To configure your Application Pool, open the Application Pool in IIS Manager, bring up Advanced Options and Enable 32 Bit Applications: And voila, the error message above goes away. Fix Filters Enabling 32 bit code is a quick fix solution to this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke Interop with 32 bit apps there’s usually no need for enabling 32 bit code in an Application Pool as you can run in native 64 bit code. So trying to get 64 bit working natively is a pretty key feature in my opinion :-) So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list here’s what I see: Notice that there are 3 entries for ASP.NET 4.0 in this list. Only two of them, however, are specifically scoped to 32 bit or 64 bit. As you can see the 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Therein lies our problem. You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with a 32 bit editor (whoever thought that was a good idea???? WTF). You have to open ApplicationHost.Config in a 64 bit native text editor – which Visual Studio is not. Or my favorite editor: EditPad Pro. Since I don’t have a native 64 bit editor handy Notepad was my only choice. Or as an alternative you can use the IIS 7.5 Configuration Editor which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub values in a property editor type interface.
I had no idea this tool existed prior to today and it’s pretty cool as it gives you some visual clues to the options available – especially in the absence of the Intellisense you’d get in Visual Studio (which doesn’t work here). To use the Configuration Editor go to the Web Site root and use the Configuration Editor option in the Management Group. Drill into System.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this: which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries: <filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" /> <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" /> <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" /> The key attribute we’re concerned with here is the preCondition and the bitness subvalue. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to fix the startup problem is to remove the generic entry from this list here or in the filters list shown earlier and leave only the bitness specific versions active. The preCondition attribute acts as a filter and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye on in general – if a bitness value is missing it’s easy to run into conflicts like this with any settings that are global and especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries or remove the non-specific versions and add bit specific entries. So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem a lot of solutions pointed at using aspnet_regiis –r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errant entry with no bitness set. Hopefully this post will help out anybody who runs into a similar situation without having to troubleshoot all the way down into the configuration settings and noticing the bitness settings. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS and things like this are exactly what I was afraid of finding. Now that I ran into this I have a good idea what to look for with 32/64 bit misconfigurations in IIS at least. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7   ASP.NET
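For reference, here is one way the cleaned-up raw entries could look after the fix described above (a sketch based on the article's own entries - the generic one removed, leaving only the bitness-qualified pair):

<filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" />
<filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" />

Alternatively, keeping the generic entry but giving its preCondition an explicit bitness32 value achieves the same effect.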

    Read the article

  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
If you're running on IIS 7 and a 64 bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as: ActiveX component can't create object   (Error 429) (actually without error handling the error just shows up as a 500 error page) In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads to some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this: <% Set banner = Server.CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> Originally this code had no specific error checking as above so the ASP pages just failed with 500 error pages from the Web server. To find out what the problem is, this code is more useful, at least for debugging: <% ON ERROR RESUME NEXT Set banner = Server.CreateObject("wwBanner.aspBanner") Response.Write(err.Number & " - " & err.Description) banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> which results in: 429 - ActiveX component can't create object which at least gives you a slight clue. In ASP.NET invoking the same COM object with code like this: <% dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic; banner.cBANNERFILE = "wwsitebanners"; Response.Write(banner.getBanner(-1)); %> results in: Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). The class is in fact registered though and the COM server loads fine from a command prompt or other COM client. This error can be caused by a COM server that doesn't load. It looks like a COM registration error. There are a number of traditional reasons why this error can crop up of course. The server isn't registered (run regsvr32 to register a DLL server or use the /regserver switch on an EXE server) Access permissions aren't set on the COM server (Web account has to be able to read the DLL ie. Network service) The COM server fails to load during initialization ie. failing during startup One thing I always do to check for COM errors is fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with: loBanners = CREATEOBJECT("wwBanner.aspBanner") loBanners.cBannerFile = "wwsitebanners" ? loBanners.GetBanner(-1) and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript to do the same thing and run the .vbs file from the command prompt: Set banner = CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" MsgBox(banner.getBanner(-1)) Since both of these work, it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double checked permissions for the Application Pool and the permissions of the folder where the DLL lives and both are properly set to allow access by the Application Pool impersonated user. Just to be sure I assigned an Admin user to the Application Pool but still no go. So now what? 64 bit Servers Ahoy A couple of weeks back I had set up a few of my Application pools to 64 bit mode.
My server is Server 2008 64 bit and by default Application Pools run 64 bit. Originally when I installed the server I set up most of my Application Pools to 32 bit mainly for backwards compatibility. But as more of my code migrates to 64 bit OS's I figured it'd be a good idea to see how well code runs under 64 bit code. The transition has been mostly painless. Until today when I noticed the problem with the code above when scrolling to my IIS logs and noticing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object. It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode. The issue is that 32 bit COM objects (ie. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode but I didn't find the problem until I searched my logs for errors by pure chance. To fix this is easy enough once you know what the problem is by switching the Application Pool to Enable 32-bit Applications: Once this is done the COM objects started working correctly again. 64 bit ASP and ASP.NET with DCOM Servers This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit application pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well as the DCOM interface only makes one non-chatty call to the backend server that handles all the rest of the request processing. Application Pool Isolation is your Friend For me the recent incident of failure in the classic ASP pages has just been another reminder to be very careful with moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32bit instead of 64bit) which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically 3 different entries for the same ISAPI filter all with different bitness settings). It took me almost a full day to track this down). Recently I've been taken to isolate individual applications into separate Application Pools rather than my past practice of combining many apps into shared AppPools. This is a good practice assuming you have enough memory to make this work. Application Pool isolate provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget. To 64bit or Not Is it worth it to move to 64 bit? Currently I'd say -not really. In my - admittedly limited - testing I don't see any significant performance increases. 
In fact 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average) and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on benefits of 64 bit operation. © Rick Strahl, West Wind Technologies, 2005-2011Posted in COM   ASP.NET  FoxPro  

    Read the article

  • How to configure hibernate-tools with maven to generate hibernate.cfg.xml, *.hbm.xml, POJOs and DAOs

    - by mmm
Hi, can anyone tell me how to force maven to prefix the mapping .hbm.xml file names in the automatically generated hibernate.cfg.xml with the package path? My general idea is, I'd like to use hibernate-tools via maven to generate the persistence layer for my application. So, I need the hibernate.cfg.xml, then all my_table_names.hbm.xml and at the end the POJOs generated. Yet, the hbm2java goal won't work as I put *.hbm.xml files into the src/main/resources/package/path/ folder but hbm2cfgxml specifies the mapping files only by table name, i.e.: <mapping resource="MyTableName.hbm.xml" /> So the big question is: how can I configure hbm2cfgxml so that hibernate.cfg.xml looks like below: <mapping resource="package/path/MyTableName.hbm.xml" /> My pom.xml looks like this at the moment: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>hibernate3-maven-plugin</artifactId> <version>2.2</version> <executions> <execution> <id>hbm2cfgxml</id> <phase>generate-sources</phase> <goals> <goal>hbm2cfgxml</goal> </goals> <inherited>false</inherited> <configuration> <components> <component> <name>hbm2cfgxml</name> <implementation>jdbcconfiguration</implementation> <outputDirectory>src/main/resources/</outputDirectory> </component> </components> <componentProperties> <packageName>package.path</packageName> <configurationFile>src/main/resources/hibernate.cfg.xml</configurationFile> </componentProperties> </configuration> </execution> </executions> </plugin> And then the second question: is there a way to tell maven to copy resources to the target folder before executing hbm2java? At the moment I'm using mvn clean resources:resources generate-sources for that, but there must be a better way. Thanks for any help. Update: @Pascal: Thank you for your help. The path to mappings works fine now, I don't know what was wrong before, though. Maybe there is some issue with writing to hibernate.cfg.xml while reading database config from it (though the file gets updated). I've deleted the file hibernate.cfg.xml, replaced it with database.properties and run the goals hbm2cfgxml and hbm2hbmxml. I also don't use the outputDirectory or configurationFile in those goals anymore. As a result the files hibernate.cfg.xml and all *.hbm.xml are being generated into my target/hibernate3/generated-mappings/ folder, which is the default value. Then I updated the hbm2java goal with the following: <componentProperties> <packagename>package.name</packagename> <configurationfile>target/hibernate3/generated-mappings/hibernate.cfg.xml</configurationfile> </componentProperties> But then I get the following: [INFO] --- hibernate3-maven-plugin:2.2:hbm2java (hbm2java) @ project.persistence --- [INFO] using configuration task.
[INFO] Configuration XML file loaded: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml 12:15:17,484 INFO org.hibernate.cfg.Configuration - configuring from url: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml 12:15:19,046 INFO org.hibernate.cfg.Configuration - Reading mappings from resource : package.name/Messages.hbm.xml [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java (hbm2java) on project project.persistence: Execution hbm2java of goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java failed: resource: package/name/Messages.hbm.xml not found How do I deal with that? Of course I could add: <outputDirectory>src/main/resources/package/name</outputDirectory> to the hbm2hbmxml goal, but I think this is not the best approach, or is it? Is there a way to keep all the generated code and resources away from the src/ folder? I assume, the goal of this approach is not to generate any sources into my src/main/java or /resources folder, but to keep the generated code in the target folder. As I generally agree with this point of view, I'd like to continue with that eventually executing hbm2dao and packaging the project to be used as a generated persistence layer component from the business layer. Is this also what you meant?

    Read the article

  • AD Password About to Expire check problem with ASP.Net

    - by Vince
Hello everyone, I am trying to write some code to check the AD password age during a user login and notify them during the last 15 days before it expires. I am using the ASP.Net code that I found on the Microsoft MSDN site and I managed to add a function that checks if the account is set to change the password at next login. The login and the change password at next login work great but I am having some problems with the check for the password age. This is the VB.Net code for the DLL file: Imports System Imports System.Text Imports System.Collections Imports System.DirectoryServices Imports System.DirectoryServices.AccountManagement Imports System.Reflection 'Needed by the Password Expiration Class Only -Vince Namespace FormsAuth Public Class LdapAuthentication Dim _path As String Dim _filterAttribute As String 'Code added for the password expiration added by Vince Private _domain As DirectoryEntry Private _passwordAge As TimeSpan = TimeSpan.MinValue Const UF_DONT_EXPIRE_PASSWD As Integer = &H10000 'Function added by Vince Public Sub New() Dim root As New DirectoryEntry("LDAP://rootDSE") root.AuthenticationType = AuthenticationTypes.Secure _domain = New DirectoryEntry("LDAP://" & root.Properties("defaultNamingContext")(0).ToString()) _domain.AuthenticationType = AuthenticationTypes.Secure End Sub 'Function added by Vince Public ReadOnly Property PasswordAge() As TimeSpan Get If _passwordAge = TimeSpan.MinValue Then Dim ldate As Long = LongFromLargeInteger(_domain.Properties("maxPwdAge")(0)) _passwordAge = TimeSpan.FromTicks(ldate) End If Return _passwordAge End Get End Property Public Sub New(ByVal path As String) _path = path End Sub 'Function added by Vince Public Function DoesUserHaveToChangePassword(ByVal userName As String) As Boolean Dim ctx As PrincipalContext = New PrincipalContext(System.DirectoryServices.AccountManagement.ContextType.Domain) Dim up = UserPrincipal.FindByIdentity(ctx, userName) Return (Not up.LastPasswordSet.HasValue) 'returns true if last password set has no value. End Function Public Function IsAuthenticated(ByVal domain As String, ByVal username As String, ByVal pwd As String) As Boolean Dim domainAndUsername As String = domain & "\" & username Dim entry As DirectoryEntry = New DirectoryEntry(_path, domainAndUsername, pwd) Try 'Bind to the native AdsObject to force authentication. Dim obj As Object = entry.NativeObject Dim search As DirectorySearcher = New DirectorySearcher(entry) search.Filter = "(SAMAccountName=" & username & ")" search.PropertiesToLoad.Add("cn") Dim result As SearchResult = search.FindOne() If (result Is Nothing) Then Return False End If 'Update the new path to the user in the directory. _path = result.Path _filterAttribute = CType(result.Properties("cn")(0), String) Catch ex As Exception Throw New Exception("Error authenticating user.
" & ex.Message) End Try Return True End Function Public Function GetGroups() As String Dim search As DirectorySearcher = New DirectorySearcher(_path) search.Filter = "(cn=" & _filterAttribute & ")" search.PropertiesToLoad.Add("memberOf") Dim groupNames As StringBuilder = New StringBuilder() Try Dim result As SearchResult = search.FindOne() Dim propertyCount As Integer = result.Properties("memberOf").Count Dim dn As String Dim equalsIndex, commaIndex Dim propertyCounter As Integer For propertyCounter = 0 To propertyCount - 1 dn = CType(result.Properties("memberOf")(propertyCounter), String) equalsIndex = dn.IndexOf("=", 1) commaIndex = dn.IndexOf(",", 1) If (equalsIndex = -1) Then Return Nothing End If groupNames.Append(dn.Substring((equalsIndex + 1), (commaIndex - equalsIndex) - 1)) groupNames.Append("|") Next Catch ex As Exception Throw New Exception("Error obtaining group names. " & ex.Message) End Try Return groupNames.ToString() End Function 'Function added by Vince Public Function WhenExpires(ByVal username As String) As TimeSpan Dim ds As New DirectorySearcher(_domain) ds.Filter = [String].Format("(&(objectClass=user)(objectCategory=person)(sAMAccountName={0}))", username) Dim sr As SearchResult = FindOne(ds) Dim user As DirectoryEntry = sr.GetDirectoryEntry() Dim flags As Integer = CInt(user.Properties("userAccountControl").Value) If Convert.ToBoolean(flags And UF_DONT_EXPIRE_PASSWD) Then 'password never expires Return TimeSpan.MaxValue End If 'get when they last set their password Dim pwdLastSet As DateTime = DateTime.FromFileTime(LongFromLargeInteger(user.Properties("pwdLastSet").Value)) ' return pwdLastSet.Add(PasswordAge).Subtract(DateTime.Now); If pwdLastSet.Subtract(PasswordAge).CompareTo(DateTime.Now) > 0 Then Return pwdLastSet.Subtract(PasswordAge).Subtract(DateTime.Now) Else Return TimeSpan.MinValue 'already expired End If End Function 'Function added by Vince Private Function LongFromLargeInteger(ByVal largeInteger As Object) As Long Dim type As System.Type = largeInteger.[GetType]() Dim highPart As Integer = CInt(type.InvokeMember("HighPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Dim lowPart As Integer = CInt(type.InvokeMember("LowPart", BindingFlags.GetProperty, Nothing, largeInteger, Nothing)) Return CLng(highPart) << 32 Or CUInt(lowPart) End Function 'Function added by Vince Private Function FindOne(ByVal searcher As DirectorySearcher) As SearchResult Dim sr As SearchResult = Nothing Dim src As SearchResultCollection = searcher.FindAll() If src.Count > 0 Then sr = src(0) End If src.Dispose() Return sr End Function End Class End Namespace And this is the Login.aspx page: sub Login_Click(sender as object,e as EventArgs) Dim adPath As String = "LDAP://DC=xxx,DC=com" 'Path to your LDAP directory server Dim adAuth As LdapAuthentication = New LdapAuthentication(adPath) Try If (True = adAuth.DoesUserHaveToChangePassword(txtUsername.Text)) Then Response.Redirect("passchange.htm") ElseIf (True = adAuth.IsAuthenticated(txtDomain.Text, txtUsername.Text, txtPassword.Text)) Then Dim groups As String = adAuth.GetGroups() 'Create the ticket, and add the groups. Dim isCookiePersistent As Boolean = chkPersist.Checked Dim authTicket As FormsAuthenticationTicket = New FormsAuthenticationTicket(1, _ txtUsername.Text, DateTime.Now, DateTime.Now.AddMinutes(60), isCookiePersistent, groups) 'Encrypt the ticket. Dim encryptedTicket As String = FormsAuthentication.Encrypt(authTicket) 'Create a cookie, and then add the encrypted ticket to the cookie as data. 
Dim authCookie As HttpCookie = New HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket) If (isCookiePersistent = True) Then authCookie.Expires = authTicket.Expiration End If 'Add the cookie to the outgoing cookies collection. Response.Cookies.Add(authCookie) 'Retrieve the password life Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) 'You can redirect now. If (passAge.Days = 90) Then errorLabel.Text = "Your password will expire in " & DateTime.Now.Subtract(t) 'errorLabel.Text = "This is" 'System.Threading.Thread.Sleep(5000) Response.Redirect("http://somepage.aspx") Else Response.Redirect(FormsAuthentication.GetRedirectUrl(txtUsername.Text, False)) End If Else errorLabel.Text = "Authentication did not succeed. Check user name and password." End If Catch ex As Exception errorLabel.Text = "Error authenticating. " & ex.Message End Try End Sub ` Every time I have this Dim t As TimeSpan = adAuth.WhenExpires(txtUsername.Text) enabled, I receive "Arithmetic operation resulted in an overflow." during the login and won't continue. What am I doing wrong? How can I correct this? Please help!! Thank you very much for any help in advance. Vince

    Read the article

  • Easily use google maps, openstreet maps etc offline.

    - by samkea
I did it and I am going to explain step by step. The explanation may appear long but it's simple if you follow. Note: All the software I have used is the latest and I have packaged it and provided it in the link below. I use a Nokia N96. 1) RootSign smartComGPS and install it on your phone (I haven't provided the signer so that you would do a little work; I used Secman's rootsign). 2) Install Universal Maps Downloader, SmartCom OGF2 converter and OziExplorer 3.95.4s on your PC. a) UMD is used to download map tiles from any map source like googlemaps, openstreetmaps etc... and also combine the tiles into an image file like png, jpg, bmp etc... b) SmartCom OGF2 converter is used to convert the image file into a format usable on your mobile phone. c) OziExplorer will help you to calibrate the usable map file so that it can be used with GPS on your mobile phone without the use of internet. 3) Go to google maps or wherever you pick your maps and pan to the area of your interest. Zoom the map to at least zoom level 15 or 16 where you can see your area clearly and the streets. 4) Copy this script into a notepad file and save it on your desktop: javascript:void(prompt('',gApplication.getMap().getCenter())); 5) Open the Universal Maps Downloader. You will notice that you are required to add the: left longitude, right longitude, top latitude, bottom latitude. 6) On your map in google maps, double-click on your preferred middle point. You will notice that the map will center on that area. 7) Copy the script and paste it into the address bar, then press enter. You will notice that a dialog pops up with your (top) latitude and longitude respectively. 8) Copy the top latitude ONLY and paste it in the corresponding textbox in the UMD. 9) Repeat steps 6-7 for the bottom latitude. 10) Repeat steps 6-7 for the left longitude and right longitude too, but you have to copy the longitudes here. (***BTW record these points in the text file as they may be needed later in calibration) 11) Set the zoom level to the same zoom level that you preferred in google maps. 12) Don't forget to choose a path to save your files, and under options set the proxy connection settings in UMD if you are using one. 13) Click on start and bingo! There you have your image tiles, and a file with the extension .umd will be saved in the same folder. 14) In the UMD, go to tools, click on MapViewer and choose the .umd file. You will now see your map in one piece....and you will smile! 15) Still in tools, click on map combiner. A dialog will pop up for you to choose the .umd file and to enter the IMAGE file name. You can use another extension for the image file like png, jpg etc... I usually use png. 16) Combine.....bingo! There you go! You have an IMAGE file for your map. *I SUGGEST THAT YOU CREATE A .BMP FILE and A .PNG FILE* 17) Close UMD and open SmartCom OGF2 converter. 18) Choose your .png image and create an ogf2 file. 19) Connect your phone to your PC in Mass Memory mode and transfer the file to the smartComGPS\Maps folder. 20) Now disconnect your phone and load smartComGPS. It will load the map and prompt you to add a calibration point. Go ahead and add one calibration point with dummy coordinates. You will notice that it will add another file with the extension .map in the smartComGPS\Maps folder. 21) Connect your phone and copy that file and paste it in your working folder on your PC. Delete that .map file from the phone too, because you are going to edit it from your PC and put it back.
22) Now open OziExplorer, go to File-->Load and Calibrate Map Image. 23) Choose the .bmp image and bingo! It will load with your map at the same zoom level. 24) Now you are going to calibrate. Use the MapView window and take the small box locator to all 4 corners of the map. You will notice that the map in the background moves to that area too. 25) On the right side, select the Point1 tab. Now you are in calibration mode. Now move the red box in MapView to the upper left corner to calibrate point1. 26) Out of MapView, go to the upper left corner of the background map and choose point (0,0) as your 1st calibration point. You will notice that these X,Y coordinates will be reflected in the Point1 image coordinates. 27) Now go back to the text file where you saved your coordinates and enter the top latitude and the left longitude in the corresponding places. 28) Repeat steps 25-27 for point2, point3, point4 and click on save. That's it, you have calibrated your image and you are about to finish. 29) Go to save and a dialog which prompts you to save a .map file will pop up. Do save the map file in your working folder. 30) Right click that .map file and edit the filename in the .map file to remove the PC's directory structure. E.g. change C:\OziExplorer\data\Kampala.bmp to Kampala.ogf2. 31) Save the .map file in the smartComGPS\Maps folder on your phone. 32) Now open smartComGPS on your phone and bingo! There is your map with GPS capability and at the same zoom level. 33) In smartComGPS options, choose connect and simulate. By now you should be smiling. Whoa! Hope I was of help. In case you get a problem, please inform me. Below is the link to the software. Regards. http://rapidshare.com/files/230296037/Utilities_Used.rar.html Ok, the Rapidshare files I posted are gone, so you will have to download as described in the solution. If you need more help, go here: http://www.dotsis.com/mobile_phone/sitemap/t-160491.html Some months later, someone else gave almost the same kind of solution here. http://www.dotsis.com/mobile_phone/sitemap/t-180123.html Note: the solutions were meant to help view maps on Symbian phones, but I think now they can even work for Windows Phones, iPhones and others, so read, extract what you want and use it. Hope it helps. Sam Kea

    Read the article

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 proje

    - by CodeSniper
I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template which was originally built for Visual Studio 2008.   The Problem Error Compiling transformation: Metadata file 'dotless.Core' could not be found In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS Template, this was a showstopper for making it work in Visual Studio 2010. On a side note: The T4CSS template is a sweet little wrapper to allow you to use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool.    If you haven't tried DotLessCSS yet, go check it out now!  In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer-flexibility as you evolve your CSS and UI. Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, it's about the T4 templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 Template Engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.” In VS2008, if you wanted to reference a custom assembly in your T4 Template (.tt file) you would simply right click on your project, choose Add Reference and select that assembly.  Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references: <#@ assembly name="dotless.Core.dll" #> This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010.  They now basically sandbox the T4 Engine to keep your T4 assemblies separate from your project assemblies.  This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project. Who broke the build?  Oh, Microsoft Did! In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus it's a breaking change.  So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 Templates: 1. GAC your assemblies and use Namespace Reference or Fully Qualified Type Name 2. Use a hard-coded Fully Qualified UNC path 3. Copy assembly to Visual Studio "Public Assemblies Folder" and use Namespace Reference or Fully Qualified Type Name. 4. Use or Define a Windows Environment Variable to build a Fully Qualified UNC path. 5. Use a Visual Studio Macro to build a Fully Qualified UNC path. Options #1 & #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you would have to adopt one of these approaches. Yakkety Yak, use the GAC! Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain.  But, if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as: <#@ assembly name="dotless.Core" #> Hard Coding ain't that hard!
    The other option, using hard-coded paths as in Option #2, is pretty impractical in most situations, since each developer would have to use the same local project folder paths or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style:

    <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

    Let's go Public!

    Option #3, the Visual Studio Public Assemblies folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like DotLessCss and is my preferred solution. However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location. Normally this is located at: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies

    Once you have copied your assembly there, you use the type name or namespace syntax again:

    <#@ assembly name="dotless.Core" #>

    Save the Environment!

    Option #4, using a Windows environment variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:

    <#@ assembly name="%mypath%\dotless.Core.dll" #>

    "mypath" is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an MSI installer for deployment, or where you have a pre-existing environment variable you can re-use.

    OMG Macros!

    Finally, Option #5 is a very nice option if you want to keep your T4 template's assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments. An example looks like this:

    <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>

    In this example, I'm using the "SolutionDir" VS macro so I can reference an assembly in a "/lib" folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating Pre/Post-build Event scripts, you can use that dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I'm not sure if T4 templates support any form of compiler switches like "#if (VS2010)" statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS2008.

    Conclusion

    As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our newfound power. Hopefully this all made sense and was helpful to you. If nothing else, I'll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.
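    As a quick postscript, to make Option #5 concrete, here is a minimal sketch of what the directive header of a ported template might look like. The lib folder location, the output extension, and the dotless.Core namespace import are illustrative assumptions, not lines taken from the actual T4CSS source:

    <#@ template language="C#" hostspecific="true" #>
    <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
    <#@ import namespace="dotless.Core" #>
    <#@ output extension=".css" #>

    With a header along these lines, the template body can call into the dotless engine just as it did under VS2008; only the assembly resolution mechanism changes.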
Happy T4 templating, and “May the fourth be with you!”

    Read the article

  • Multi-tenant ASP.NET MVC - Views

    - by zowens
    Part I – Introduction Part II – Foundation Part III – Controllers

    So far we have covered the basic premise of tenants and how they will be delegated. Now comes a big issue with multi-tenancy: the views. In some applications, you will not have to override views for each tenant. However, one of my requirements is to add extra views (and controller actions) along with overriding views from the core structure. This presents a bit of a problem in locating views for each tenant request. I have chosen quite an opinionated approach at present, but I will be coming back to the "views" issue in a later post.

    What's the deal?

    The path I've chosen is to use precompiled Spark views. I really love Spark View Engine and was planning on using it in my project anyway. However, I ran across a really neat aspect of the source when I was having a look under the hood. There's an easy way to hook in embedded views from your project. There are solutions that provide this, but they implement a special Virtual Path Provider. While I think this is a great solution, I would rather just have Spark take care of the view resolution. The magic actually happens during the compilation of the views into a bin-deployable DLL. After the views are compiled, they are simply pulled out of the views DLL. Each tenant has its own views DLL that just has ".Views" appended after the assembly name as a convention.

    The list of reasons for this approach is quite long. The primary motivation is performance. I've had quite a few performance issues in the past and I would like to increase my application's performance in any way that I can. My customized build of Spark removes insignificant whitespace from the HTML output, so I can save some bandwidth and load time without having to deal with whitespace removal at runtime.

    How to set up Tenants for the Host

    In the source, I've provided a single tenant as a sample (Sample1). This will serve as a template for subsequent tenants in your application. The first step is to add a "PostBuildStep" installer into the project. I've defined one in the source that will eventually change as we focus more on the construction of dependency containers. The next step is to tell the project to run the installer and copy the DLL output to a folder in the host that will be picked up as a tenant. Here's the code that will achieve it (this belongs in the Post-build event command line field in the Build Events tab of settings):

    %systemroot%\Microsoft.NET\Framework\v4.0.30319\installutil "$(TargetPath)"
    copy /Y "$(TargetDir)$(TargetName)*.dll" "$(SolutionDir)Web\Tenants\"
    copy /Y "$(TargetDir)$(TargetName)*.pdb" "$(SolutionDir)Web\Tenants\"

    The DLLs with a name starting with the target assembly name will be copied to the "Tenants" folder in the web project. This means something like MultiTenancy.Tenants.Sample1.dll and MultiTenancy.Tenants.Sample1.Views.dll will both be copied, along with the debug symbols. This is probably the simplest way to go about this, but it is a tad inflexible. For example, what if you have dependencies? The preferred method would probably be to use ILMerge to merge your dependencies with your target DLL (see the sketch at the end of this post). This would have to be added in the build events. Another way to achieve that would be to simply bypass Visual Studio events and use MSBuild. I also got a question about how I was setting up the controller factory.
    Here are the basics of how I'm setting up tenants inside the host (Global.asax):

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);

        // create a container just to pull in tenants
        var topContainer = new Container();
        topContainer.Configure(config =>
        {
            config.Scan(scanner =>
            {
                scanner.AssembliesFromPath(Path.Combine(Server.MapPath("~/"), "Tenants"));
                scanner.AddAllTypesOf<IApplicationTenant>();
            });
        });

        // create selectors
        var tenantSelector = new DefaultTenantSelector(topContainer.GetAllInstances<IApplicationTenant>());
        var containerSelector = new TenantContainerResolver(tenantSelector);

        // clear view engines, we don't want anything other than spark
        ViewEngines.Engines.Clear();

        // set view engine
        ViewEngines.Engines.Add(new TenantViewEngine(tenantSelector));

        // set controller factory
        ControllerBuilder.Current.SetControllerFactory(new ContainerControllerFactory(containerSelector));
    }

    The code to set up the tenants isn't actually that hard. I'm utilizing assembly scanners in StructureMap as a simple way to pull in DLLs that are not in the AppDomain. Remember that there is a dependency on the host in the tenants, and a tenant cannot simply be referenced by a host because of circular dependencies.

    Tenant View Engine

    TenantViewEngine is a simple delegator to the tenant's specified view engine. You might have noticed that a tenant has to define a view engine:

    public interface IApplicationTenant
    {
        ....
        IViewEngine ViewEngine { get; }
    }

    The trick comes in specifying the view engine on the tenant side. Here's some of the code that will pull views from the DLL:

    protected virtual IViewEngine DetermineViewEngine()
    {
        var factory = new SparkViewFactory();
        var file = GetType().Assembly.CodeBase.Without("file:///").Replace(".dll", ".Views.dll").Replace('/', '\\');
        var assembly = Assembly.LoadFile(file);
        factory.Engine.LoadBatchCompilation(assembly);
        return factory;
    }

    This code resides in an abstract Tenant where the fields are set up in the constructor. This method (inside the abstract class) will load the Views assembly and load the compilation into Spark's "Descriptors" that will be used to determine views. There is some trickery in determining the file location... but it works just fine.

    Up Next

    There are just a few big things left, such as having StructureMap configure controllers with a convention instead of specifying types directly during container construction, and content resolution. I will also try to find a way to use the Web Forms View Engine in the multi-tenant way we achieved with the Spark View Engine, without using a virtual path provider. I will probably not use the Web Forms View Engine personally, but I'm sure some people would prefer WebForms because of the maturity of the engine. As always, I love to take questions by email or on twitter. Suggestions are always welcome as well! (Oh, and here's another link to the source code.)
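    As a footnote to the dependency question above, merging a tenant's third-party dependencies into its DLL could be handled with one more post-build line. This is only a sketch — the ILMerge install path and the dependency name are assumptions for illustration, and the flags would need checking against your ILMerge version:

    "%ProgramFiles(x86)%\Microsoft\ILMerge\ILMerge.exe" /targetplatform:v4 /out:"$(TargetDir)Merged\$(TargetName).dll" "$(TargetPath)" "$(TargetDir)SomeDependency.dll"
    copy /Y "$(TargetDir)Merged\$(TargetName).dll" "$(SolutionDir)Web\Tenants\"

    The upside is that the host's Tenants folder keeps exactly one assembly (plus the .Views assembly) per tenant; the downside is a slower build and the usual ILMerge caveats around strong naming.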

    Read the article

  • SQL – Migrate Database from SQL Server to NuoDB – A Quick Tutorial

    - by Pinal Dave
    Data is growing exponentially, and every organization with growing data is thinking about the next big innovation in the world of Big Data. Big data is indeed the future for every organization at some point in time. Just like every other next big thing, big data has its own challenges and issues. The biggest challenge associated with big data is to find the ideal platform which supports the scalability and growth of the data. If you are a regular reader of this blog, you must be familiar with NuoDB. I have been working with NuoDB for a while, and their recent release is the best thus far.

    NuoDB is an elastically scalable SQL database that can run on local host, datacenter and cloud-based resources. A key feature of the product is that it does not require sharding (read more here). Last week, I was able to install NuoDB in less than 90 seconds and explored their Explorer and Admin sections. You can read about my experiences in these posts:

    SQL – Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB
    SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database
    SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    Many SQL Authority readers have been following me in my journey to evaluate NuoDB. One of the questions I've received from you most frequently is whether there is any way to migrate data from SQL Server to NuoDB. The fact is that there is indeed a way to do so, and NuoDB provides a fantastic tool which can help users do it. NuoDB Migrator is a command line utility that supports the migration of Microsoft SQL Server, MySQL, Oracle, and PostgreSQL schemas and data to NuoDB. The migration to NuoDB is a three-step process: NuoDB Migrator generates a schema for a target NuoDB database, dumps data from the source database, and loads that data into the target NuoDB database.

    Let's see how we can migrate our data from SQL Server to NuoDB using this simple three-step approach. But before we do that, we will create a sample database in MSSQL; later we will migrate that same database to NuoDB.

    Setup Step 1: Build sample data

    CREATE DATABASE [Test];
    CREATE TABLE [Department](
    [DepartmentID] [smallint] NOT NULL,
    [Name] VARCHAR(100) NOT NULL,
    [GroupName] VARCHAR(100) NOT NULL,
    [ModifiedDate] [datetime] NOT NULL,
    CONSTRAINT [PK_Department_DepartmentID] PRIMARY KEY CLUSTERED
    ( [DepartmentID] ASC )
    ) ON [PRIMARY];
    INSERT INTO Department
    SELECT * FROM AdventureWorks2012.HumanResources.Department;

    Note that I am using the SQL Server AdventureWorks database to build this sample table, but you can build this sample table any way you prefer.

    Setup Step 2: Install 64-bit Java

    Before you can begin the migration process to NuoDB, make sure you have 64-bit Java installed on your computer, because the NuoDB Migrator tool is built in Java. You can download 64-bit Java for Windows, Mac OSX, or Linux from the following link: http://java.com/en/download/manual.jsp. One more thing to remember: make sure that the JAVA_HOME environment variable points to your Java installation directory, or else the tool will not work. Here is how you can set it: Go to My Computer >> Right Click >> Select Properties >> Click on Advanced System Settings >> Click on Environment Variables >> Click on New and enter the following values.

    Variable Name: JAVA_HOME
    Variable Value: C:\Program Files\Java\jre7

    Make sure you enter your own Java installation directory in the Variable Value field.

    Setup Step 3: Install a JDBC driver for SQL Server.
    There are two JDBC drivers available for SQL Server. Select the one you prefer to use by following one of the two links below:

    Microsoft JDBC Driver
    jTDS JDBC Driver

    In this example we will be using the jTDS JDBC driver. Once you download the driver, move it to your NuoDB installation folder. In my case, I have moved the JAR file of the driver into the C:\Program Files\NuoDB\tools\migrator\jar folder, as this is my NuoDB installation directory. Now we are all set to start the three-step migration process from SQL Server to NuoDB.

    Migration Step 1: NuoDB Schema Generation

    Here is the command I use to generate a schema of my SQL Server database in NuoDB. First I go to the folder C:\Program Files\NuoDB\tools\migrator\bin and execute the nuodb-migrator.bat file. Note that my database name is 'test'. Additionally, my username and password are also 'test'. You can see that my SQL Server database is running on localhost on port 1433. Additionally, the schema of the table is 'dbo'.

    nuodb-migrator schema --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.path=/tmp/schema.sql

    The above command will generate a schema for all my SQL Server tables and put it in the file C:\tmp\schema.sql. You can open the schema.sql file and execute it directly in your NuoDB instance. You can follow the link here to see how you can execute a SQL script in NuoDB. Please note that if you have not yet created the schema in the NuoDB database, you should create it before executing this step.

    Migration Step 2: Generate the Dump File of the Data

    Once you have recreated your schema in NuoDB from SQL Server, the next step is very easy. Here we create a CSV-format dump file, which will contain all the data from all the tables of the SQL Server database. The command to do so is very similar to the above command. Be aware that this step may take a bit of time, depending on your database size.

    nuodb-migrator dump --source.driver=net.sourceforge.jtds.jdbc.Driver --source.url=jdbc:jtds:sqlserver://localhost:1433/ --source.username=test --source.password=test --source.catalog=test --source.schema=dbo --output.type=csv --output.path=/tmp/dump.cat

    Once the above command is successfully executed you can find your CSV file in the C:\tmp\ folder. However, you do not have to do anything with it manually. The third and final step will take care of completing the migration process.

    Migration Step 3: Load the Data into NuoDB

    After building the schema and taking a dump of the data, the final step is essential and crucial: it takes the CSV file and loads it into the NuoDB database.

    nuodb-migrator load --target.url=jdbc:com.nuodb://localhost:48004/mytest --target.schema=dbo --target.username=test --target.password=test --input.path=/tmp/dump.cat

    Please note that in the above command we are now targeting the NuoDB database, which we have already created with the name "mytest". If the database does not exist, create it manually before executing the above command. I have kept the username and password as "test", but please make sure that you create a more secure password for your database for security reasons.

    Voila! You're Done

    That's it. You are done. It took 3 setup and 3 migration steps to migrate your SQL Server database to NuoDB. You can now start exploring the database and build excellent, scale-out applications.
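    Once the load finishes, a quick sanity check is to compare row counts on both sides. This is just a suggested check based on the sample table above — adjust the table and schema names for your own database:

    -- against the source SQL Server database
    SELECT COUNT(*) FROM [Test].dbo.Department;

    -- against the target NuoDB database (e.g. from the NuoDB SQL client)
    SELECT COUNT(*) FROM dbo.Department;

    If the counts match for every migrated table, you can be reasonably confident the dump and load steps completed cleanly.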
    In this blog post, I have done my best to come up with a simple and easy process which you can follow to migrate your app from SQL Server to NuoDB.

    Download NuoDB

    I strongly encourage you to download NuoDB and go through my 3-step migration tutorial from SQL Server to NuoDB. Additionally, here are two very important blog posts from NuoDB CTO Seth Proctor. He has written excellent posts on the concept of Administrative Domains. NuoDB has this concept of an Administrative Domain, which is a collection of hosts that can run one or multiple databases. Each database has its own TEs and SMs, but all are managed within the Admin Console for that particular domain.

    http://www.nuodb.com/techblog/2013/03/11/getting-started-provisioning-a-domain/
    http://www.nuodb.com/techblog/2013/03/14/getting-started-running-a-database/

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • The DOS DEBUG Environment

    - by MarkPearl
    Today I thought I would go back in time and have a look at the DEBUG command that has been available since the dawn of DOS, MS-DOS and Microsoft Windows. Up to today I always knew it was there, but had no clue how to use it, so for those who are interested this might be a great geek party trick to pull out when you want to awe the younger generation and show them what "real" programming is about. But wait, you will have to do it relatively quickly, as it seems DEBUG was finally dropped from Windows in Windows 7. Not to worry: pull out that Windows XP box, which will get you even more geek points, and you can still poke DEBUG a bit. For those who are interested and want to find out a bit about the history of DEBUG, read the wiki link here. That all put aside, let's get our hands dirty...

    How to Start DEBUG in Windows

    Make sure your version of Windows supports DEBUG.
    Open up a console window.
    Make a directory where you want to play with DEBUG - in my instance I called it C221.
    Enter the directory and type debug.
    You will get a response with a - (hyphen prompt) as illustrated in the image below...

    The commands available in DEBUG

    There are several commands available in DEBUG. The most common ones are A (Assemble), R (Register), T (Trace), G (Go), D (Dump or Display), U (Unassemble), E (Enter), P (Proceed), N (Name), L (Load), W (Write), H (Hexadecimal), I (Input), O (Output) and Q (Quit). I am not going to cover all these commands, but what I will do is go through a few of them briefly.

    A is for Assemble Command (to write code)

    The A command translates assembly language statements into machine code. It is quite useful for writing small assembly programs. Below I have written a very basic assembly program. The code typed out is as follows:

    mov ax,0015
    mov cx,0023
    sub cx,ax
    mov [120],al
    mov cl,[120]
    nop

    R is for Register (to jump to a point in memory)

    The r command turns out to be one of the most frequently used commands in DEBUG. It allows you to view the contents of registers and to change their values. It can be used in the following combinations:

    R - displays the contents of all the registers
    R f - displays the flags register
    R register_name - displays the contents of a specific register

    All three methods are illustrated in the image above.

    T is for Trace (to execute a program step by step)

    The t command allows us to execute the program step by step. Before we can trace the program we need to point back to the beginning of the program. We do this by typing in r ip, which moves us back to memory point 100. We then type t, which executes the first line of code (line 100) (as shown in the image below, starting from the red arrow). You can see from the above image that the register AX now contains 0015, as per our instruction mov ax,0015. You can also see that IP points to line 0103, which holds the MOV CX,0023 command. If we type t again it will now execute the second line of the program, which moves 23 into the cx register. Again, we can see that the line of code was executed and that the CX register now holds the value of 23. What I would like to highlight now is the section underlined in red. These are the status flags.
    The ones we are going to look at now are the 1st (NV), 4th (PL), 5th (NZ) and 8th (NC):

    NV means No oVerflow; the alternative would be OV.
    PL means that the sign of the previous arithmetic operation was PLus; the alternative would be NG (Negative).
    NZ means that the result of the previous arithmetic operation was Not Zero; the alternative would be ZR.
    NC means that No final Carry resulted from the previous arithmetic operation; CY means that there was a final Carry.

    We could now follow this process of entering the t command until the entire program is executed line by line.

    G is for Go (to execute a program up to a certain line number)

    So we have looked at executing a program line by line, which is fine if your program is minuscule, but totally impractical for any decent-sized program. A quicker way to run some lines of code is to use the G command. The g command executes a program up to a certain specified point. It can be used in connection with the reset-IP command: you would set your initial point and then run the G command with the line you want to end on.

    P is for Proceed (similar to trace but slightly more streamlined)

    Another command similar to trace is the proceed command. The p command behaves like t, except that when it encounters a CALL, INT or LOOP instruction it executes the whole routine as a single step rather than tracing into it. In the example below I modified our example program to include an int 20 at the end of it, as illustrated in the image below... Then, while executing the code, when I encountered the int 20 command I typed the P command and the program terminated normally (illustrated below).

    D is for Dump (or, for those more polite, Display)

    So, we have all these assembly lines of code, but if you have ever opened up an exe or com file in a text/hex editor, it looks nothing like assembly code. The D command is a way that we can see what our code looks like in memory (or in a hex editor). If we examine the image above, we can see that DEBUG is storing our assembly code with each instruction following immediately after the previous one. For instance, at memory address 110 we have int and at 111 we have 20. If we examine the dump of memory we can see that at memory point 110 CD is stored and at memory point 111 20 is stored.

    U is for Unassemble (or convert machine code to assembly code)

    So up to now we have gone through a bunch of commands, but probably one of the most useful is the U command. Let's say we don't understand machine code so well, and instead we want to see its equivalent assembly code. We can type the U command followed by the start memory point, followed by the end memory point, and it will show us the assembly code equivalent of the machine code.

    E is for a bunch of things...

    The E command can be used for a bunch of things... One example is to enter data or machine code instructions directly into memory. It can also be used to display the contents of memory locations. I am not going to worry too much about it in this post.

    N / L / W is for Name, Load & Write

    So we have written our assembly code in DEBUG, and now we want to save it to disk as a com file, or load it again later. This is where the N, L & W commands come in handy. The n command is used to give a name to the executable program file and is pretty simple to use. The w command is a bit trickier: it writes to disk the number of bytes specified in the bx and cx register pair (bx holds the high-order part of the count, cx the low-order part), starting at memory point 100, so you need to set bx and cx appropriately before writing your code. Let's look at an example, illustrated below.
    You set these registers by calling the r command followed by either bx or cx. We can then go to the directory where we were working and see the new file with the name we specified. The L command is relatively simple: you first specify the name of the file you would like to load using the N command, and then call the L command.

    Q is for Quit

    The last command that I am going to write about in this post is the Q command. Simply put, calling the Q command exits DEBUG.

    Commands we did not Cover

    Of the standard DEBUG commands we covered A, T, G, D, U, E, P, R, N, L & W. The ones we did not cover were H, I & O - I might mention these in a later post, but for the basics they are not really needed.

    Some Useful Resources

    Please note this post is based on the COS2213 handouts for UNISA.
    A Guide to DEBUG - http://mirror.href.com/thestarman/asm/debug/debug.htm#NT
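    To tie the A, N, R and W commands together, here is a sketch of a complete session that assembles and saves a tiny .com program which prints the letter 'A' and exits. The exact addresses and register values DEBUG echoes back will vary, so treat this as an illustration rather than a verbatim transcript:

    -a 100
    mov ah,02
    mov dl,41
    int 21
    int 20
    (press Enter on a blank line to leave assemble mode)
    -n printa.com
    -r cx
    :8
    -r bx
    :0
    -w
    Writing 00008 bytes
    -q

    The program is 8 bytes long (four 2-byte instructions), so bx:cx is set to 0:8 before the write; running printa.com from the same directory should print a single 'A'.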

    Read the article

  • How to change a Matlab program for solving an equation with the finite element method?

    - by DSblizzard
    I don't know whether this question is more related to mathematics or programming, and I'm an absolute newbie in Matlab. The program FEM_50 applies the finite element method to Laplace's equation -Uxx(x, y) - Uyy(x, y) = F(x, y) in Omega. How do I change it to apply the FEM to the equation -Uxx(x, y) - Uyy(x, y) + U(x, y) = F(x, y)? Additional code files, in case you need them, are at this page: http://sc.fsu.edu/~burkardt/m_src/fem_50/fem_50.html

    function fem_50 ( )
    %% FEM_50 applies the finite element method to Laplace's equation.
    %
    % Discussion:
    %
    % FEM_50 is a set of MATLAB routines to apply the finite
    % element method to solving Laplace's equation in an arbitrary
    % region, using about 50 lines of MATLAB code.
    %
    % FEM_50 is partly a demonstration, to show how little it
    % takes to implement the finite element method (at least using
    % every possible MATLAB shortcut.) The user supplies datafiles
    % that specify the geometry of the region and its arrangement
    % into triangular and quadrilateral elements, and the location
    % and type of the boundary conditions, which can be any mixture
    % of Neumann and Dirichlet.
    %
    % The unknown state variable U(x,y) is assumed to satisfy
    % Laplace's equation:
    % -Uxx(x,y) - Uyy(x,y) = F(x,y) in Omega
    % with Dirichlet boundary conditions
    % U(x,y) = U_D(x,y) on Gamma_D
    % and Neumann boundary conditions on the outward normal derivative:
    % Un(x,y) = G(x,y) on Gamma_N
    % If Gamma designates the boundary of the region Omega,
    % then we presume that
    % Gamma = Gamma_D + Gamma_N
    % but the user is free to determine which boundary conditions to
    % apply. Note, however, that the problem will generally be singular
    % unless at least one Dirichlet boundary condition is specified.
    %
    % The code uses piecewise linear basis functions for triangular elements,
    % and piecewise isoparametric bilinear basis functions for quadrilateral
    % elements.
    %
    % The user is required to supply a number of data files and MATLAB
    % functions that specify the location of nodes, the grouping of nodes
    % into elements, the location and value of boundary conditions, and
    % the right hand side function in Laplace's equation. Note that the
    % fact that the geometry is completely up to the user means that
    % just about any two dimensional region can be handled, with arbitrary
    % shape, including holes and islands.
    %
    clear
    %
    % Read the nodal coordinate data file.
    %
    load coordinates.dat;
    %
    % Read the triangular element data file.
    %
    load elements3.dat;
    %
    % Read the quadrilateral element data file.
    %
    load elements4.dat;
    %
    % Read the Neumann boundary condition data file.
    % I THINK the purpose of the EVAL command is to create an empty NEUMANN array
    % if no Neumann file is found.
    %
    eval ( 'load neumann.dat;', 'neumann=[];' );
    %
    % Read the Dirichlet boundary condition data file.
    %
    load dirichlet.dat;

    A = sparse ( size(coordinates,1), size(coordinates,1) );
    b = sparse ( size(coordinates,1), 1 );
    %
    % Assembly.
    %
    for j = 1 : size(elements3,1)
      A(elements3(j,:),elements3(j,:)) = A(elements3(j,:),elements3(j,:)) ...
        + stima3(coordinates(elements3(j,:),:));
    end
    for j = 1 : size(elements4,1)
      A(elements4(j,:),elements4(j,:)) = A(elements4(j,:),elements4(j,:)) ...
        + stima4(coordinates(elements4(j,:),:));
    end
    %
    % Volume Forces.
    %
    for j = 1 : size(elements3,1)
      b(elements3(j,:)) = b(elements3(j,:)) ...
        + det( [1,1,1; coordinates(elements3(j,:),:)'] ) * ...
        f(sum(coordinates(elements3(j,:),:))/3)/6;
    end
    for j = 1 : size(elements4,1)
      b(elements4(j,:)) = b(elements4(j,:)) ...
        + det([1,1,1; coordinates(elements4(j,1:3),:)'] ) * ...
        f(sum(coordinates(elements4(j,:),:))/4)/4;
    end
    %
    % Neumann conditions.
    %
    if ( ~isempty(neumann) )
      for j = 1 : size(neumann,1)
        b(neumann(j,:)) = b(neumann(j,:)) + ...
          norm(coordinates(neumann(j,1),:) - coordinates(neumann(j,2),:)) * ...
          g(sum(coordinates(neumann(j,:),:))/2)/2;
      end
    end
    %
    % Determine which nodes are associated with Dirichlet conditions.
    % Assign the corresponding entries of U, and adjust the right hand side.
    %
    u = sparse ( size(coordinates,1), 1 );
    BoundNodes = unique ( dirichlet );
    u(BoundNodes) = u_d ( coordinates(BoundNodes,:) );
    b = b - A * u;
    %
    % Compute the solution by solving A * U = B for the remaining unknown values of U.
    %
    FreeNodes = setdiff ( 1:size(coordinates,1), BoundNodes );
    u(FreeNodes) = A(FreeNodes,FreeNodes) \ b(FreeNodes);
    %
    % Graphic representation.
    %
    show ( elements3, elements4, coordinates, full ( u ) );
    return
    end
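    Not an authoritative answer, but the standard finite element treatment of the extra +U(x, y) reaction term is to assemble a mass matrix M alongside the stiffness matrix A and then solve (A + M)u = b instead of Au = b. A sketch for the triangular elements only, using the well-known element mass matrix for linear (P1) basis functions; the quadrilateral elements would need the analogous bilinear element mass matrix:

    M = sparse ( size(coordinates,1), size(coordinates,1) );
    for j = 1 : size(elements3,1)
      % triangle area is det([1,1,1; coords'])/2, and the P1 element mass
      % matrix is area/12 * [2 1 1; 1 2 1; 1 1 2], hence the /24 factor
      M(elements3(j,:),elements3(j,:)) = M(elements3(j,:),elements3(j,:)) ...
        + det( [1,1,1; coordinates(elements3(j,:),:)'] ) / 24 * [2,1,1; 1,2,1; 1,1,2];
    end

    After assembling M, replace A with (A + M) in the Dirichlet adjustment and in the final solve, i.e. b = b - (A + M) * u and u(FreeNodes) = (A(FreeNodes,FreeNodes) + M(FreeNodes,FreeNodes)) \ b(FreeNodes).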

    Read the article

  • apache mod_rewrite rule in httpd.conf for modifying some paths, but not others

    - by wallyk
    I'm having quite a challenge creating an appropriate rewrite rule for Apache/2.2.14 on Fedora 10. I'm working through the CodeIgniter-Doctrine tutorial which uses an .htaccess file. (Search for Removing “index.php” from CodeIgniter urls about 10% down.) But since that's not recommended for a production server, I'm trying to tweak it to work in /etc/httpd/conf/httpd.conf. <VirtualHost *:80> ServerName ci_doctrine DocumentRoot /var/www/html/ci_doctrine ErrorLog /var/log/httpd/cid-error_log CustomLog /var/log/httpd/cid-access_log common <IfModule mod_rewrite.c> RewriteEngine on RewriteLog /var/log/httpd/cid_rewrite RewriteLogLevel 9 # RewriteCond ^/css/style.css$ (these have bad syntax, but that's beside the point) # RewriteRule ^/css/style.css$ /css/style.css [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ /index.php/$1 [L] </IfModule> <IfModule !mod_rewrite.c> ErrorDocument 404 /ci_doctrine/index.php </IfModule> </VirtualHost> It seems like the tutorial .htaccess rules properly test for existing files and then not alter the URL in such cases, but the rewrite log says that the conditions are true (that is, the file does not exist) even though it's there. 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) init rewrite engine with requested uri /login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (3) applying pattern '^(.*)$' to uri '/login' 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/login' pattern='!-f' => matched 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/login' pattern='!-d' => matched 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) rewrite '/login' -> '/index.php//login' 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) local path result: /index.php//login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (1) go-ahead with /var/www/html/ci_doctrine/index.php/login [OK] 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) init rewrite engine with requested uri /login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (3) applying pattern '^(.*)$' to uri '/login' 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/login' pattern='!-f' => matched 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/login' pattern='!-d' => matched 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) rewrite '/login' -> '/index.php//login' 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) local path result: /index.php//login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/login 127.0.0.1 - - [03/May/2010:23:26:56 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (1) go-ahead with /var/www/html/ci_doctrine/index.php/login [OK] 127.0.0.1 
- - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) init rewrite engine with requested uri /css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (3) applying pattern '^(.*)$' to uri '/css/style.css' 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/css/style.css' pattern='!-f' => matched 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (4) RewriteCond: input='/css/style.css' pattern='!-d' => matched 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) rewrite '/css/style.css' -> '/index.php//css/style.css' 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) local path result: /index.php//css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#167e8e0/initial] (1) go-ahead with /var/www/html/ci_doctrine/index.php/css/style.css [OK] 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) init rewrite engine with requested uri /css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (3) applying pattern '^(.*)$' to uri '/css/style.css' 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/css/style.css' pattern='!-f' => matched 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (4) RewriteCond: input='/css/style.css' pattern='!-d' => matched 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) rewrite '/css/style.css' -> '/index.php//css/style.css' 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) local path result: /index.php//css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (2) prefixed with document_root to /var/www/html/ci_doctrine/index.php/css/style.css 127.0.0.1 - - [03/May/2010:23:26:58 --0700] [ci_doctrine/sid#13c1868][rid#16848f8/subreq] (1) go-ahead with /var/www/html/ci_doctrine/index.php/css/style.css [OK] The file .../css/style.css was working properly before I started messing around with the rewrite rules, so it should be in the right place. But now the path is always munged up by the rewriting, though the virtual components below index.php are properly translated. What am I doing wrong?
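    For what it's worth, the usual explanation for exactly this log output is that in server or virtual-host context the rewrite rules run before Apache has mapped the URL to the filesystem, so %{REQUEST_FILENAME} still holds the bare URI ('/login', '/css/style.css') rather than a real path - which is why the !-f condition "matches" even though the file exists on disk. A commonly suggested fix (a sketch, using this vhost's document root) is to prepend the document root explicitly:

    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ /index.php/$1 [L]

    With that change, requests for files that really exist under /var/www/html/ci_doctrine should fail the conditions and be served directly, while virtual paths like /login still get routed through index.php.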

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc. -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc. -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though; this description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect didn't relate to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.
    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior.

    There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically (a sketch appears at the end of this post). In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more.

    To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code.

    Remarks:

    It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry.

    The scheduler is work-conserving.

    The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links.

    As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, as the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.

    Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however.
    That is, it's an intra-socket policy, not an inter-socket policy.

    Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.

    If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
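    For completeness, here is a sketch of what dialing that tunable down might look like in /etc/system. The specific value is an illustrative assumption rather than a recommendation, and the authoritative semantics live in mpo_update_tunables() and lgrp_choose() as noted above:

    * /etc/system -- lower the lgroup expansion threshold so a process's
    * threads spill onto other sockets sooner (reboot required)
    set lgrp_expand_proc_thresh = 0x10

    As with any /etc/system change, test on a non-production box first and verify the resulting placement empirically (e.g. with prstat or plgrp) rather than trusting the setting alone.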

    Read the article

  • Need some help on how to replay the last game of a Java maze game

    - by Marty
    Hello, I am working on creating a Java maze game for a project. The maze is displayed on the console as standard output, not in an applet. I have created most of the code I need; however, I am stuck on one problem: I need the user to be able to replay the last game, i.e. redraw the maze with the user's moves but without any input from the user. I am not sure what course of action to take. I was thinking about copying each of the user's moves, or the position of each move, into another array. As you can see, I have 2 variables which hold the position of the player, plyrX and plyrY - do you think copying these values into a new array after each move would solve my problem, and how would I go about it? I have updated my code; apologies about the TextIO.java class not being present, I am not sure how to resolve that except to post a link to TextIO.java [TextIO.java][1]. My code below is updated with a new array of type char to hold values from the original maze (read in from a text file and displayed using unicode characters) and also two new variables, c_plyrX and c_plyrY, which I am thinking should hold the values of plyrX and plyrY and copy them into the new array. When I try to call the replayGame(); method from the menu, the maze loads for a second and then the console exits, so I am not sure what I am doing wrong. Thanks. public class MazeGame { //unicode characters that will define the maze walls, //pathways, and in game characters. final static char WALL = '\u2588'; //wall final static char PATH = '\u2591'; //pathway final static char PLAYER = '\u25EF'; //player final static char ENTRANCE = 'E'; //entrance final static char EXIT = '\u2716'; //exit //declaring member variables which will hold the maze co-ordinates //X = rows, Y = columns static int entX = 0; //entrance X co-ordinate static int entY = 1; //entrance y co-ordinate static int plyrX = 0; static int plyrY = 1; static int exitX = 24; //exit X co-ordinate static int exitY = 37; //exit Y co-ordinate //static member variables which hold maze values //used so values can be accessed from different methods static int rows; //rows variable static int cols; //columns variable static char[][] maze; //defines 2 dimensional array to hold the maze //variables that hold player movement values static char dir; //direction static int spaces; //amount of spaces user can travel //variable to hold amount of moves the user has taken; static int movesTaken = 0; //new array to hold player moves for replaying game static char[][] mazeCopy; static int c_plyrX; static int c_plyrY; /** userMenu method for displaying the user menu which will provide various options for * the user to choose such as play a maze game, get instructions, etc. */ public static void userMenu(){ TextIO.putln("Maze Game"); TextIO.putln("*********"); TextIO.putln("Choose an option."); TextIO.putln(""); TextIO.putln("1. Play the Maze Game."); TextIO.putln("2. View Instructions."); TextIO.putln("3. Replay the last game."); TextIO.putln("4. 
Exit the Maze Game."); TextIO.putln(""); int option; //variable for holding users option TextIO.put("Type your choice: "); option = TextIO.getlnInt(); //gets users option //switch statement for processing menu options switch(option){ case 1: playMazeGame(); case 2: instructions(); case 3: if (c_plyrX == plyrX && c_plyrY == plyrY)replayGame(); else { TextIO.putln("Option not available yet, you need to play a game first."); TextIO.putln(); userMenu(); } case 4: System.exit(0); //exits the user out of the console default: TextIO.put("Option must be 1, 2, 3 or 4"); } } //end of userMenu /**main method, will call the userMenu and get the users choice and call * the relevant method to execute the users choice. */ public static void main(String[]args){ userMenu(); //calls the userMenu method } //end of main method /**instructions method, displays instructions on how to play * the game to the user/ */ public static void instructions(){ TextIO.putln("To beat the Maze Game you have to move your character"); TextIO.putln("through the maze and reach the exit in as few moves as possible."); TextIO.putln(""); TextIO.putln("Your characer is displayed as a " + PLAYER); TextIO.putln("The maze exit is displayed as a " + EXIT); TextIO.putln("Reach the exit and you have won escaped the maze."); TextIO.putln("To control your character type the direction you want to go"); TextIO.putln("and how many spaces you want to move"); TextIO.putln("for example 'D3' will move your character"); TextIO.putln("down 3 spaces."); TextIO.putln("Remember you can't walk through walls!"); boolean insOption; //boolean variable TextIO.putln(""); TextIO.put("Do you want to play the Maze Game now? (Y or N) "); insOption = TextIO.getlnBoolean(); if (insOption == true)playMazeGame(); else userMenu(); } //end of instructions method /**playMazeGame method, calls the loadMaze method and the charMove method * to start playing the Maze Game. */ public static void playMazeGame(){ loadMaze(); plyrMoves(); } //end of playMazeGame method /**loadMaze method, loads the 39x25 maze from the MazeGame.txt text file * and inserts values from the text file into the maze array and * displays the maze on screen using the unicode block characters. * plyrX and plyrY variables are set at their staring co ordinates so that when * a game is completed and the user selects to play a new game * the player character will always be at position 01. */ public static void loadMaze(){ plyrX = 0; plyrY = 1; TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions maze = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ maze[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. 
//loop to process the array values and display as unicode characters for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ if (i == plyrX && j == plyrY){ plyrX = i; plyrY = j; TextIO.put(PLAYER); //puts the player character at player co-ords } else{ if (maze[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block if (maze[i][j] == '1') TextIO.putf("%c",PATH); //puts path block if (maze[i][j] == '2') { entX = i; entY = j; TextIO.putf("%c",ENTRANCE); //puts entrance character } if (maze[i][j] == '3') { exitX = i; //holds value of exit exitY = j; //co-ordinates TextIO.putf("%c",EXIT); //puts exit character } } } TextIO.putln(); } //end for loop } //end of loadMaze method /**redrawMaze method, method for redrawing the maze after each move. * */ public static void redrawMaze(){ TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions maze = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ maze[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. //loop to process the array values and display as unicode characters for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ if (i == plyrX && j == plyrY){ plyrX = i; plyrY = j; TextIO.put(PLAYER); //puts the player character at player co-ords } else{ if (maze[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block if (maze[i][j] == '1') TextIO.putf("%c",PATH); //puts path block if (maze[i][j] == '2') { entX = i; entY = j; TextIO.putf("%c",ENTRANCE); //puts entrance character } if (maze[i][j] == '3') { exitX = i; //holds value of exit exitY = j; //co-ordinates TextIO.putf("%c",EXIT); //puts exit character } } } TextIO.putln(); } //end for loop } //end redrawMaze method /**replay game method * */ public static void replayGame(){ c_plyrX = plyrX; c_plyrY = plyrY; TextIO.readFile("MazeGame.txt"); //now reads from the external MazeGame.txt file rows = TextIO.getInt(); //gets the number of rows from text file to create X dimensions cols = TextIO.getlnInt(); //gets number of columns from text file to create Y dimensions mazeCopy = new char[rows][cols]; //creates maze array of base type char with specified dimnensions //loop to process the array and read in values from the text file. for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ mazeCopy[i][j] = TextIO.getChar(); } TextIO.getln(); } //end for loop TextIO.readStandardInput(); //closes MazeGame.txt file and reads from //standard input. 
//loop to process the array values and display as unicode characters for (int i = 0; i<rows; i++){ for (int j = 0; j<cols; j++){ if (i == c_plyrX && j == c_plyrY){ c_plyrX = i; c_plyrY = j; TextIO.put(PLAYER); //puts the player character at player co-ords } else{ if (mazeCopy[i][j] == '0') TextIO.putf("%c",WALL); //puts wall block if (mazeCopy[i][j] == '1') TextIO.putf("%c",PATH); //puts path block if (mazeCopy[i][j] == '2') { entX = i; entY = j; TextIO.putf("%c",ENTRANCE); //puts entrance character } if (mazeCopy[i][j] == '3') { exitX = i; //holds value of exit exitY = j; //co-ordinates TextIO.putf("%c",EXIT); //puts exit character } } } TextIO.putln(); } //end for loop } //end replayGame method /**plyrMoves method, method for moving the players character * around the maze. */ public static void plyrMoves(){ int nplyrX = plyrX; int nplyrY = plyrY; int pMoves; direction(); //UP if (dir == 'U' || dir == 'u'){ nplyrX = plyrX; nplyrY = plyrY; for(pMoves = 0; pMoves <= spaces; pMoves++){ if (maze[nplyrX][nplyrY] == '0'){ TextIO.putln("Invalid move, try again."); } else if (pMoves != spaces){ nplyrX =plyrX + 1; } else { plyrX = plyrX-spaces; c_plyrX = plyrX; movesTaken++; } } }//end UP if //DOWN if (dir == 'D' || dir == 'd'){ nplyrX = plyrX; nplyrY = plyrY; for (pMoves = 0; pMoves <= spaces; pMoves ++){ if (maze[nplyrX][nplyrY] == '0'){ TextIO.putln("Invalid move, try again"); } else if (pMoves != spaces){ nplyrX = plyrX+1; } else{ plyrX = plyrX+spaces; c_plyrX = plyrX; movesTaken++; } } } //end DOWN if //LEFT if (dir == 'L' || dir =='l'){ nplyrX = plyrX; nplyrY = plyrY; for (pMoves = 0; pMoves <= spaces; pMoves++){ if (maze[nplyrX][nplyrY] == '0'){ TextIO.putln("Invalid move, try again"); } else if (pMoves != spaces){ nplyrY = plyrY + 1; } else{ plyrY = plyrY-spaces; c_plyrY = plyrY; movesTaken++; } } } //end LEFT if //RIGHT if (dir == 'R' || dir == 'r'){ nplyrX = plyrX; nplyrY = plyrY; for (pMoves = 0; pMoves <= spaces; pMoves++){ if (maze[nplyrX][nplyrY] == '0'){ TextIO.putln("Invalid move, try again."); } else if (pMoves != spaces){ nplyrY += 1; } else{ plyrY = plyrY+spaces; c_plyrY = plyrY; movesTaken++; } } } //end RIGHT if //prints message if player escapes from the maze. if (maze[plyrX][plyrY] == '3'){ TextIO.putln("****Congratulations****"); TextIO.putln(); TextIO.putln("You have escaped from the maze."); TextIO.putln(); userMenu(); } else{ movesTaken++; redrawMaze(); plyrMoves(); } } //end of plyrMoves method /**direction, method * */ public static char direction(){ TextIO.putln("Enter the direction you wish to move in and the distance"); TextIO.putln("i.e D3 = move down 3 spaces"); TextIO.putln("U - Up, D - Down, L - Left, R - Right: "); dir = TextIO.getChar(); if (dir =='U' || dir == 'D' || dir == 'L' || dir == 'R' || dir == 'u' || dir == 'd' || dir == 'l' || dir == 'r'){ spacesMoved(); } else{ loadMaze(); TextIO.putln("Invalid direction!"); TextIO.put("Direction must be one of U, D, L or R"); direction(); } return dir; //returns the value of dir (direction) } //end direction method /**spaces method, gets the amount of spaces the user wants to move * */ public static int spacesMoved(){ TextIO.putln(" "); spaces = TextIO.getlnInt(); if (spaces <= 0){ loadMaze(); TextIO.put("Invalid amount of spaces, try again"); spacesMoved(); } return spaces; } //end spacesMoved method } //end of MazeGame class
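    Not a full solution, but one way to approach the replay requirement is to record every successful move in a list as the game is played, then iterate over that list to redraw the maze without any user input. A minimal sketch of the idea — a hypothetical helper, not drop-in code for the class above, assuming the existing plyrX/plyrY fields and redrawMaze() method:

    import java.util.ArrayList;
    import java.util.List;

    // record of player positions, appended after every successful move
    static List<int[]> moveHistory = new ArrayList<int[]>();

    // call this at the end of each successful move in plyrMoves()
    static void recordMove() {
        moveHistory.add(new int[] { plyrX, plyrY });
    }

    // replay by walking the recorded positions with a short pause
    static void replayMoves() {
        for (int[] pos : moveHistory) {
            plyrX = pos[0];
            plyrY = pos[1];
            redrawMaze(); // existing method draws the player at plyrX/plyrY
            try {
                Thread.sleep(300); // pause so the user can watch the replay
            } catch (InterruptedException e) {
                // ignore for the purposes of this sketch
            }
        }
    }

    Clearing moveHistory at the start of playMazeGame() and recording the entrance position as the first entry would make the replay start from the beginning of the maze, which seems to be the behavior the question is after.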

    Read the article

  • jQuery Toggle with Cookie

    - by Cameron
    I have the following toggle system, but I want it to remember what was open or closed using the jQuery cookie plugin. For example, if I open a toggle and then navigate away from the page, it should still be open when I come back. This is the code I have so far, but it's becoming rather confusing; some help would be much appreciated, thanks.

    jQuery.cookie = function (name, value, options) {
        if (typeof value != 'undefined') {
            options = options || {};
            if (value === null) {
                value = '';
                options = $.extend({}, options);
                options.expires = -1;
            }
            var expires = '';
            if (options.expires && (typeof options.expires == 'number' || options.expires.toUTCString)) {
                var date;
                if (typeof options.expires == 'number') {
                    date = new Date();
                    date.setTime(date.getTime() + (options.expires * 24 * 60 * 60 * 1000));
                } else {
                    date = options.expires;
                }
                expires = '; expires=' + date.toUTCString();
            }
            var path = options.path ? '; path=' + (options.path) : '';
            var domain = options.domain ? '; domain=' + (options.domain) : '';
            var secure = options.secure ? '; secure' : '';
            document.cookie = [name, '=', encodeURIComponent(value), expires, path, domain, secure].join('');
        } else {
            var cookieValue = null;
            if (document.cookie && document.cookie != '') {
                var cookies = document.cookie.split(';');
                for (var i = 0; i < cookies.length; i++) {
                    var cookie = jQuery.trim(cookies[i]);
                    if (cookie.substring(0, name.length + 1) == (name + '=')) {
                        cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
                        break;
                    }
                }
            }
            return cookieValue;
        }
    };

    // var showTop = $.cookie('showTop');
    if ($.cookie('showTop') == 'collapsed') {
        $(".toggle_container").hide();
        $(".trigger").toggle(function () {
            $(this).addClass("active");
        }, function () {
            $(this).removeClass("active");
        });
        $(".trigger").click(function () {
            $(this).next(".toggle_container").slideToggle("slow");
        });
    } else {
        $(".toggle_container").show();
        $(".trigger").toggle(function () {
            $(this).addClass("active");
        }, function () {
            $(this).removeClass("active");
        });
        $(".trigger").click(function () {
            $(this).next(".toggle_container").slideToggle("slow");
        });
    };

    $(".trigger").click(function () {
        if ($(".toggle_container").is(":hidden")) {
            $(this).next(".toggle_container").slideToggle("slow");
            $.cookie('showTop', 'expanded');
        } else {
            $(this).next(".toggle_container").slideToggle("slow");
            $.cookie('showTop', 'collapsed');
        }
        return false;
    });

    and this is a snippet of the HTML it works with:

    <li>
        <label for="small"><input type="checkbox" id="small" /> Small</label>
        <a class="trigger" href="#">Toggle</a>
        <div class="toggle_container">
            <p class="funding"><strong>Funding</strong></p>
            <ul class="childs">
                <li class="child">
                    <label for="fully-funded1"><input type="checkbox" id="fully-funded1" /> Fully Funded</label>
                    <a class="trigger" href="#">Toggle</a>
                    <div class="toggle_container">
                        <p class="days"><strong>Days</strong></p>
                        <ul class="days clearfix">
                            <li><label for="1pre16">Pre 16</label> <input type="text" id="1pre16" /></li>
                            <li><label for="2post16">Post 16</label> <input type="text" id="2post16" /></li>
                            <li><label for="3teacher">Teacher</label> <input type="text" id="3teacher" /></li>
                        </ul>
                    </div>
                </li>
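
    One way to untangle this is to give each trigger its own cookie key and restore its state on load, instead of one global showTop flag. A minimal sketch, assuming the $.cookie plugin above is already loaded; the per-index key names (toggle_0, toggle_1, ...) and the { path: '/' } option are my own assumptions, not from the original code:

    $(".trigger").each(function (i) {
        var key = "toggle_" + i; // hypothetical per-trigger cookie name
        var $container = $(this).next(".toggle_container");

        // Restore the saved state when the page loads.
        if ($.cookie(key) == "collapsed") {
            $container.hide();
        } else {
            $container.show();
        }

        $(this).click(function () {
            // Read the state *before* the animation: a container that is
            // visible at click time is the one about to collapse.
            var willCollapse = $container.is(":visible");
            $container.slideToggle("slow");
            $(this).toggleClass("active", !willCollapse);
            $.cookie(key, willCollapse ? "collapsed" : "expanded", { path: "/" });
            return false;
        });
    });

    Checking $container.is(":visible") before calling slideToggle avoids racing the animation, and binding one click handler per trigger replaces the three overlapping handlers in the original.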

    Read the article

  • OpenGL font rendering

    - by DEElekgolo
    I am trying to make an openGL text rendering class using FreeType. I was originally following this code but it doesn't seem to work out for me. I get nothing regardless of what parameters I put for Draw().

    class Font {
    public:
        Font() {
            if (FT_Init_FreeType(&ftLibrary)) {
                printf("Could not initialize FreeType library\n");
                return;
            }
            iTexture = 0;
            glGenBuffers(1, &iVerts);
        }

        bool Load(std::string sFont, unsigned int Size = 12) {
            if (FT_New_Face(ftLibrary, sFont.c_str(), 0, &ftFace)) {
                printf("Could not open font: %s\n", sFont.c_str());
                return false;
            }
            iSize = Size;
            FT_Set_Pixel_Sizes(ftFace, 0, (int)iSize);
            FT_GlyphSlot gGlyph = ftFace->glyph;

            // Generating the texture atlas.
            // Rather than some amazing rectangular packing method, I'm just going
            // to have one long strip of letters with the height being that of the font size.
            int width = 0;
            int height = 0;
            for (int i = 32; i < 128; i++) {
                if (FT_Load_Char(ftFace, i, FT_LOAD_RENDER)) {
                    printf("Error rendering letter %c for font %s.\n", i, sFont.c_str());
                }
                width += gGlyph->bitmap.width;
                height = std::max(height, (int)gGlyph->bitmap.rows); // tallest glyph, not a running sum
            }
            fAtlasW = (float)width;  // remembered for texture-space coordinates in Draw()
            fAtlasH = (float)height;

            // Generate the openGL texture.
            glActiveTexture(GL_TEXTURE0);
            if (iTexture) glDeleteTextures(1, &iTexture); // if a texture exists, delete it
            glGenTextures(1, &iTexture);
            glBindTexture(GL_TEXTURE_2D, iTexture);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            // Load the glyphs and set the glyph data.
            int x = 0;
            for (int i = 32; i < 128; i++) {
                if (FT_Load_Char(ftFace, i, FT_LOAD_RENDER)) {
                    continue; // skip characters that fail to load
                }
                // Load the glyph bitmap into its slice of the atlas.
                glTexSubImage2D(GL_TEXTURE_2D, 0, x, 0,
                                gGlyph->bitmap.width, gGlyph->bitmap.rows,
                                GL_ALPHA, GL_UNSIGNED_BYTE, gGlyph->bitmap.buffer);
                chars[i].ax = (float)(gGlyph->advance.x >> 6);
                chars[i].ay = (float)(gGlyph->advance.y >> 6);
                chars[i].bw = (float)gGlyph->bitmap.width;
                chars[i].bh = (float)gGlyph->bitmap.rows;
                chars[i].bl = (float)gGlyph->bitmap_left;
                chars[i].bt = (float)gGlyph->bitmap_top;
                chars[i].tx = (float)x / width; // record the offset before advancing the pen
                x += gGlyph->bitmap.width;      // move the "pen" along the strip
            }
            printf("Loaded font: %s\n", sFont.c_str());
            return true;
        }

        void Draw(std::string sString, Vector2f vPos = Vector2f(0, 0), Vector2f vScale = Vector2f(1, 1)) {
            struct pPoint {
                pPoint() { x = y = s = t = 0; }
                pPoint(float a, float b, float c, float d) { x = a; y = b; s = c; t = d; }
                float x, y;
                float s, t;
            };
            pPoint* cCoordinates = new pPoint[6 * sString.length()];
            int n = 0;
            float penX = vPos.x(); // running pen position, advanced after every character
            float penY = vPos.y();
            for (const char* p = sString.c_str(); *p; p++) {
                float x2 = penX + chars[*p].bl * vScale.x();
                float y2 = -penY - chars[*p].bt * vScale.y();
                float w = chars[*p].bw * vScale.x();
                float h = chars[*p].bh * vScale.y();
                penX += chars[*p].ax * vScale.x(); // advance the pen even for empty glyphs
                penY += chars[*p].ay * vScale.y();
                // Skip characters with no pixels (spaces still advance the pen above).
                if (!w || !h) {
                    continue;
                }
                float sx = chars[*p].tx;           // left edge of the glyph in the atlas
                float sw = chars[*p].bw / fAtlasW; // glyph width in texture space
                float th = chars[*p].bh / fAtlasH; // glyph height in texture space
                // Two triangles per glyph quad.
                cCoordinates[n++] = pPoint(x2,     -y2,     sx,      0);
                cCoordinates[n++] = pPoint(x2 + w, -y2,     sx + sw, 0);
                cCoordinates[n++] = pPoint(x2,     -y2 - h, sx,      th);
                cCoordinates[n++] = pPoint(x2 + w, -y2,     sx + sw, 0);
                cCoordinates[n++] = pPoint(x2,     -y2 - h, sx,      th);
                cCoordinates[n++] = pPoint(x2 + w, -y2 - h, sx + sw, th);
            }
            glBindBuffer(GL_ARRAY_BUFFER, iVerts);
            glBindTexture(GL_TEXTURE_2D, iTexture); // textures are bound with glBindTexture
            // Upload the vertex data first; the size is in bytes, not vertices.
            glBufferData(GL_ARRAY_BUFFER, n * sizeof(pPoint), cCoordinates, GL_DYNAMIC_DRAW);
            // With a buffer bound, the pointer arguments are byte offsets into it.
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(2, GL_FLOAT, sizeof(pPoint), (void*)0);
            glClientActiveTexture(GL_TEXTURE0);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glTexCoordPointer(2, GL_FLOAT, sizeof(pPoint), (void*)(2 * sizeof(float)));
            glDisable(GL_CULL_FACE); // make sure culling doesn't discard the quads
            glDrawArrays(GL_TRIANGLES, 0, n);
            glEnable(GL_CULL_FACE);
            glCullFace(GL_BACK);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindTexture(GL_TEXTURE_2D, 0);
            glDisableClientState(GL_VERTEX_ARRAY);
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            delete[] cCoordinates; // free the scratch vertex array
        }

        ~Font() {
            glDeleteBuffers(1, &iVerts);
            glDeleteTextures(1, &iTexture);
        }

    private:
        unsigned int iSize;
        unsigned int iTexture; // openGL texture atlas
        unsigned int iVerts;   // openGL geometry buffer
        float fAtlasW, fAtlasH; // atlas dimensions in pixels
        FT_Library ftLibrary;
        FT_Face ftFace;
        struct Character {
            float ax, ay; // advance
            float bw, bh; // bitmap size
            float bl, bt; // bitmap left and top
            float tx;     // x offset of the glyph in the texture atlas
        } chars[128];
    };
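
    For reference, a minimal usage sketch under the same assumptions as the class above (a current GL context and the Vector2f type it already uses); the font path is a hypothetical placeholder:

    // Hypothetical driver code: load a face once, then draw each frame.
    Font font;
    if (font.Load("DejaVuSans.ttf", 16)) { // placeholder path, any .ttf works
        font.Draw("Hello, world!", Vector2f(10.0f, 20.0f), Vector2f(1.0f, 1.0f));
    }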

    Read the article
