Search Results

Search found 17852 results on 715 pages for 'load balancer'.

Page 174/715 | < Previous Page | 170 171 172 173 174 175 176 177 178 179 180 181  | Next Page >

  • R Package Installation with Oracle R Enterprise

    - by Sherry LaMonica-Oracle
     Programming languages give developers the opportunity to write reusable functions and to bundle those functions into logical deployable entities. In R, these are called packages. R has thousands of such packages provided by an almost equally large group of third-party contributors. To allow others to benefit from these packages, users can share packages on the CRAN system for use by the vast R development community worldwide. R's package system along with the CRAN framework provides a process for authoring, documenting and distributing packages to millions of users. In this post, we'll illustrate the various ways in which such R packages can be installed for use with R and together with Oracle R Enterprise. In the following, the same instructions apply when using either open source R or Oracle R Distribution. In this post, we cover the following package installation scenarios: the R command line, the Linux shell command line, use with Oracle R Enterprise, installation on Exadata or RAC, installing all packages in a CRAN Task View, and troubleshooting common errors. 1. R Package Installation Basics: R package installation basics are outlined in Chapter 6 of the R Installation and Administration Guide. There are two ways to install packages from the command line: from the R command line and from the shell command line. For this first example on Oracle Linux using Oracle R Distribution, we'll install the arules package as root so that packages will be installed in the default R system-wide location where all users can access it, /usr/lib64/R/library. Within R, using the install.packages function always attempts to install the latest version of the requested package available on CRAN: R> install.packages("arules") If the arules package depends upon other packages that are not already installed locally, the R installer automatically downloads and installs those required packages. This is a huge benefit that frees users from the task of identifying and resolving those dependencies. You can also install R packages from the shell command line. This is useful for some packages when an internet connection is not available or for installing packages not uploaded to CRAN. To install packages this way, first locate the package on CRAN and then download the package source to your local machine. For example: $ wget http://cran.r-project.org/src/contrib/arules_1.1-2.tar.gz Then, install the package using the command R CMD INSTALL: $ R CMD INSTALL arules_1.1-2.tar.gz A major difference between installing R packages at the R command line and at the shell command line is that package dependencies must be resolved manually at the shell command line. Package dependencies are listed in the Depends section of the package's CRAN site. If dependencies are not identified and installed prior to the package's installation, you will see an error similar to: ERROR: dependency 'xxx' is not available for package 'yyy' As a best practice and to save time, always refer to the package's CRAN site to understand the package dependencies prior to attempting an installation. If you don't run R as root, you won't have permission to write packages into the default system-wide location and you will be prompted to create a personal library accessible by your userid. You can accept the personal library path chosen by R, or specify the library location by passing parameters to the install.packages function.
For example, to create an R package repository in your home directory: R> install.packages("arules", lib="/home/username/Rpackages") or $ R CMD INSTALL arules_1.1-2.tar.gz --library=/home/username/Rpackages Refer to the install.packages help file in R or execute R CMD INSTALL --help at the shell command line for a full list of command line options. To set the library location and avoid having to specify this at every package install, simply create the R startup environment file .Renviron in your home area if it does not already exist, and add the following piece of code to it: R_LIBS_USER = "/home/username/Rpackages" 2. Setting the Repository: Each time you install an R package from the R command line, you are asked which CRAN mirror, or server, R should use. To set the repository and avoid having to specify this during every package installation, create the R startup command file .Rprofile in your home directory and add the following R code to it: cat("Setting Seattle repository") r = getOption("repos") r["CRAN"] = "http://cran.fhcrc.org/" options(repos = r) rm(r) This code snippet sets the R package repository to the Seattle CRAN mirror at the start of each R session. 3. Installing R Packages for use with Oracle R Enterprise: Embedded R execution with Oracle R Enterprise allows the use of CRAN or other third-party R packages in user-defined R functions executed on the Oracle Database server. The steps for installing and configuring packages for use with Oracle R Enterprise are the same as for open source R. The database-side R engine just needs to know where to find the R packages. The Oracle R Enterprise installation is performed by user oracle, which typically does not have write permission to the default site-wide library, /usr/lib64/R/library. On Linux and UNIX platforms, the Oracle R Enterprise Server installation provides the ORE script, which is executed from the operating system shell to install R packages and to start R. The ORE script is a wrapper for the default R script, a shell wrapper for the R executable. It can be used to start R, run batch scripts, and build or install R packages. Unlike the default R script, the ORE script installs packages to a location writable by user oracle and accessible by all ORE users - $ORACLE_HOME/R/library. To install a package on the database server so that it can be used by any R user and for use in embedded R execution, an Oracle DBA would typically download the package source from CRAN using wget. If the package depends on any packages that are not in the R distribution in use, download the sources for those packages, also. For a single Oracle Database instance, replace the R script with ORE to install the packages in the same location as the Oracle R Enterprise packages. $ wget http://cran.r-project.org/src/contrib/arules_1.1-2.tar.gz $ ORE CMD INSTALL arules_1.1-2.tar.gz Behind the scenes, the ORE script performs the equivalent of setting R_LIBS_USER to the value of $ORACLE_HOME/R/library, and all R packages installed with the ORE script are installed to this location.
For installing a package on multiple database servers, such as those in an Oracle Real Application Clusters (Oracle RAC) or a multinode Oracle Exadata Database Machine environment, use the ORE script in conjunction with the Exadata Distributed Command Line Interface (DCLI) utility. $ dcli -g nodes -l oracle ORE CMD INSTALL arules_1.1-1.tar.gz The DCLI -g flag designates a file containing a list of nodes to install on, and the -l flag specifies the user id to use when executing the commands. For more information on using DCLI with Oracle R Enterprise, see Chapter 5 in the Oracle R Enterprise Installation Guide. If you are using an Oracle R Enterprise client, install the package the same as any R package, bearing in mind that you must install the same version of the package on both the client and server machines to avoid incompatibilities. 4. CRAN Task Views: CRAN also maintains a set of Task Views that identify packages associated with a particular task or methodology. Task Views are helpful in guiding users through the huge set of available R packages. They are actively maintained by volunteers who include detailed annotations for routines and packages. If you find one of the task views is a perfect match, you can install every package in that view using the ctv package - an R package for automating package installation. To use the ctv package to install a task view, first install and load the ctv package: R> install.packages("ctv") R> library(ctv) Then query the names of the available task views and install the view you choose: R> available.views() R> install.views("TimeSeries") 5. Using and Managing R packages: To use a package, start up R and load packages one at a time with the library command. Load the arules package in your R session: R> library(arules) Verify the version of arules installed: R> packageVersion("arules") [1] '1.1.2' Verify the version of arules installed on the database server using embedded R execution: R> ore.doEval(function() packageVersion("arules")) View the help file for the apriori function in the arules package: R> ?apriori Over time, your package repository will contain more and more packages, especially if you are using the system-wide repository where others are adding additional packages. It's good to know the entire set of R packages accessible in your environment. To list all available packages in your local R session, use the installed.packages command: R> myLocalPackages <- row.names(installed.packages()) R> myLocalPackages To access the list of available packages on the ORE database server from the ORE client, use the following embedded R syntax: R> myServerPackages <- ore.doEval(function() row.names(installed.packages())) R> myServerPackages 6. Troubleshooting Common Problems: Installing Older Versions of R packages. If you immediately upgrade to the latest version of R, you will have no problem installing the most recent versions of R packages.
However, if your version of R is older, some of the more recent package releases will not work and install.packages will generate a message such as: Warning message: In install.packages("arules") : package 'arules' is not available This is when you have to go to the Old sources link on the CRAN page for the arules package and determine which version is compatible with your version of R. Begin by determining what version of R you are using: $ R --version Oracle Distribution of R version 3.0.1 (--) -- "Good Sport" Copyright (C) The R Foundation for Statistical Computing Platform: x86_64-unknown-linux-gnu (64-bit) Given that R-3.0.1 was released May 16, 2013, any version of the arules package released after this date may work. Scanning the arules archive, we might try installing version 1.1-1, released in January of 2014: $ wget http://cran.r-project.org/src/contrib/Archive/arules/arules_1.1-1.tar.gz $ R CMD INSTALL arules_1.1-1.tar.gz For use with ORE: $ ORE CMD INSTALL arules_1.1-1.tar.gz The "package not available" error can also be thrown if the package you're trying to install lives elsewhere, either on another R package site, or if it's been removed from CRAN. A quick Google search usually leads to more information on the package's location and status. Oracle R Enterprise is not in the R library path: On Linux hosts, after installing the ORE server components, starting R, and attempting to load the ORE packages, you may receive the error: R> library(ORE) Error in library(ORE) : there is no package called 'ORE' If you know the ORE packages have been installed and you receive this error, this is the result of not starting R with the ORE script. To resolve this problem, exit R and restart using the ORE script. After restarting R and running the command to load the ORE packages, you should not receive any errors. $ ORE R> library(ORE) On Windows servers, the solution is to make the location of the ORE packages visible to R by adding them to the R library paths. To accomplish this, exit R, then add the following line to the .Rprofile file. On Windows, the .Rprofile file is located in the R etc directory, C:\Program Files\R\R-<version>\etc. Add the following line: .libPaths("<path to $ORACLE_HOME>/R/library") The above line will tell R to include the R directory in the Oracle home as part of its search path. When you start R, the path above will be included, and future R package installations will also be saved to $ORACLE_HOME/R/library. This path should be writable by the user oracle, or the userid for the DBA tasked with installing R packages. Binary package compiled with a different version of R: By default, R will install pre-compiled versions of packages if they are found. If the version of R under which the package was compiled does not match your installed version of R you will get an error message: Warning message: package 'xxx' was built under R version 3.0.0 The solution is to download the package source and build it for your version of R. $ wget http://cran.r-project.org/src/contrib/Archive/arules/arules_1.1-1.tar.gz $ R CMD INSTALL arules_1.1-1.tar.gz For use with ORE: $ ORE CMD INSTALL arules_1.1-1.tar.gz Unable to execute files in /tmp directory: By default, R uses the /tmp directory to install packages. On security-conscious machines, the /tmp directory is often marked as "noexec" in the /etc/fstab file.
This means that no file under /tmp can ever be executed, and users who attempt to install an R package will receive an error: ERROR: 'configure' exists but is not executable -- see the 'R Installation and Administration Manual' The solution is to set the TMP and TMPDIR environment variables to a location which R will use as the compilation directory. For example: $ mkdir <some path>/tmp $ export TMPDIR=<some path>/tmp $ export TMP=<some path>/tmp This error typically appears on Linux client machines and not database servers, as Oracle Database writes to the value of the TMP environment variable for several tasks, including holding temporary files during database installation. 7. Creating your own R package: Creating your own package and submitting it to CRAN is for advanced users, but it is not difficult. The procedure to follow, along with details of R's package system, is detailed in the Writing R Extensions manual.

    Read the article

  • Magento hosting on a budget

    - by spa
     I have to do a setup for Magento. My constraints are primarily ease of setup and fault tolerance/failover. Furthermore, costs are an issue. I have three identical physical servers to get the job done. Each server node has an i7 quad core, 16GB RAM, and 2x3TB HD in a software RAID 1 configuration. Each node runs Ubuntu 12.04 right now. I have an additional IP address which can be routed to any of these nodes. The Magento shop has max. 1000 products, 50% of them are bundle products. I would estimate that max. 100 users are active at once. This leads me to the conclusion that performance is not the top priority here. My first setup idea: One node (lb) runs nginx as a load balancer. The additional IP is used with the domain name and routed to this node by default. Nginx distributes the load equally to the other two nodes (shop1, shop2). Shop1 and shop2 are configured equally: each server runs Apache2 and MySQL. The MySQL instances are configured with master/slave replication. My failover strategy: Lb fails = Route IP to shop1 (MySQL master), continue. Shop1 fails = Lb will handle that automatically, promote MySQL slave on shop2 to master, reconfigure Magento to use shop2 for writes, continue. Shop2 fails = Lb will handle that automatically, continue. Is this a sane strategy? Has anyone done a similar setup with Magento? My second setup idea: Another way to do it would be to use DRBD for storing the MySQL data files on shop1 and shop2. I understand that in this scenario only one node/MySQL instance can be active and the other is used as a hot standby. So in case shop1 fails, I would start up MySQL on shop2, route the IP to shop2, and continue. I like that, as the MySQL setup is easier and the nodes can be configured 99% identically. So in this case the load balancer becomes useless and I have a spare server. My third setup idea: The third way might be master-master replication of the MySQL databases. However, in my opinion this might be tricky, as Magento isn't built for this scenario (e.g. conflicting IDs for new rows). I would not do that until I have heard of a working example. Could you give me advice on which route to follow? There seems to be no one "good" way to do it. E.g. I read blog posts which describe a MySQL master/slave setup for Magento, but elsewhere I read that data might get duplicated when the slave lags behind the master (e.g. when an order is placed, a customer might get created twice). I'm kind of lost here.
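As a purely illustrative aside on the failure detection the first idea relies on: nginx's own upstream parameters (max_fails/fail_timeout) normally take a dead backend out of rotation, but an external probe can drive the manual steps (re-routing the IP, promoting the slave). A minimal Node.js sketch of such a probe follows; the backend addresses and the /health.php path are assumptions, not taken from the setup described above.

var http = require('http');

// Hypothetical backends behind the lb node; addresses are placeholders.
var backends = [
  { name: 'shop1', host: '192.168.0.11', port: 80 },
  { name: 'shop2', host: '192.168.0.12', port: 80 }
];

function probe(backend) {
  var req = http.get({ host: backend.host, port: backend.port, path: '/health.php' }, function (res) {
    console.log(backend.name + ' responded with HTTP ' + res.statusCode);
    res.resume(); // discard the body, we only care about reachability
  });
  req.setTimeout(2000, function () { req.abort(); });
  req.on('error', function (err) {
    // Here an alert would be raised so the IP re-routing / slave promotion
    // described above can be started.
    console.log(backend.name + ' appears down: ' + err.message);
  });
}

setInterval(function () { backends.forEach(probe); }, 10000);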

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
     When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as the storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used to consume WAS when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
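The CSDEF change is only described in prose above; as a rough sketch, the Runtime/EntryPoint addition might look like the snippet below. The role name, VM size and port are assumptions based on the code above (the endpoint name matches the "nodejs" endpoint the code looks up), and the exact schema depends on the Azure SDK version in use.

<!-- ServiceDefinition.csdef (sketch); names and values are assumptions -->
<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- Launch node.exe directly so the named pipe to the service runtime
           belongs to the Node.js process rather than the worker role class -->
      <ProgramEntryPoint commandLine="node.exe .\index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
  <Endpoints>
    <InputEndpoint name="nodejs" protocol="tcp" port="80" />
  </Endpoints>
</WorkerRole>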
And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it's very simple and easy to learn and deploy, and it utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's even more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement. "node-sqlserver" doesn't support SQL parameters yet, and "azure" doesn't yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Connecting tomcat6 to apache2

    - by StudentKen
     Disclaimer: Not a server admin I've been scratching my head over this for weeks now (not consistently mind you, as that would be maddening). I've been trying to connect my apache2 server to my tomcat server to the point where if someone encounters *.jsp or any servlet in navigating my web directory, it's handed over to tomcat. I have both Apache2.0 (port 9099) and Tomcat6 (9089) running on Debian lenny on the same box. Currently, mod_jk is enabled with mod_jk.conf in $apacheHOME/mods-enabled/ with content: # Where to find workers.properties JkWorkersFile /etc/apache2/workers.properties # Where to put jk shared memory JkShmFile /var/log/at_jk/mod_jk.shm # Where to put jk logs JkLogFile /var/log/at_jk/mod_jk.log # Set the jk log level [debug/error/info] JkLogLevel info # Select the timestamp log format JkLogStampFormat "[%a %b %d %H:%M:%S %Y] " # Send servlet for context /examples to worker named worker1 JkMount /*/servlet/* worker1 # Send JSPs for context /examples to worker named worker1 JkMount /*.jsp worker1 my workers.properties located in $apacheHOME/ with content: workers.tomcat_home=/var/lib/tomcat6 workers.java_home=/usr/lib/jdk1.6.0_23/db/ worker.list=worker1 ps=/ worker.worker1.port=9071 worker.worker1.host=localhost worker.worker1.type=ajp13 my web.xml in $tomcatHOME/conf has the following servlets enabled <servlet> <servlet-name>default</servlet-name> <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-cla$ <init-param> <param-name>debug</param-name> <param-value>0</param-value> </init-param> <init-param> <param-name>listings</param-name> <param-value>false</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet> <servlet-name>jsp</servlet-name> <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class> <init-param> <param-name>fork</param-name> <param-value>false</param-value> </init-param> <init-param> <param-name>xpoweredBy</param-name> <param-value>false</param-value> </init-param> <load-on-startup>3</load-on-startup> </servlet> <servlet-mapping> <servlet-name>jsp</servlet-name> <url-pattern>*.jsp</url-pattern> </servlet-mapping> <session-config> <session-timeout>30</session-timeout> </session-config> From what I can tell, there's no funny business as both the apache2, tomcat, and mod_jk logs show green; yet whenever I navigate to a jsp, it simply displays the javascript. I'm unsure what the problem is exactly despite poring over the logs and documentation for aid. I'm quite a greenhorn in the servlet world.

    Read the article

  • Creating a new instance, C#

    - by Dave Voyles
    This sounds like a very n00b question, but bear with me here: I'm trying to access the position of my bat (paddle) in my pong game and use it in my ball class. I'm doing this because I want a particle effect to go off at the point of contact where the ball hits the bat. Each time the ball hits the bat, I receive an error stating that I haven't created an instance of the bat. I understand that I have to (or can use a static class), but I'm not sure of how to do so in this example. I've included both my Bat and Ball classes. namespace Pong { #region Using Statements using System; using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; #endregion public class Ball { #region Fields private readonly Random rand; private readonly Texture2D texture; private readonly SoundEffect warp; private double direction; private bool isVisible; private float moveSpeed; private Vector2 position; private Vector2 resetPos; private Rectangle size; private float speed; private bool isResetting; private bool collided; private Vector2 oldPos; private ParticleEngine particleEngine; private ContentManager contentManager; private SpriteBatch spriteBatch; private bool hasHitBat; private AIBat aiBat; private Bat bat; #endregion #region Constructors and Destructors /// <summary> /// Constructor for the ball /// </summary> public Ball(ContentManager contentManager, Vector2 ScreenSize) { moveSpeed = 15f; speed = 0; texture = contentManager.Load<Texture2D>(@"gfx/balls/redBall"); direction = 0; size = new Rectangle(0, 0, texture.Width, texture.Height); resetPos = new Vector2(ScreenSize.X / 2, ScreenSize.Y / 2); position = resetPos; rand = new Random(); isVisible = true; hasHitBat = false; // Everything to do with particles List<Texture2D> textures = new List<Texture2D>(); textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/circle")); textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/star")); textures.Add(contentManager.Load<Texture2D>(@"gfx/particle/diamond")); particleEngine = new ParticleEngine(textures, new Vector2()); } #endregion #region Public Methods and Operators /// <summary> /// Checks for the collision between the bat and the ball. 
Sends ball in the appropriate /// direction /// </summary> public void BatHit(int block) { if (direction > Math.PI * 1.5f || direction < Math.PI * 0.5f) { hasHitBat = true; particleEngine.EmitterLocation = new Vector2(aiBat.Position.X, aiBat.Position.Y); switch (block) { case 1: direction = MathHelper.ToRadians(200); break; case 2: direction = MathHelper.ToRadians(195); break; case 3: direction = MathHelper.ToRadians(180); break; case 4: direction = MathHelper.ToRadians(180); break; case 5: direction = MathHelper.ToRadians(165); break; } } else { hasHitBat = true; particleEngine.EmitterLocation = new Vector2(bat.Position.X, bat.Position.Y); switch (block) { case 1: direction = MathHelper.ToRadians(310); break; case 2: direction = MathHelper.ToRadians(345); break; case 3: direction = MathHelper.ToRadians(0); break; case 4: direction = MathHelper.ToRadians(15); break; case 5: direction = MathHelper.ToRadians(50); break; } } if (rand.Next(2) == 0) { direction += MathHelper.ToRadians(rand.Next(3)); } else { direction -= MathHelper.ToRadians(rand.Next(3)); } AudioManager.Instance.PlaySoundEffect("hit"); } /// <summary> /// JEP - added method to slow down ball after powerup deactivates /// </summary> public void DecreaseSpeed() { moveSpeed -= 0.6f; } /// <summary> /// Draws the ball on the screen /// </summary> public void Draw(SpriteBatch spriteBatch) { if (isVisible) { spriteBatch.Begin(); spriteBatch.Draw(texture, size, Color.White); spriteBatch.End(); // Draws sprites for particles when contact is made particleEngine.Draw(spriteBatch); } } /// <summary> /// Checks for the current direction of the ball /// </summary> public double GetDirection() { return direction; } /// <summary> /// Checks for the current position of the ball /// </summary> public Vector2 GetPosition() { return position; } /// <summary> /// Checks for the current size of the ball (for the powerups) /// </summary> public Rectangle GetSize() { return size; } /// <summary> /// Grows the size of the ball when the GrowBall powerup is used. /// </summary> public void GrowBall() { size = new Rectangle(0, 0, texture.Width * 2, texture.Height * 2); } /// <summary> /// Was used to increased the speed of the ball after each point is scored. /// No longer used, but am considering implementing again. 
/// </summary> public void IncreaseSpeed() { moveSpeed += 0.6f; } /// <summary> /// Check for the ball to return normal size after the Powerup has expired /// </summary> public void NormalBallSize() { size = new Rectangle(0, 0, texture.Width, texture.Height); } /// <summary> /// Check for the ball to return normal speed after the Powerup has expired /// </summary> public void NormalSpeed() { moveSpeed += 15f; } /// <summary> /// Checks to see if ball went out of bounds, and triggers warp sfx /// </summary> public void OutOfBounds() { // Checks if the player is still alive or not if (isResetting) { AudioManager.Instance.PlaySoundEffect("warp"); { // Used to stop the the issue where the ball hit sfx kept going off when detecting collison isResetting = false; AudioManager.Instance.Dispose(); } } } /// <summary> /// Speed for the ball when Speedball powerup is activated /// </summary> public void PowerupSpeed() { moveSpeed += 20.0f; } /// <summary> /// Check for where to reset the ball after each point is scored /// </summary> public void Reset(bool left) { if (left) { direction = 0; } else { direction = Math.PI; } // Used to stop the the issue where the ball hit sfx kept going off when detecting collison isResetting = true; position = resetPos; // Resets the ball to the center of the screen isVisible = true; speed = 15f; // Returns the ball back to the default speed, in case the speedBall was active if (rand.Next(2) == 0) { direction += MathHelper.ToRadians(rand.Next(30)); } else { direction -= MathHelper.ToRadians(rand.Next(30)); } } /// <summary> /// Shrinks the ball when the ShrinkBall powerup is activated /// </summary> public void ShrinkBall() { size = new Rectangle(0, 0, texture.Width / 2, texture.Height / 2); } /// <summary> /// Stops the ball each time it is reset. Ex: Between points / rounds /// </summary> public void Stop() { isVisible = true; speed = 0; } /// <summary> /// Updates position of the ball /// </summary> public void UpdatePosition() { size.X = (int)position.X; size.Y = (int)position.Y; oldPos.X = position.X; oldPos.Y = position.Y; position.X += speed * (float)Math.Cos(direction); position.Y += speed * (float)Math.Sin(direction); bool collided = CheckWallHit(); particleEngine.Update(); // Stops the issue where ball was oscillating on the ceiling or floor if (collided) { position.X = oldPos.X + speed * (float)Math.Cos(direction); position.Y = oldPos.Y + speed * (float)Math.Sin(direction); } } #endregion #region Methods /// <summary> /// Checks for collision with the ceiling or floor. 
2*Math.pi = 360 degrees /// </summary> private bool CheckWallHit() { while (direction > 2 * Math.PI) { direction -= 2 * Math.PI; return true; } while (direction < 0) { direction += 2 * Math.PI; return true; } if (position.Y <= 0 || (position.Y > resetPos.Y * 2 - size.Height)) { direction = 2 * Math.PI - direction; return true; } return true; } #endregion } } namespace Pong { using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; using System; public class Bat { public Vector2 Position; public float moveSpeed; public Rectangle size; private int points; private int yHeight; private Texture2D leftBat; public float turbo; public float recharge; public float interval; public bool isTurbo; /// <summary> /// Constructor for the bat /// </summary> public Bat(ContentManager contentManager, Vector2 screenSize, bool side) { moveSpeed = 7f; turbo = 15f; recharge = 100f; points = 0; interval = 5f; leftBat = contentManager.Load<Texture2D>(@"gfx/bats/batGrey"); size = new Rectangle(0, 0, leftBat.Width, leftBat.Height); // True means left bat, false means right bat. if (side) Position = new Vector2(30, screenSize.Y / 2 - size.Height / 2); else Position = new Vector2(screenSize.X - 30, screenSize.Y / 2 - size.Height / 2); yHeight = (int)screenSize.Y; } public void IncreaseSpeed() { moveSpeed += .5f; } /// <summary> /// The speed of the bat when Turbo is activated /// </summary> public void Turbo() { moveSpeed += 8.0f; } /// <summary> /// Returns the speed of the bat back to normal after Turbo is deactivated /// </summary> public void DisableTurbo() { moveSpeed = 7.0f; isTurbo = false; } /// <summary> /// Returns the bat to the nrmal size after the Grow/Shrink powerup has expired /// </summary> public void NormalSize() { size = new Rectangle(0, 0, leftBat.Width, leftBat.Height); } /// <summary> /// Checks for the size of the bat /// </summary> public Rectangle GetSize() { return size; } /// <summary> /// Adds point to the player or the AI after scoring. Currently Disabled. 
/// </summary> public void IncrementPoints() { points++; } /// <summary> /// Checks for the number of points at the moment /// </summary> public int GetPoints() { return points; } /// <summary> /// Sets thedefault starting position for the bats /// </summary> /// <param name="position"></param> public void SetPosition(Vector2 position) { if (position.Y < 0) { position.Y = 0; } if (position.Y > yHeight - size.Height) { position.Y = yHeight - size.Height; } this.Position = position; } /// <summary> /// Checks for the current position of the bat /// </summary> public Vector2 GetPosition() { return Position; } /// <summary> /// Controls the bat moving up the screen /// </summary> public void MoveUp() { SetPosition(Position + new Vector2(0, -moveSpeed)); } /// <summary> /// Controls the bat moving down the screen /// </summary> public void MoveDown() { SetPosition(Position + new Vector2(0, moveSpeed)); } /// <summary> /// Updates the position of the AI bat, in order to track the ball /// </summary> /// <param name="ball"></param> public virtual void UpdatePosition(Ball ball) { size.X = (int)Position.X; size.Y = (int)Position.Y; } /// <summary> /// Resets the bat to the center location after a new game starts /// </summary> public void ResetPosition() { SetPosition(new Vector2(GetPosition().X, yHeight / 2 - size.Height)); } /// <summary> /// Used for the Growbat powerup /// </summary> public void GrowBat() { // Doubles the size of the bat collision size = new Rectangle(0, 0, leftBat.Width * 2, leftBat.Height * 2); } /// <summary> /// Used for the Shrinkbat powerup /// </summary> public void ShrinkBat() { // 1/2 the size of the bat collision size = new Rectangle(0, 0, leftBat.Width / 2, leftBat.Height / 2); } /// <summary> /// Draws the bats /// </summary> public virtual void Draw(SpriteBatch batch) { batch.Draw(leftBat, size, Color.White); } } }

    Read the article

  • Multiple data centers and HTTP traffic: DNS Round Robin is the ONLY way to assure instant fail-over?

    - by vmiazzo
     Hi, Multiple A records pointing to the same domain seem to be used almost exclusively to implement DNS Round Robin as a cheap load balancing technique. The usual warning against DNS RR is that it is not good for high availability. When 1 IP goes down clients will continue to use it for minutes. A load balancer is often suggested as a better choice. Both claims are not completely true: When the traffic is HTTP, most of the HTML browsers are able to automatically try the next A record if the previous one is down, without a new DNS look-up. Read here chapter 3.1 and here. When multiple data centers are involved, DNS RR is the only option to distribute traffic across them. So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure instant fail-over when one data center goes down? Thanks, Valentino Edit: Of course each data center has a local Load Balancer with a hot spare. It's OK to sacrifice session affinity for an instant fail-over. AFAIK the only way for a DNS to suggest one data center instead of another is to reply with just the IP (or IPs) associated with that data center. If the data center becomes unreachable then all those IPs are also unreachable. This means that, even if smart HTML browsers are able to instantly try another A record, all the attempts will fail until the local cache entry expires and a new DNS lookup is done, fetching the new working IPs (I assume the DNS automatically suggests a new data center when one fails). So, "smart DNS" cannot assure instant fail-over. Conversely, DNS round-robin permits it. When one data center fails, the smart HTML browsers (most of them) instantly try the other cached A records, jumping to another (working) data center. So, DNS round-robin doesn't assure session affinity or the lowest RTT, but it seems to be the only way to assure instant fail-over when the clients are "smart" HTML browsers. Edit 2: Some people suggest TCP Anycast as a definitive solution. In this paper (chapter 6) it is explained that Anycast fail-over is related to BGP convergence. For this reason Anycast can take from 20 seconds up to 15 minutes to complete. 20 seconds are possible on networks where the topology was optimized for this. Probably only CDN operators can grant such fast fail-overs. Edit 3: I did some DNS look-ups and traceroutes (maybe some expert can double check) and: The only CDN using TCP Anycast seems to be CacheFly; other operators like CDN networks and BitGravity use CacheFly. Seems that their edges cannot be used as reverse proxies. Therefore, they cannot be used to grant instant failover. Akamai and LimeLight seem to use geo-aware DNS. But! They return multiple A records. From traceroutes it seems that the returned IPs are in the same data center. So, I'm puzzled about how they can offer a 100% SLA when one data center goes down.
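To make the browser behaviour discussed above concrete, here is a minimal Node.js sketch (the host name is a placeholder) that resolves every A record for a name and walks through them until one answers; this is essentially the client-side fail-over the question attributes to "smart" browsers, done without any extra DNS lookup.

var dns = require('dns');
var http = require('http');

function fetchWithFailover(hostname, path, addresses, done) {
  if (addresses.length === 0) { return done(new Error('all A records failed')); }
  var ip = addresses[0];
  var settled = false;
  var req = http.get({ host: ip, path: path, headers: { Host: hostname } }, function (res) {
    if (!settled) { settled = true; done(null, ip, res.statusCode); }
    res.resume();
  });
  req.setTimeout(3000, function () { req.abort(); });
  req.on('error', function () {
    if (settled) { return; }
    settled = true;
    // Move on to the next cached A record without a new DNS lookup.
    fetchWithFailover(hostname, path, addresses.slice(1), done);
  });
}

dns.resolve4('www.example.com', function (err, addresses) {
  if (err) { throw err; }
  console.log('A records: ' + addresses.join(', '));
  fetchWithFailover('www.example.com', '/', addresses, function (err, ip, status) {
    if (err) { console.log(err.message); }
    else { console.log('served by ' + ip + ' (HTTP ' + status + ')'); }
  });
});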

    Read the article

  • Working with Tile Notifications in Windows 8 Store Apps – Part I

    - by dwahlin
     One of the features that really makes Windows 8 apps stand out from others is the tile functionality on the start screen. While icons simply allow a user to start an application, tiles provide a richer way to engage the user and draw them into an application. Examples of "live" tiles on part of my current start screen are shown next: I'll admit that if you get enough of these tiles going the start screen can actually be a bit distracting. Fortunately, a user can easily disable a live tile by right-clicking on it or pressing and holding a tile on a touch device and then selecting Turn live tile off from the AppBar: They can also make a wide tile smaller (into a square tile) or make a square tile bigger assuming the application supports both squares and rectangles. In this post I'll walk through how to add tile notification functionality into an application. Both XAML/C# and HTML/JavaScript apps support live tiles and I'll show the code for both options.   Understanding Tile Templates The first thing you need to know if you want to add custom tile functionality (live tiles) into your application is that there is a collection of tile templates available out-of-the-box. Each tile template has XML associated with it that you need to load, update with your custom data, and then feed into a tile update manager. By doing that you can control what shows in your app's tile on the Windows 8 start screen. So how do you learn more about the different tile templates and their respective XML? Fortunately, Microsoft has a nice documentation page in the Windows 8 Store SDK. Visit http://msdn.microsoft.com/en-us/library/windows/apps/hh761491.aspx to see a complete list of square and wide/rectangular tile templates that you can use. Looking through the templates you'll find a variety of layouts. The TileSquareBlock template, for example, has the following XML associated with it:  <tile> <visual> <binding template="TileSquareBlock"> <text id="1">Text Field 1</text> <text id="2">Text Field 2</text> </binding> </visual> </tile> An example of a wide/rectangular tile template is shown next:    <tile> <visual> <binding template="TileWideImageAndText01"> <image id="1" src="image1.png" alt="alt text"/> <text id="1">Text Field 1</text> </binding> </visual> </tile>   To use these tile templates (or others you find interesting), update their content, and get them to show for your app's tile on the Windows 8 start screen you'll need to perform the following steps: Define the tile template to use in your app Load the tile template's XML into memory Modify the children of the <binding> tag Feed the modified tile XML into a new TileNotification instance Feed the TileNotification instance into the Update() method of the TileUpdateManager In the remainder of the post I'll walk through each of the steps listed above to provide wide and square tile notifications for an application. The wide tile that's shown will show an image and text while the square tile will only show text. If you're going to provide custom tile notifications it's recommended that you provide both wide and square tiles since users can switch between the two of them directly on the start screen. Note: When working with tile notifications it's possible to manipulate and update a tile's XML template without having to know XML parsing techniques. This can be accomplished using some C# notification extension classes that are available.
In this post I’m going to focus on working with tile notifications using an XML parser so that the focus is on the steps required to add notifications to the Windows 8 start screen rather than on external extension classes. You can access the extension classes in the Windows 8 samples gallery if you’re interested.
Steps to Create Custom App Tile Notifications
Step 1: Define the tile template to use in your app
Although you can cut-and-paste a tile template’s XML directly into your C# or HTML/JavaScript Windows Store app and then parse it using an XML parser, it’s easier to use the built-in TileTemplateType enumeration from the Windows.UI.Notifications namespace. It provides direct access to the XML for the various templates, so once you locate a template you like in the documentation (mentioned above), simply reference it:
HTML/JavaScript var notifications = Windows.UI.Notifications; var template = notifications.TileTemplateType.tileWideImageAndText01;
XAML/C# var template = TileTemplateType.TileWideImageAndText01;
Step 2: Load the tile template’s XML into memory
Once the target template’s XML is identified, load it into memory using the TileUpdateManager’s GetTemplateContent() method. This method parses the template XML and returns an XmlDocument object:
HTML/JavaScript var tileXml = notifications.TileUpdateManager.getTemplateContent(template);
XAML/C# var tileXml = TileUpdateManager.GetTemplateContent(template);
Step 3: Modify the children of the <binding> tag
Once the XML for a given template is loaded into memory you need to locate the appropriate <image> and/or <text> elements in the XML and update them with your app data. This can be done using standard XML DOM manipulation techniques. The example code below locates the wide tile’s image node and sets its src attribute to the path of an image file located in the project. The code also creates a square tile that consists of text, updates its <text> element, and then imports and appends it into the wide tile’s XML.
HTML/JavaScript var image = tileXml.selectSingleNode('//image[@id="1"]'); image.setAttribute('src', 'ms-appx:///images/' + imageFile); image.setAttribute('alt', 'Live Tile'); var squareTemplate = notifications.TileTemplateType.tileSquareText04; var squareTileXml = notifications.TileUpdateManager.getTemplateContent(squareTemplate); var squareTileTextAttributes = squareTileXml.selectSingleNode('//text[@id="1"]'); squareTileTextAttributes.appendChild(squareTileXml.createTextNode(content)); var node = tileXml.importNode(squareTileXml.selectSingleNode('//binding'), true); tileXml.selectSingleNode('//visual').appendChild(node);
XAML/C# var tileXml = TileUpdateManager.GetTemplateContent(template); var text = tileXml.SelectSingleNode("//text[@id='1']"); text.AppendChild(tileXml.CreateTextNode(content)); var image = (XmlElement)tileXml.SelectSingleNode("//image[@id='1']"); image.SetAttribute("src", "ms-appx:///Assets/" + imageFile); image.SetAttribute("alt", "Live Tile"); Debug.WriteLine(image.GetXml()); var squareTemplate = TileTemplateType.TileSquareText04; var squareTileXml = TileUpdateManager.GetTemplateContent(squareTemplate); var squareTileTextAttributes = squareTileXml.SelectSingleNode("//text[@id='1']"); squareTileTextAttributes.AppendChild(squareTileXml.CreateTextNode(content)); var node = tileXml.ImportNode(squareTileXml.SelectSingleNode("//binding"), true); tileXml.SelectSingleNode("//visual").AppendChild(node);
Step 4: Feed the modified tile XML into a new TileNotification instance
Now that the XML data has been updated with the desired text and images, it’s time to load the XmlDocument object into a new TileNotification instance:
HTML/JavaScript var tileNotification = new notifications.TileNotification(tileXml);
XAML/C# var tileNotification = new TileNotification(tileXml);
Step 5: Feed the TileNotification instance into the Update() method of the TileUpdateManager
Once the TileNotification instance has been created and the XmlDocument has been passed to its constructor, it needs to be passed to the Update() method of a TileUpdater in order to be shown on the Windows 8 start screen:
HTML/JavaScript notifications.TileUpdateManager.createTileUpdaterForApplication().update(tileNotification);
XAML/C# TileUpdateManager.CreateTileUpdaterForApplication().Update(tileNotification);
Once the tile notification is updated it’ll show up on the start screen. An example of the wide and square tiles created with the included demo code is shown next: Download the HTML/JavaScript and XAML/C# sample application here. In the next post in this series I’ll walk through how to queue multiple tiles and clear a queue.

    Read the article

  • Connecting SceneBuilder edited FXML to Java code

    - by daniel
    Recently I had to answer several questions regarding how to connect a UI built with the JavaFX SceneBuilder 1.0 Developer Preview to Java code. So I figured that a short overview might be helpful. But first, let me state the obvious.
What is FXML?
To make it short, FXML is an XML based declaration format for JavaFX. JavaFX provides an FXML loader which will parse FXML files and from that construct a graph of Java objects. It may sound complex when stated like that but it is actually quite simple. Here is an example of an FXML file, which instantiates a StackPane and puts a Button inside it: <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.paint.*?> <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml"> <children> <Button mnemonicParsing="false" text="Button" /> </children> </StackPane> ... and here is the code I would have had to write if I had chosen to do the same thing programmatically: import javafx.scene.control.*; import javafx.scene.layout.*; ... final Button button = new Button("Button"); button.setMnemonicParsing(false); final StackPane stackPane = new StackPane(); stackPane.setPrefWidth(200.0); stackPane.setPrefHeight(150.0); stackPane.getChildren().add(button); As you can see - FXML is rather simple to understand - as it is quite close to the JavaFX API. So OK, FXML is simple, but why would I use it? Well, there are several answers to that - but my own favorite is: because you can make it with SceneBuilder.
What is SceneBuilder?
In short SceneBuilder is a layout tool that will let you graphically build JavaFX user interfaces by dragging and dropping JavaFX components from a library, and save it as an FXML file. SceneBuilder can also be used to load and modify JavaFX scenegraphs declared in FXML. Here is how I made the small FXML file above:
1. Start the JavaFX SceneBuilder 1.0 Developer Preview.
2. In the Library on the left hand side, click on 'StackPane' and drag it onto the content view (the white rectangle).
3. In the Library, select a Button and drag it onto the StackPane on the content view.
4. In the Hierarchy Panel on the left hand side, select the StackPane component, then invoke 'Edit > Trim To Selected' from the menubar.
That's it - you can now save, and you will obtain the small FXML file shown above. Of course this is only a trivial sample, made for the sake of the example - and SceneBuilder will let you create much more complex UIs. So, now I have an FXML file. But what do I do with it? How do I include it in my program? How do I write my main class?
Loading an FXML file with JavaFX
Well, that's the easy part - because the piece of code you need to write never changes. You can download and look at the SceneBuilder samples if you need to get convinced, but here is the short version: Create a Java class (let's call it 'Main.java') which extends javafx.application.Application. In the same directory copy/save the FXML file you just created using SceneBuilder. Let's name it "simple.fxml". Now here is the Java code for the Main class, which simply loads the FXML file and puts it as root in a stage's scene. /* * Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved. 
*/ package simple; import java.util.logging.Level; import java.util.logging.Logger; import javafx.application.Application; import javafx.fxml.FXMLLoader; import javafx.scene.Scene; import javafx.scene.layout.StackPane; import javafx.stage.Stage; public class Main extends Application { /** * @param args the command line arguments */ public static void main(String[] args) { Application.launch(Main.class, (java.lang.String[])null); } @Override public void start(Stage primaryStage) { try { StackPane page = (StackPane) FXMLLoader.load(Main.class.getResource("simple.fxml")); Scene scene = new Scene(page); primaryStage.setScene(scene); primaryStage.setTitle("FXML is Simple"); primaryStage.show(); } catch (Exception ex) { Logger.getLogger(Main.class.getName()).log(Level.SEVERE, null, ex); } } } Great! Now I only have to use my favorite IDE to compile the class and run it. But... wait... what does it do? Well nothing. It just displays a button in the middle of a window. There's no logic attached to it. So how do we do that? How can I connect this button to my application logic? Here is how: Connection to code First let's define our application logic. Since this post is only intended to give a very brief overview - let's keep things simple. Let's say that the only thing I want to do is print a message on System.out when the user clicks on my button. To do that, I'll need to register an action handler with my button. And to do that, I'll need to somehow get a handle on my button. I'll need some kind of controller logic that will get my button and add my action handler to it. So how do I get a handle to my button and pass it to my controller? Once again - this is easy: I just need to write a controller class for my FXML. With each FXML file, it is possible to associate a controller class defined for that FXML. That controller class will make the link between the UI (the objects defined in the FXML) and the application logic. To each object defined in FXML we can associate an fx:id. The value of the id must be unique within the scope of the FXML, and is the name of an instance variable inside the controller class, in which the object will be injected. Since I want to have access to my button, I will need to add an fx:id to my button in FXML, and declare an @FXML variable in my controller class with the same name. In other words - I will need to add fx:id="myButton" to my button in FXML: -- <Button fx:id="myButton" mnemonicParsing="false" text="Button" /> and declare @FXML private Button myButton in my controller class @FXML private Button myButton; // value will be injected by the FXMLLoader Let's see how to do this. Add an fx:id to the Button object Load "simple.fxml" in SceneBuilder - if not already done In the hierarchy panel (bottom left), or directly on the content view, select the Button object. Open the Properties sections of the inspector (right panel) for the button object At the top of the section, you will see a text field labelled fx:id. Enter myButton in that field and validate. Associate a controller class with the FXML file Still in SceneBuilder, select the top root object (in our case, that's the StackPane), and open the Code section of the inspector (right hand side) At the top of the section you should see a text field labelled Controller Class. In the field, type simple.SimpleController. This is the name of the class we're going to create manually. 
If you save at this point, the FXML will look like this: <?xml version="1.0" encoding="UTF-8"?> <?import java.lang.*?> <?import java.util.*?> <?import javafx.scene.control.*?> <?import javafx.scene.layout.*?> <?import javafx.scene.paint.*?> <StackPane prefHeight="150.0" prefWidth="200.0" xmlns:fx="http://javafx.com/fxml" fx:controller="simple.SimpleController"> <children> <Button fx:id="myButton" mnemonicParsing="false" text="Button" /> </children> </StackPane> As you can see, the name of the controller class has been added to the root object: fx:controller="simple.SimpleController"
Coding the controller class
In your favorite IDE, create an empty SimpleController.java class. Now what does a controller class look like? What should we put inside? Well - SceneBuilder will help you there: it will show you an example of a controller skeleton tailored for your FXML. In the menu bar, invoke View > Show Sample Controller Skeleton. A popup appears, displaying a suggestion for the controller skeleton: copy the code displayed there, and paste it into your SimpleController.java: /** * Sample Skeleton for "simple.fxml" Controller Class * Use copy/paste to copy paste this code into your favorite IDE **/ package simple; import java.net.URL; import java.util.ResourceBundle; import javafx.fxml.FXML; import javafx.fxml.Initializable; import javafx.scene.control.Button; public class SimpleController implements Initializable { @FXML // fx:id="myButton" private Button myButton; // Value injected by FXMLLoader @Override // This method is called by the FXMLLoader when initialization is complete public void initialize(URL fxmlFileLocation, ResourceBundle resources) { assert myButton != null : "fx:id=\"myButton\" was not injected: check your FXML file 'simple.fxml'."; // initialize your logic here: all @FXML variables will have been injected } } Note that the code displayed by SceneBuilder is there only for educational purposes: SceneBuilder does not create and does not modify Java files. This is simply a hint of what you can use, given the fx:id present in your FXML file. You are free to copy all or part of the displayed code and paste it into your own Java class. Now at this point, all that remains is to add our logic to the controller class. Quite easy: in the initialize method, I will register an action handler with my button: ... // initialize your logic here: all @FXML variables will have been injected myButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { System.out.println("That was easy, wasn't it?"); } }); ... That's it - if you now compile everything in your IDE, and run your application, clicking on the button should print a message on the console!
Summary
What happens is that in Main.java, the FXMLLoader will load simple.fxml from the jar/classpath, as specified by 'FXMLLoader.load(Main.class.getResource("simple.fxml"))'. When loading simple.fxml, the loader will find the name of the controller class, as specified by 'fx:controller="simple.SimpleController"' in the FXML. Upon finding the name of the controller class, the loader will create an instance of that class, in which it will try to inject all the objects that have an fx:id in the FXML. Thus, after having created '<Button fx:id="myButton" ... 
/>', the FXMLLoader will inject the button instance into the '@FXML private Button myButton;' instance variable found on the controller instance. This is because the instance variable has an @FXML annotation and the name of the variable exactly matches the value of the fx:id. Finally, when the whole FXML has been loaded, the FXMLLoader will call the controller's initialize method, and our code that registers an action handler with the button will be executed. For a complete example, take a look at the HelloWorld SceneBuilder sample. Also make sure to follow the SceneBuilder Get Started guide, which will guide you through a much more complete example. Of course, there are more elegant ways to set up an Event Handler using FXML and SceneBuilder (one of them is sketched just below). There are also many different ways to work with the FXMLLoader. But since it's starting to be very late here, I think it will have to wait for another post. I hope you have enjoyed the tour! --daniel
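As a sketch of one of those more elegant approaches (the handler method name here is purely illustrative): the handler can be declared in the FXML itself, and the FXMLLoader wires it to a matching @FXML method in the controller - no setOnAction call needed.

<!-- simple.fxml: reference a controller method from the onAction attribute -->
<Button fx:id="myButton" mnemonicParsing="false" text="Button" onAction="#handleButtonAction" />

// SimpleController.java: the FXMLLoader binds #handleButtonAction to this method
// (requires: import javafx.event.ActionEvent;)
@FXML
private void handleButtonAction(ActionEvent event) {
    System.out.println("That was easy, wasn't it?");
}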

    Read the article

  • Nashorn in the Twitterverse, Continued

    - by jlaskey
    After doing the Twitter example, it seemed reasonable to try graphing the result with JavaFX.  At this time the Nashorn project doesn't have a JavaFX shell, so we have to go through some hoops to create a JavaFX application.  I thought showing you some of those hoops might give you some idea about what you can do mixing Nashorn and Java (we'll add a JavaFX shell to the todo list.) First, let's look at the meat of the application.  Here is the repackaged version of the original twitter example. var twitter4j = Packages.twitter4j; var TwitterFactory = twitter4j.TwitterFactory; var Query = twitter4j.Query; function getTrendingData() { var twitter = new TwitterFactory().instance; var query = new Query("nashorn OR nashornjs"); query.since("2012-11-21"); query.count = 100; var data = {}; do { var result = twitter.search(query); var tweets = result.tweets; for each (tweet in tweets) { var date = tweet.createdAt; var key = (1900 + date.year) + "/" + (1 + date.month) + "/" + date.date; data[key] = (data[key] || 0) + 1; } } while (query = result.nextQuery()); return data; } Instead of just printing out tweets, getTrendingData tallies "tweets per date" during the sample period (since "2012-11-21", the date "New Project: Nashorn" was posted.)  getTrendingData then returns the resulting tally object. Next, use JavaFX BarChart to display that data. var javafx = Packages.javafx; var Stage = javafx.stage.Stage; var Scene = javafx.scene.Scene; var Group = javafx.scene.Group; var Chart = javafx.scene.chart.Chart; var FXCollections = javafx.collections.FXCollections; var ObservableList = javafx.collections.ObservableList; var CategoryAxis = javafx.scene.chart.CategoryAxis; var NumberAxis = javafx.scene.chart.NumberAxis; var BarChart = javafx.scene.chart.BarChart; var XYChart = javafx.scene.chart.XYChart; var Series = XYChart.Series; var Data = XYChart.Data; function graph(stage, data) { var root = new Group(); stage.scene = new Scene(root); var dates = Object.keys(data); var xAxis = new CategoryAxis(); xAxis.categories = FXCollections.observableArrayList(dates); var yAxis = new NumberAxis("Tweets", 0.0, 200.0, 50.0); var series = FXCollections.observableArrayList(); for (var date in data) { series.add(new Data(date, data[date])); } var tweets = new Series("Tweets", series); var barChartData = FXCollections.observableArrayList(tweets); var chart = new BarChart(xAxis, yAxis, barChartData, 25.0); root.children.add(chart); } I should point out that there is a lot of subtlety going on in the background.  For example, stage.scene = new Scene(root) is equivalent to stage.setScene(new Scene(root)). If Nashorn can't find a property (scene), then it searches (via Dynalink) for the Java Beans equivalent (setScene.)  Also note that Nashorn is magically handling the generic class FXCollections.  Finally, with the call to observableArrayList(dates), Nashorn is automatically converting the JavaScript array dates to a Java collection.  It really is hard to identify which objects are JavaScript and which are Java.  Does it really matter? Okay, with the meat out of the way, let's talk about the hoops. When working with JavaFX, you start with a main subclass of javafx.application.Application.  
This class handles the initialization of the JavaFX libraries and the event processing.  This is what I used for this example; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import javafx.application.Application; import javafx.stage.Stage; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; import javax.script.ScriptException; public class TrendingMain extends Application { private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); private Trending trending; public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) throws Exception { trending = (Trending) load("Trending.js"); trending.start(stage); } @Override public void stop() throws Exception { trending.stop(); } private Object load(String script) throws IOException, ScriptException { try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) { return engine.eval(new InputStreamReader(is, "utf-8")); } } } To initialize Nashorn, we use JSR-223's javax.script.  private static final ScriptEngineManager MANAGER = new ScriptEngineManager(); private final ScriptEngine engine = MANAGER.getEngineByName("nashorn"); This code sets up an instance of the Nashorn engine for evaluating scripts. The load method reads a script into memory and then gets the engine to eval that script.  Note that load also returns the result of the eval. Now for the fun part.  There are several different approaches we could use to communicate between the Java main and the script.  In this example we'll use a Java interface (an alternative using javax.script.Invocable is sketched at the end of this post).  The JavaFX main needs to do at least start and stop, so the following will suffice as an interface; public interface Trending { public void start(Stage stage) throws Exception; public void stop() throws Exception; } At the end of the example's script we add; (function newTrending() { return new Packages.Trending() { start: function(stage) { var data = getTrendingData(); graph(stage, data); stage.show(); }, stop: function() { } } })(); which instantiates a new subclass instance of Trending and overrides the start and stop methods.  The result of this function call is what is returned to main via the eval. trending = (Trending) load("Trending.js"); To recap, the script Trending.js contains the functions getTrendingData, graph and newTrending, plus the call at the end to newTrending.  Back in the Java code, we cast the result of the eval (the call to newTrending) to Trending; thus, we end up with an object that we can then use to call back into the script.  trending.start(stage); Voila.
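For completeness, here is a sketch of that alternative wiring through javax.script.Invocable, which the Nashorn engine also implements. It assumes the script is reworked to expose plain top-level start(stage) and stop() functions instead of returning an object from newTrending - that reworking is assumed, it is not what Trending.js above actually does:

// sketch: replace the load()/cast in TrendingMain with Invocable.getInterface
// (requires: import javax.script.Invocable;)
private Trending loadTrending(String script) throws IOException, ScriptException {
    try (final InputStream is = TrendingMain.class.getResourceAsStream(script)) {
        engine.eval(new InputStreamReader(is, "utf-8"));
        // getInterface maps the script's top-level start()/stop() functions onto Trending
        return ((Invocable) engine).getInterface(Trending.class);
    }
}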

    Read the article

  • VPS 512 MB RAM with WordPressMU consumes lots of memory

    - by CAPitalZ
    I have googled for days and gathered all optimization suggestions and tried. My sites are not getting any high hits. May be like 100 hits per day [all my sites combined]. Here are my specs I have 512 MB RAM VPS with burstable 1024 MB. Centos 5 32-bit & cPanel/WHM Apache 2.2 MySQL 5.0 PHP 5.3.2 Here is my Configs I have 2 WordPressMU production sites, and 1 test site my.cnf # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/lib/mysql/mysql.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking skip-bdb skip-innodb key_buffer = 16M max_allowed_packet = 1M table_cache = 64 sort_buffer_size = 512K net_buffer_length = 8K read_buffer_size = 256K read_rnd_buffer_size = 512K myisam_sort_buffer_size = 8M #CAPitalZ thread_cache_size=8 thread_concurrency=4 #query_cache_type=1 #query_cache_limit=1M query_cache_size=16M concurrent_insert=2 low_priority_updates=1 max_connections=50 tmp_table_size=16M max_heap_table_size=16M join_buffer_size=1M interactive_timeout=25 wait_timeout=1000 #connect_timout=10 not able to restart mysql max_connect_errors=10 # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (via the "enable-named-pipe" option) will render mysqld useless! # skip-networking # Disable Federated by default skip-federated # Replication Master Server (default) # binary logging is required for replication log-bin=mysql-bin # required unique id between 1 and 2^32 - 1 # defaults to 1 if master-host is not set # but will not function as a master if omitted server-id = 1 [mysqld_safe] open_files_limit=8192 [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [myisamchk] key_buffer = 20M sort_buffer_size = 20M read_buffer = 2M write_buffer = 2M [mysqlhotcopy] interactive-timeout httpd.conf I have unselected many modules and recompiled using EasyApache in WHM. Only have the following modules built Deflate Expires Fileprotect Imagemap MPM Prefork Version [default] EAccelerator for PHP Bcmath Calendar CurlSSL [I'm using Curl. But I don't have any https sites] Expat GD [for image cropping] Gettext Imap Mbregex [default] Mbstring [need both Mbregex and Mbstring for utf-8] Mysql of the system MySQL "Improved" extension. Sockets TTF (FreeType) [I'm using custom font] Zlib Under Global Configuration I only have FollowSymLinks enabled I Have TraceEnable, ServerSignature, FileETag OFF ServerTokens ProductOnly DirectoryIndex Priority has index.php as the first one I have removed Clamd [Clam Anti-virus] SpamAssasin is Off Under Tweak Settings Default catch-all/default address behavior for new accounts. 
This is set to "fail" All stats programs turned off I have eAccelerator installed and checked in phpinfo and its working [Pre VirtualHost Include under WHM] Timeout 20 KeepAlive On MaxKeepAliveRequests 200 KeepAliveTimeout 3 MinSpareServers 1 MaxSpareServers 3 StartServers 1 ServerLimit 50 MaxClients 50 MaxRequestsPerChild 4000 ExtendedStatus Off #ServerType standalone this throws error HostnameLookups Off <Directory "/"> AllowOverride None </Directory> My sites will take ages to load and WHM/CPanel will not even load. adadaa.com/ http://adadaa.net/ kadais.ca/ My average memory consumption is like 1000 MB! [yes always bursting] The process that consumes most CPU and also most memory is mysql But I also get like 15 httpd processes [when its bursting] I already got warning from cpuwatchcheck saying "While processing, the cpu has been maxed out for more than a 6 hour period. The current load/uptime line on the server at the time of this email is 07:00:37 up 11:30, 0 users, load average: 14.64, 16.79, 20.07" I don't know, I have tried switching these config values many different times, but nothing seems to work. Please show some light... Thanks
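One thing that would help pin this down - a rough sketch using standard Linux tools, assuming the processes really are named httpd and mysqld as in the setup above - is to total the resident memory per service before tuning anything else:

# total resident memory of all Apache children, in MB
ps -C httpd -o rss= | awk '{sum+=$1} END {print sum/1024 " MB"}'
# and the same for MySQL
ps -C mysqld -o rss= | awk '{sum+=$1} END {print sum/1024 " MB"}'

Dividing the RAM you can spare for Apache by the per-child figure is the usual sanity check for the MaxClients value in the httpd.conf settings shown above.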

    Read the article

  • Alternate way to create a clone of a UNIX System

    - by Spirit
    THE STORY: (If you don't like to read much, down below is the question :) ) Where I work we have two HP RP2470 servers, same hardware, same number of hard drives, same everything :). One of them is a production server and runs HP-UX 11.00. The poor ba***rd hasn't been turned off for years and now I have to make a clone of it on the other server - just in case, for redundancy. The problem is simple (or not so simple) as I have to make the other server exactly the same. However the old version of the OS (UX 11.00 is history now) and the old software running on it have made my task almost impossible. On the production server there is also a cloning/recovery utility, Ignite-UX. I tried many times to create a recovery tape with it. Then when I load the tape on the backup server, it succeeds with the loading of the tape (no errors, no warnings) but on the next restart it fails to load the OS :S and drops into HP's ISL prompt.
    --- THE QUESTION: Is there an alternate way to create a clone of the UNIX system? The environment is: 1. 2x HP RP2470 Servers (non-Intel), same hardware, same number of HDDs (two each), same everything. 2. OS running: HP-UX 11.00. The production server has to be cloned without downtime - sadly :( as I hope that they will reconsider on this one. For example (like on Windows platforms), if you try to copy an entire HDD with Windows inside onto another HDD, and then put that HDD in another PC, it will still work, as long as the hardware is the same. Can I do something like that with a UNIX system? Can I somehow COPY the contents of the entire HDD, put those on another HDD, and then just load the HDD into the other server? (If you haven't read the story: the servers are exactly the same.) Will it work? Can it be done with ordinary commands like cp or dump or something like that? Does anyone have a similar experience?
    --- UPDATE: 26.01.2012 NOTE: The update is related to "The Story". If you haven't read that part then you can skip this update. This is just a short update on the recovery logs from the Ignite tape.. someone with more experience might notice something..
    --- READING CONTENTS OF THE IGNITE TAPE --- --- OUTPUT OMITTED --- ... ... x ./configure3, 413696 bytes, 808 tape blocks x ./monitor_bpr, 20480 bytes, 40 tape blocks * Download_mini-system: Complete * Loading_software: Begin * Installing boot area on disk. * Enabling swap areas. * Backing up LVM configuration for "vg00". * Processing the archive source (Recovery Archive). * Wed Jan 25 15:27:32 EST 2012: Starting archive load of the source (Recovery Archive). * Positioning the tape (/dev/rmt/0mn). * Archive extraction from tape is beginning. Please wait. * Wed Jan 25 15:39:52 EST 2012: Completed archive load of the source (Recovery Archive). * Executing user specified script: "/opt/ignite/data/scripts/os_arch_post_l". * Running in recovery mode (os_arch_post_l). * Running the ioinit command ("/sbin/ioinit -c") * Creating device files via the insf command. 
insf: Installing special files for sdisk instance 0 address 0/0/1/1.15.0 insf: Installing special files for sdisk instance 1 address 0/0/2/0.1.0 insf: Installing special files for sdisk instance 2 address 0/0/2/1.15.0 insf: Installing special files for stape instance 0 address 0/0/1/0.3.0 insf: Installing special files for btlan instance 0 address 0/0/0/0 insf: Installing special files for btlan instance 1 address 0/2/0/0 insf: Installing special files for pseudo driver dlpi insf: Installing special files for pseudo driver kepd insf: Installing special files for pseudo driver framebuf insf: Installing special files for pseudo driver sad * Running "/opt/upgrade/bin/tlinstall -v" and correcting transition link permissions. * Constructing the bootconf file. * Setting primary boot path to "0/0/1/1.15.0". * Executing: "/var/adm/sw/products/PHSS_20146/pfiles/iux_postload". * Executing: "/var/adm/sw/products/PHSS_25982/pfiles/iux_postload". NOTE: tlinstall is searching filesystem - please be patient NOTE: Successfully completed * Loading_software: Complete * Build_Kernel: Begin NOTE: Since the /stand/vmunix kernel is already in place, the kernel will not be re-built. Note that no mod_kernel directives will be processed. * Build_Kernel: Complete * Boot_From_Client_Disk: Begin * Rebooting machine as expected. NOTE: Rebooting system. sync'ing disks (0 buffers to flush): 0 buffers not flushed 0 buffers still dirty Closing open logical volumes... Done Console reset done. Boot device reset done. ********** VIRTUAL FRONT PANEL ********** System Boot detected ***************************************** LEDs: RUN ATTENTION FAULT REMOTE POWER FLASH OFF OFF ON ON LED State: Running non-OS code. (i.e. Boot or Diagnostics) ... ... ... --- SERVER IS PERFORMING POST SEQUENCE HERE --- --- OUTPUT OMITED --- ... ... ... ***************************************** ************ EARLY BOOT VFP ************* End of early boot detected ***************************************** Firmware Version 43.50 Duplex Console IO Dependent Code (IODC) revision 1 ------------------------------------------------------------------------------ (c) Copyright 1995-2002, Hewlett-Packard Company, All rights reserved ------------------------------------------------------------------------------ Processor Speed State CoProcessor State Cache Size Number State Inst Data --------- -------- --------------------- ----------------- ------------ 0 650 MHz Active Functional 750 KB 1.5 MB 1 650 MHz Idle Functional 750 KB 1.5 MB Central Bus Speed (in MHz) : 120 Available Memory : 2097152 KB Good Memory Required : 16140 KB Primary boot path: 0/0/1/1.15 Alternate boot path: 0/0/2/1.15 Console path: 0/0/4/1.643 Keyboard path: 0/0/4/0.0 Processor is starting autoboot process. To discontinue, press any key within 10 seconds. 10 seconds expired. Proceeding... Trying Primary Boot Path ------------------------ Booting... Boot IO Dependent Code (IODC) revision 1 HARD Booted. ISL Revision A.00.38 OCT 26, 1994 ISL booting hpux ISL>
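On the "ordinary commands" question above: a raw block-level copy is possible in principle, although doing it on a live, mounted HP-UX system risks producing an inconsistent image. A very rough sketch - the device paths and the remote host name are placeholders, not taken from the servers described above:

# stream the whole boot disk to the matching disk on the spare server
dd if=/dev/rdsk/c0t15d0 bs=1024k | remsh backupserver "dd of=/dev/rdsk/c0t15d0 bs=1024k"

File-level alternatives such as fbackup/frecover per filesystem are gentler on a system that cannot be taken down, but none of these fully sidestep the consistency problem that Ignite-UX is designed to handle.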

    Read the article

  • Error on 64 Bit Install of IIS – LoadLibraryEx failed on aspnet_filter.dll

    - by Rick Strahl
    I’ve been having a few problems with my Windows 7 install and trying to get IIS applications to run properly in 64 bit. After installing IIS and creating virtual directories for several of my applications and firing them up I was left with the following error message from IIS: Calling LoadLibraryEx on ISAPI filter “c:\windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll” failed This is on Windows 7 64 bit, running an ASP.NET 4.0 Application configured for 64 bit (32 bit disabled). It’s also on what is essentially a brand new installation of IIS and Windows 7. So it failed right out of the box. The problem here is that IIS is trying to load this ISAPI filter from the 32 bit folder – it should be loading it from the Framework64 folder, not the Framework folder. The aspnet_filter.dll component is a small Win32 ISAPI filter used to back up the cookieless session state for ASP.NET on IIS 7 applications. Given that narrow purpose it’s not terribly important, but it’s a component that’s loaded by default. After a lot of fiddling I ended up with two solutions (with the help and support of some Twitter folks):
1. Switch IIS to run in 32 bit mode
2. Fix the filter listing in ApplicationHost.config
Switching IIS to allow 32 Bit Code
This is a quick fix for the problem above which enables 32 bit code in the Application Pool. The problem above is that IIS is trying to load a 32 bit ISAPI filter, and enabling 32 bit code gets you around it. To configure your Application Pool, open the Application Pool in IIS Manager, bring up Advanced Options and Enable 32 Bit Applications: And voila, the error message above goes away.
Fix Filters
Enabling 32 bit code is a quick fix for this problem, but not an ideal one. If you’re running a pure .NET application that doesn’t need to do COM or pInvoke Interop with 32 bit apps, there’s usually no need to enable 32 bit code in an Application Pool as you can run in native 64 bit code. So getting 64 bit working natively is a pretty key feature in my opinion :-) So what’s the problem – why is IIS trying to load a 32 bit DLL in a 64 bit install, especially if the application pool is configured to not allow 32 bit code at all? The problem lies in the server configuration and the fact that 32 bit and 64 bit configuration settings exist side by side in IIS. If I open my Default Web Site (or any other root Web Site) and go to the ISAPI filter list here’s what I see: Notice that there are 3 entries for ASP.NET 4.0 in this list. Only two of them, however, are scoped specifically to 32 bit or 64 bit. As you can see the 64 bit filter correctly points at the Framework64 folder to load the dll, while both the 32 bit and the ‘generic’ entry point at the plain Framework 32 bit folder. Aha! Herein lies our problem. You can edit ApplicationHost.config manually, but I ran into the nasty issue of not being able to easily edit that file with a 32 bit editor (whoever thought that was a good idea???? WTF). You have to open ApplicationHost.config in a native 64 bit text editor – which Visual Studio is not. Nor is my favorite editor: EditPad Pro. Since I don’t have a native 64 bit editor handy, Notepad was my only choice. Or, as an alternative, you can use the IIS 7.5 Configuration Editor, which lets you interactively browse and edit most ApplicationHost settings. You can drill into the configuration hierarchy visually to find your keys and edit attributes and sub values in a property editor type interface. 
I had no idea this tool existed prior to today and it’s pretty cool, as it gives you some visual clues to the options available – especially in the absence of the IntelliSense you’d get in Visual Studio (which doesn’t work for this file). To use the Configuration Editor go to the Web Site root and use the Configuration Editor option in the Management group. Drill into system.webServer/isapiFilters and then click on the Collection’s … button on the right. You should now see a display like this: which shows all the same attributes you’d see in ApplicationHost.config (cool!). These entries correspond to these raw ApplicationHost.config entries: <filter name="ASP.Net_4.0" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0" /> <filter name="ASP.Net_4.0_64bit" path="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness64" /> <filter name="ASP.Net_4.0_32bit" path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\aspnet_filter.dll" enableCache="true" preCondition="runtimeVersionv4.0,bitness32" /> The key attribute we’re concerned with here is preCondition and its bitness subvalue. Notice that the ‘generic’ version – which comes first in the filter list – has no bitness assigned to it, so it defaults to 32 bit and the 32 bit dll path. And this is where our problem comes from. The simple solution to the startup problem is to remove the generic entry from this list here, or in the filters list shown earlier, and leave only the bitness specific versions active (an appcmd one-liner for this is sketched at the end of this post). The preCondition attribute acts as a filter and as you can see here it filters the list by runtime version and bitness value. This is something to keep an eye on in general – if a bitness value is missing it’s easy to run into conflicts like this with any settings that are global, especially those that load modules and handlers and other executable code. On 64 bit systems it’s a good idea to explicitly set the bitness of all entries, or remove the non-specific versions and add bit specific entries. So how did this get misconfigured? I installed IIS before everything else was installed on this machine and then went ahead and installed Visual Studio. I suspect the Visual Studio install munged this up, as I never saw a similar problem on my live server where everything just worked right out of the box. In searching about this problem a lot of solutions pointed at running aspnet_regiis -r from the Framework64 directory, but that did not fix this extra entry in the filters list – it adds the required 32 bit and 64 bit entries, but it doesn’t remove the errant entry that has no bitness set. Hopefully this post will help out anybody who runs into a similar situation without having to troubleshoot all the way down into the configuration settings before noticing the bitness settings. It’s a good lesson learned for me – this is my first desktop install of a 64 bit OS and things like this are exactly what I was worried about running into. Now that I ran into this I have a good idea what to look for with 32/64 bit misconfigurations in IIS at least. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7   ASP.NET
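If you prefer a command line over the Configuration Editor, the same cleanup can be scripted with appcmd. A hedged sketch - the filter name matches the entries shown above, but verify the exact name on your own server before removing anything:

rem remove the ASP.NET 4.0 filter entry that carries no bitness precondition,
rem leaving the _32bit and _64bit entries in place
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/isapiFilters /-"[name='ASP.Net_4.0']"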

    Read the article

  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
    If you're running on IIS 7 and a 64 bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as: ActiveX component can't create object (Error 429) (actually, without error handling the error just shows up as a 500 error page). In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads on some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this: <% Set banner = Server.CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> Originally this code had no specific error checking as above, so the ASP pages just failed with 500 error pages from the Web server. To find out what the problem is, this code is more useful, at least for debugging: <% ON ERROR RESUME NEXT Set banner = Server.CreateObject("wwBanner.aspBanner") Response.Write(err.Number & " - " & err.Description) banner.BannerFile = "wwsitebanners" Response.Write(banner.GetBanner(-1)) %> which results in: 429 - ActiveX component can't create object which at least gives you a slight clue. In ASP.NET invoking the same COM object with code like this: <% dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic; banner.cBANNERFILE = "wwsitebanners"; Response.Write(banner.getBanner(-1)); %> results in: Retrieving the COM class factory for component with CLSID {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). The class is in fact registered though, and the COM server loads fine from a command prompt or other COM client. This error can be caused by a COM server that doesn't load. It looks like a COM registration error. There are a number of traditional reasons why this error can crop up of course:
1. The server isn't registered (run regsvr32 to register a DLL server or /regserver on an EXE server)
2. Access permissions aren't set on the COM server (the Web account has to be able to read the DLL, ie. Network Service)
3. The COM server fails to load during initialization, ie. failing during startup
One thing I always do to check for COM errors is to fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with: loBanners = CREATEOBJECT("wwBanner.aspBanner") loBanners.cBannerFile = "wwsitebanners" ? loBanners.GetBanner(-1) and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript to do the same thing and run the .vbs file from the command prompt: Set banner = CreateObject("wwBanner.aspBanner") banner.BannerFile = "wwsitebanners" MsgBox(banner.getBanner(-1)) Since both of these work, it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double checked permissions for the Application Pool and the permissions of the folder where the DLL lives and both are properly set to allow access by the Application Pool impersonated user. Just to be sure I assigned an Admin user to the Application Pool but still no go. So now what?
64 bit Servers Ahoy
A couple of weeks back I had set up a few of my Application Pools to 64 bit mode. 
My server is Server 2008 64 bit and by default Application Pools run 64 bit. Originally when I installed the server I set up most of my Application Pools to 32 bit, mainly for backwards compatibility. But as more of my code migrates to 64 bit OSs I figured it'd be a good idea to see how well my code runs in 64 bit. The transition has been mostly painless - until today, when I noticed the problem with the code above while scrolling through my IIS logs and seeing a lot of 500 errors on many of my ASP classic pages. The code in question in most of these pages deals with this single simple COM object. It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode. The issue is that 32 bit COM objects (ie. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode, but I didn't find the problem until I searched my logs for errors by pure chance. Fixing this is easy enough once you know what the problem is: switch the Application Pool to Enable 32-bit Applications (a command line equivalent is sketched at the end of this post): Once this is done the COM objects start working correctly again.
64 bit ASP and ASP.NET with DCOM Servers
This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit application pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server, which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well as the DCOM interface only makes one non-chatty call to the backend server that handles all the rest of the request processing.
Application Pool Isolation is your Friend
For me the recent incident of failure in the classic ASP pages has just been another reminder to be very careful with moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue I had a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32 bit instead of 64 bit), which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically 3 different entries for the same ISAPI filter, all with different bitness settings). It took me almost a full day to track this down. Recently I've taken to isolating individual applications into separate Application Pools rather than my past practice of combining many apps into shared AppPools. This is a good practice assuming you have enough memory to make it work. Application Pool isolation provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget.
To 64bit or Not
Is it worth it to move to 64 bit? Currently I'd say - not really. In my - admittedly limited - testing I don't see any significant performance increases. 
In fact 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average) and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on benefits of 64 bit operation. © Rick Strahl, West Wind Technologies, 2005-2011Posted in COM   ASP.NET  FoxPro  
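As a footnote to the fix above, the Enable 32-bit Applications switch can also be flipped from the command line, which is handy when several pools need to move back and forth (the pool name is just an example):

rem allow 32 bit code in a single application pool
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true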

    Read the article

  • Easily use google maps, openstreet maps etc offline.

    - by samkea
    I did it and I am going to explain step by step. The explanation may appear long but it's simple if you follow. Note: All the software I have used is the latest and I have packaged it and provided it in the link below. I use a Nokia N96.
1) RootSign smartComGPS and install it on your phone (I haven't provided the signer so that you would do some little work; I used Secman's rootsign).
2) Install Universal Maps Downloader, SmartCom OGF2 converter and OziExplorer 3.95.4s on your PC.
a) UMD is used to download map tiles from any map source like Google Maps, OpenStreetMap etc... and also combine the tiles into an image file like png, jpg, bmp etc...
b) SmartCom OGF2 converter is used to convert the image file into a format usable on your mobile phone.
c) OziExplorer will help you to calibrate the usable map file so that it can be used with GPS on your mobile phone without the use of internet.
3) Go to Google Maps or wherever you pick your maps and pan to the area of your interest. Zoom the map to at least zoom level 15 or 16 where you can see your area and the streets clearly.
4) Copy this script into a notepad file and save it on your desktop: javascript:void(prompt('',gApplication.getMap().getCenter()));
5) Open the Universal Maps Downloader. You will notice that you are required to add the: left longitude, right longitude, top latitude, bottom latitude.
6) On your map in Google Maps, double-click on your preferred most central point. You will notice that the map will center on that area.
7) Copy the script and paste it in the address bar then press enter. You will notice that a dialog with your (top latitude) and longitude respectively pops up.
8) Copy the top latitude ONLY and paste it in the corresponding textbox in the UMD.
9) Repeat steps 6-7 for the bottom latitude.
10) Repeat steps 6-7 for left longitude and right longitude too, but you have to copy the longitudes here. (BTW record these points in the text file as they may be needed later in calibration.)
11) Set the zoom level to the same zoom level that you preferred in Google Maps.
12) Don't forget to choose a path to save your files and, under options, set the proxy connection settings in UMD if you are using one.
13) Click on start and bingo! There you have your image tiles, and a file with the extension .umd will be saved in the same folder.
14) In the UMD, go to tools, click on MapViewer and choose the .umd file. You will now see your map in one piece... and you will smile!
15) Still in tools, click on map combiner. A dialog will pop up for you to choose the .umd file and to enter the IMAGE file name. You can use another extension for the image file like png, jpg etc... I usually use png.
16) Combine... bingo! There you go! You have an IMAGE file for your map. (I suggest that you create both a .BMP file and a .PNG file.)
17) Close UMD and open the SmartCom OGF2 converter.
18) Choose your .png image and create an ogf2 file.
19) Connect your phone to your PC in Mass Memory mode and transfer the file to the smartComGPS\Maps folder.
20) Now disconnect your phone and load smartComGPS. It will load the map and prompt you to add a calibration point. Go ahead and add one calibration point with dummy coordinates. You will notice that it will add another file with the extension .map in the smartComGPS\Maps folder.
21) Connect your phone and copy that file and paste it in your working folder on your PC. Delete that .map file from the phone too because you are going to edit it from your PC and put it back.
22) Now open OziExplorer, go to File --> Load and Calibrate Map Image.
23) Choose the .bmp image and bingo! It will load with your map at the same zoom level.
24) Now you are going to calibrate. Use the MapView window and take the small box locator to all 4 corners of the map. You will notice that the map in the background moves to that area too.
25) On the right side, select the Point1 tab. Now you are in calibration mode. Now move the red box in MapView to the upper left corner to calibrate point1.
26) Out of MapView, go to the upper left corner of the background map and choose point (0,0) as your 1st calibration point. You will notice that these X,Y coordinates will be reflected in the Point1 image coordinates.
27) Now go back to the text file where you saved your coordinates and enter the top latitude and the left longitude in the corresponding places.
28) Repeat steps 25-27 for point2, point3, point4 and click on save. That's it, you have calibrated your image and you are about to finish.
29) Go to save, and a dialog which prompts you to save a .map file will pop up. Do save the map file in your working folder.
30) Right click that .map file and edit the filename in the .map file to remove the PC's directory structure. E.g. change C:\OziExplorer\data\Kampala.bmp to Kampala.ogf2.
31) Save the .map file in the smartComGPS\Maps folder on your phone.
32) Now open smartComGPS on your phone and bingo! There is your map with GPS capability and at the same zoom level.
33) In smartComGPS options, choose connect and simulate. By now you should be smiling. Whoa! Hope I was of help. In case you get a problem, please inform me. Below is the link to the software. Regards.
http://rapidshare.com/files/230296037/Utilities_Used.rar.html
Ok, the Rapidshare files I posted are gone, so you will have to download as described in the solution. If you need more help, go here: http://www.dotsis.com/mobile_phone/sitemap/t-160491.html
Some months later, someone else gave almost the same kind of solution here: http://www.dotsis.com/mobile_phone/sitemap/t-180123.html
Note: the solutions were meant to help view maps on Symbian phones, but I think now they can even work for Windows Phones, iPhones and others, so read, extract what you want and use it. Hope it helps. Sam Kea

    Read the article

  • The DOS DEBUG Environment

    - by MarkPearl
    Today I thought I would go back in time and have a look at the DEBUG command that has been available since the early days of DOS, MS-DOS and Microsoft Windows. Up to today I always knew it was there, but had no clue how to use it, so for those that are interested this might be a great geek party trick to pull out when you want to awe the younger generation and show them what “real” programming is about. But wait, you will have to do it relatively quickly as it seems like DEBUG was finally dumped from the Windows group in Windows 7. Not to worry, pull out that Windows XP box, which will get you even more geek points, and you can still poke DEBUG a bit. So, for those that are interested and want to find out a bit about the history of DEBUG, read the wiki link here. That all put aside, let's get our hands dirty…
How to Start DEBUG in Windows
Make sure your version of Windows supports DEBUG. Open up a console window. Make a directory where you want to play with debug – in my instance I called it C221. Enter the directory and type Debug. You will get a response with a – as illustrated in the image below…
The commands available in DEBUG
There are several commands available in DEBUG. The most common ones are A (Assemble), R (Register), T (Trace), G (Go), D (Dump or Display), U (Unassemble), E (Enter), P (Proceed), N (Name), L (Load), W (Write), H (Hexadecimal), I (Input), O (Output) and Q (Quit). I am not going to cover all these commands, but what I will do is go through a few of them briefly.
A is for Assemble Command (to write code)
The A command translates assembly language statements into machine code. It is quite useful for writing small assembly programs. Below I have written a very basic assembly program. The code typed out is as follows:
mov ax,0015
mov cx,0023
sub cx,ax
mov [120],al
mov cl,[120]
nop
R is for Register (to view and change register values)
The r command turns out to be one of the most frequent commands you will use in DEBUG. It allows you to view the contents of registers and to change their values. It can be used with the following combinations…
R – Displays the contents of all the registers
R f – Displays the flags register
R register_name – Displays the contents of a specific register
All three methods are illustrated in the image above.
T is for Trace (to execute a program step by step)
The t command allows us to execute the program step by step. Before we can trace the program we need to point back to the beginning of the program. We do this by typing in r ip, which moves us back to memory point 100. We then type t (trace), which executes the first line of code (line 100), as shown in the image below starting from the red arrow. You can see from the above image that the register AX now contains 0015 as per our instruction mov ax,0015. You can also see that the IP points to line 0103, which has the MOV CX,0023 command. If we type t again it will now execute the second line of the program, which moves 23 into the cx register. Again, we can see that the line of code was executed and that the CX register now holds the value of 23. What I would like to highlight now is the section underlined in red. 
The ones we are going to look at now are the 1st (NV), 4th (PL), 5th (NZ) & 8th (NC). NV means No oVerflow; the alternate would be OV. PL means that the sign of the previous arithmetic operation was Plus; the alternate would be NG (Negative). NZ means that the result of the previous arithmetic operation was Not Zero; the alternate would be ZR. NC means that No final Carry resulted from the previous arithmetic operation; CY means that there was a final Carry. We could now follow this process of entering the t command until the entire program is executed line by line. G is for Go (to execute a program up to a certain line number) So we have looked at executing a program line by line, which is fine if your program is minuscule BUT totally impractical if we have any decent sized program. A quicker way to run some lines of code is to use the G command. The ‘g’ command executes a program up to a certain specified point. It can be used in connection with the reset IP command. You would set your initial point and then run the G command with the line you want to end on. P is for Proceed (similar to trace but slightly more streamlined) Another command similar to trace is the proceed command. The difference is that when the p command encounters a CALL, INT or LOOP instruction it executes it as a single step instead of tracing into it. In the example below I modified our example program to include an int 20 at the end of it, as illustrated in the image below… Then, when executing the code and reaching the int 20 command, I typed the P command and the program terminated normally (illustrated below). D is for Dump (or, for those more polite, Display) So, we have all these assembly lines of code, but if you have ever opened up an exe or com file in a text/hex editor, it looks nothing like assembly code. The D command is a way that we can see what our code looks like in memory (or in a hex editor). If we examine the image above, we can see that DEBUG is storing our assembly code with each instruction following immediately after the previous one. For instance, in memory address 110 we have int and in 111 we have 20. If we examine the dump of memory we can see that at memory point 110 CD is stored and at memory point 111 20 is stored. U is for Unassemble (or convert machine code to assembly code) So up to now we have gone through a bunch of commands, but probably one of the most useful is the U command. Let’s say we don’t understand machine code so well and instead we want to see it as its equivalent assembly code. We can type the U command followed by the start memory point, followed by the end memory point, and it will show us the assembly code equivalent of the machine code. E is for a bunch of things… The E command can be used for a bunch of things… One example is to enter data or machine code instructions directly into memory. It can also be used to display the contents of memory locations. I am not going to worry too much about it in this post. N / L / W is for Name, Load & Write So we have written our assembly code in DEBUG, and now we want to save it to disk, write it as a com file, or load it. This is where the N, L & W commands come in handy. The n command is used to give a name to the executable program file and is pretty simple to use. The w command is a bit trickier: it writes to disk the number of bytes held in the BX:CX register pair (BX holds the high word and CX the low word of the byte count), starting at offset 0100 by default, so you need to set BX and CX to the length of your program before writing. Let’s look at an example illustrated below. 
You do this by calling the r command followed by either bx or cx. We can then go to the directory where we were working and will see the new file with the name we specified. The L command is relatively simple. You would first specify the name of the file you would like to load using the N command, and then call the L command. Q is for Quit The last command that I am going to write about in this post is the Q command. Simply put, calling the Q command exits DEBUG. Commands we did not Cover Out of the standard DEBUG commands we covered A, T, G, D, U, E, P, R, N, L & W. The ones we did not cover were H, I & O – I might make mention of these in a later post, but for the basics they are not really needed. Some Useful Resources Please note this post is based on the COS2213 handouts for UNISA. A Guide to DEBUG - http://mirror.href.com/thestarman/asm/debug/debug.htm#NT
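To tie the N, R and W steps together, here is a hedged sketch of what the save sequence might look like at the DEBUG prompt. It assumes the small program above (with the int 20 added) occupies offsets 0100h to 0111h, i.e. 12h bytes; the exact length and the file name TEST.COM are illustrative only and are not taken from the original screenshots.

```
-n TEST.COM
-r bx
BX 0000
:0
-r cx
CX 0000
:12
-w
-q
```

The r bx and r cx lines set the high and low words of the byte count to 0 and 12h respectively, so w writes 12h bytes starting at CS:0100 into TEST.COM, and q then leaves DEBUG.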

    Read the article

  • How to change Matlab program for solving equation with finite element method?

    - by DSblizzard
    I don't know is this question more related to mathematics or programming and I'm absolute newbie in Matlab. Program FEM_50 applies the finite element method to Laplace's equation -Uxx(x, y) - Uyy(x, y) = F(x, y) in Omega. How to change it to apply FEM to equation -Uxx(x, y) - Uyy(x, y) + U(x, y) = F(x, y)? At this page: http://sc.fsu.edu/~burkardt/m_src/fem_50/fem_50.html additional code files in case you need them. function fem_50 ( ) %% FEM_50 applies the finite element method to Laplace's equation. % % Discussion: % % FEM_50 is a set of MATLAB routines to apply the finite % element method to solving Laplace's equation in an arbitrary % region, using about 50 lines of MATLAB code. % % FEM_50 is partly a demonstration, to show how little it % takes to implement the finite element method (at least using % every possible MATLAB shortcut.) The user supplies datafiles % that specify the geometry of the region and its arrangement % into triangular and quadrilateral elements, and the location % and type of the boundary conditions, which can be any mixture % of Neumann and Dirichlet. % % The unknown state variable U(x,y) is assumed to satisfy % Laplace's equation: % -Uxx(x,y) - Uyy(x,y) = F(x,y) in Omega % with Dirichlet boundary conditions % U(x,y) = U_D(x,y) on Gamma_D % and Neumann boundary conditions on the outward normal derivative: % Un(x,y) = G(x,y) on Gamma_N % If Gamma designates the boundary of the region Omega, % then we presume that % Gamma = Gamma_D + Gamma_N % but the user is free to determine which boundary conditions to % apply. Note, however, that the problem will generally be singular % unless at least one Dirichlet boundary condition is specified. % % The code uses piecewise linear basis functions for triangular elements, % and piecewise isoparametric bilinear basis functions for quadrilateral % elements. % % The user is required to supply a number of data files and MATLAB % functions that specify the location of nodes, the grouping of nodes % into elements, the location and value of boundary conditions, and % the right hand side function in Laplace's equation. Note that the % fact that the geometry is completely up to the user means that % just about any two dimensional region can be handled, with arbitrary % shape, including holes and islands. % clear % % Read the nodal coordinate data file. % load coordinates.dat; % % Read the triangular element data file. % load elements3.dat; % % Read the quadrilateral element data file. % load elements4.dat; % % Read the Neumann boundary condition data file. % I THINK the purpose of the EVAL command is to create an empty NEUMANN array % if no Neumann file is found. % eval ( 'load neumann.dat;', 'neumann=[];' ); % % Read the Dirichlet boundary condition data file. % load dirichlet.dat; A = sparse ( size(coordinates,1), size(coordinates,1) ); b = sparse ( size(coordinates,1), 1 ); % % Assembly. % for j = 1 : size(elements3,1) A(elements3(j,:),elements3(j,:)) = A(elements3(j,:),elements3(j,:)) ... + stima3(coordinates(elements3(j,:),:)); end for j = 1 : size(elements4,1) A(elements4(j,:),elements4(j,:)) = A(elements4(j,:),elements4(j,:)) ... + stima4(coordinates(elements4(j,:),:)); end % % Volume Forces. % for j = 1 : size(elements3,1) b(elements3(j,:)) = b(elements3(j,:)) ... + det( [1,1,1; coordinates(elements3(j,:),:)'] ) * ... f(sum(coordinates(elements3(j,:),:))/3)/6; end for j = 1 : size(elements4,1) b(elements4(j,:)) = b(elements4(j,:)) ... + det([1,1,1; coordinates(elements4(j,1:3),:)'] ) * ... 
f(sum(coordinates(elements4(j,:),:))/4)/4; end % % Neumann conditions. % if ( ~isempty(neumann) ) for j = 1 : size(neumann,1) b(neumann(j,:)) = b(neumann(j,:)) + ... norm(coordinates(neumann(j,1),:) - coordinates(neumann(j,2),:)) * ... g(sum(coordinates(neumann(j,:),:))/2)/2; end end % % Determine which nodes are associated with Dirichlet conditions. % Assign the corresponding entries of U, and adjust the right hand side. % u = sparse ( size(coordinates,1), 1 ); BoundNodes = unique ( dirichlet ); u(BoundNodes) = u_d ( coordinates(BoundNodes,:) ); b = b - A * u; % % Compute the solution by solving A * U = B for the remaining unknown values of U. % FreeNodes = setdiff ( 1:size(coordinates,1), BoundNodes ); u(FreeNodes) = A(FreeNodes,FreeNodes) \ b(FreeNodes); % % Graphic representation. % show ( elements3, elements4, coordinates, full ( u ) ); return end
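A hedged sketch of one common way to handle the extra U(x,y) term: for -Uxx - Uyy + U = F, the assembly gains an element mass matrix in addition to the stiffness matrix. The snippet below covers only the linear triangles (elements3); the quadrilateral elements would need an analogous bilinear mass matrix, which is omitted here, and the variable names simply mirror FEM_50's.

```matlab
% Assumed modification, not part of FEM_50: add the consistent mass matrix
% of each linear triangle to A, turning -Uxx - Uyy = F into -Uxx - Uyy + U = F.
for j = 1 : size(elements3,1)
  P = coordinates(elements3(j,:),:);          % 3x2 vertex coordinates
  area = det([1,1,1; P']) / 2;                % triangle area
  M = (area/12) * [2 1 1; 1 2 1; 1 1 2];      % P1 consistent mass matrix
  A(elements3(j,:),elements3(j,:)) = A(elements3(j,:),elements3(j,:)) + M;
end
```

The rest of the script (volume forces, boundary conditions and the solve) would stay as it is, since only the bilinear form changes.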

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect. On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph. I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect didn't relate to thread creation time, but rather that placement was a function of process membership. At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10. 
So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. Lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more. To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code. Remarks: It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry. The scheduler is work-conserving. The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly. Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. 
That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD) which packs threads instead of spreading them out in an attempt to be able to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching. If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
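For readers who want to experiment with the tunable mentioned above, here is a minimal /etc/system sketch. The numeric value is purely an illustrative assumption, not a tested or recommended setting, and since /etc/system is read at boot a reboot is needed; consult mpo_update_tunables() in the sources for the actual scale and default.

```
* Illustrative only -- the value is an assumption, not a recommendation.
* Lower values make the kernel spread a process's threads to other
* lgroups sooner instead of packing them onto the primordial lgroup.
set lgrp_expand_proc_thresh=1
```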

    Read the article

  • OpenGL font rendering

    - by DEElekgolo
    I am trying to make an openGL text rendering class using FreeType. I was originally following this code but it doesn't seem to work out for me. I get nothing reguardless of what parameters I put for Draw(). class Font { public: Font() { if (FT_Init_FreeType(&ftLibrary)) { printf("Could not initialize FreeType library\n"); return; } glGenBuffers(1,&iVerts); } bool Load(std::string sFont, unsigned int Size = 12.0f) { if (FT_New_Face(ftLibrary,sFont.c_str(),0,&ftFace)) { printf("Could not open font: %s\n",sFont.c_str()); return true; } iSize = Size; FT_Set_Pixel_Sizes(ftFace,0,(int)iSize); FT_GlyphSlot gGlyph = ftFace->glyph; //Generating the texture atlas. //Rather than some amazing rectangular packing method, I'm just going //to have one long strip of letters with the height being that of the font size. int width = 0; int height = 0; for (int i = 32; i < 128; i++) { if (FT_Load_Char(ftFace,i,FT_LOAD_RENDER)) { printf("Error rendering letter %c for font %s.\n",i,sFont.c_str()); } width += gGlyph->bitmap.width; height += std::max(height,gGlyph->bitmap.rows); } //Generate the openGL texture glActiveTexture(GL_TEXTURE0); //if I texture exists then delete it. iTexture ? glDeleteBuffers(1,&iTexture):0; glGenTextures(1,&iTexture); glBindTexture(GL_TEXTURE_2D,iTexture); glPixelStorei(GL_UNPACK_ALIGNMENT,1); glTexImage2D(GL_TEXTURE_2D,0,GL_ALPHA,width,height,0,GL_ALPHA,GL_UNSIGNED_BYTE,0); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); //load the glyphs and set the glyph data int x = 0; for (int i = 32; i < 128; i++) { if (FT_Load_Char(ftFace,i,FT_LOAD_RENDER)) { //if it cant load the character continue; } //load the glyph map into the texture glTexSubImage2D(GL_TEXTURE_2D,0,x,0, gGlyph->bitmap.width, gGlyph->bitmap.rows, GL_ALPHA, GL_UNSIGNED_BYTE, gGlyph->bitmap.buffer); //move the "pen" down the strip x += gGlyph->bitmap.width; chars[i].ax = (float)(gGlyph->advance.x >> 6); chars[i].ay = (float)(gGlyph->advance.y >> 6); chars[i].bw = (float)gGlyph->bitmap.width; chars[i].bh = (float)gGlyph->bitmap.rows; chars[i].bl = (float)gGlyph->bitmap_left; chars[i].bt = (float)gGlyph->bitmap_top; chars[i].tx = (float)x/width; } printf("Loaded font: %s\n",sFont.c_str()); return true; } void Draw(std::string sString,Vector2f vPos = Vector2f(0,0),Vector2f vScale = Vector2f(1,1)) { struct pPoint { pPoint() { x = y = s = t = 0; } pPoint(float a,float b,float c,float d) { x = a; y = b; s = c; t = d; } float x,y; float s,t; }; pPoint* cCoordinates = new pPoint[6*sString.length()]; int n = 0; for (const char *p = sString.c_str(); *p; p++) { float x2 = vPos.x() + chars[*p].bl * vScale.x(); float y2 = -vPos.y() - chars[*p].bt * vScale.y(); float w = chars[*p].bw * vScale.x(); float h = chars[*p].bh * vScale.y(); float x = vPos.x() + chars[*p].ax * vScale.x(); float y = vPos.y() + chars[*p].ay * vScale.y(); //skip characters with no pixels //still advances though if (!w || !h) { continue; } //triangle one cCoordinates[n++] = pPoint( x2 , -y2 , chars[*p].tx , 0); cCoordinates[n++] = pPoint( x2+w , -y2 , chars[*p].tx + chars[*p].bw / w , 0); cCoordinates[n++] = pPoint( x2 , -y2-h , chars[*p].tx , chars[*p].bh / h); cCoordinates[n++] = pPoint( x2+w , -y2 , chars[*p].tx + chars[*p].bw / w , 0); cCoordinates[n++] = pPoint( x2 , -y2-h , chars[*p].tx , chars[*p].bh / h); cCoordinates[n++] = pPoint( x2+w 
, -y2-h , chars[*p].tx + chars[*p].bw / w , chars[*p].bh / h); } glBindBuffer(GL_ARRAY_BUFFER,iVerts); glBindBuffer(GL_TEXTURE_2D,iTexture); //Vertices glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(2,GL_FLOAT,sizeof(pPoint),&cCoordinates[0].x); //TexCoord 0 glClientActiveTexture(GL_TEXTURE0); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glTexCoordPointer(2,GL_FLOAT,sizeof(pPoint),&cCoordinates[0].s); glCullFace(GL_NONE); glBufferData(GL_ARRAY_BUFFER,6*sString.length(),cCoordinates,GL_DYNAMIC_DRAW); glDrawArrays(GL_TRIANGLES,0,n); glCullFace(GL_BACK); glBindBuffer(GL_ARRAY_BUFFER,0); glBindBuffer(GL_TEXTURE_2D,0); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); } ~Font() { glDeleteBuffers(1,&iVerts); glDeleteBuffers(1,&iTexture); } private: unsigned int iSize; //openGL texture atlas unsigned int iTexture; //openGL geometry buffer; unsigned int iVerts; FT_Library ftLibrary; FT_Face ftFace; struct Character { float ax,ay;//Advance float bw,bh;//bitmap size float bl,bt;//bitmap left and top float tx; } chars[128]; };
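Two calls in Draw() look suspect and may explain why nothing renders; the lines below are a hedged sketch against the class as posted, not a verified fix. glBufferData expects a size in bytes rather than a vertex count, and GL_TEXTURE_2D is not a buffer target, so the texture should be bound with glBindTexture. Note also that while a VBO is bound to GL_ARRAY_BUFFER, the gl*Pointer calls treat their pointer argument as a byte offset into that buffer, so the buffer should be filled before the pointers are set and the pointers given offsets rather than client addresses; and glCullFace(GL_NONE) is not a valid argument, culling is disabled with glDisable(GL_CULL_FACE).

```cpp
// Hedged corrections to the posted Draw() body:
glBindBuffer(GL_ARRAY_BUFFER, iVerts);
glBufferData(GL_ARRAY_BUFFER,
             6 * sString.length() * sizeof(pPoint),   // size in bytes, not vertex count
             cCoordinates, GL_DYNAMIC_DRAW);
glBindTexture(GL_TEXTURE_2D, iTexture);               // not glBindBuffer(GL_TEXTURE_2D, ...)
```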

    Read the article

  • How do I make a jumping dolphin rotate realistically?

    - by Johnny
    I want to program a dolphin that jumps and rotates like a real dolphin. Jumping is not the problem, but I don't know how to make the rotation. At the moment, my dolphin rotates a little weird. But I want that it rotates like a real dolphin does. How can I improve the rotation? public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; Texture2D image, water; float Gravity = 5.0F; float Acceleration = 20.0F; Vector2 Position = new Vector2(1200,720); Vector2 Velocity; float rotation = 0; SpriteEffects flip; Vector2 Speed = new Vector2(0, 0); public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; graphics.PreferredBackBufferWidth = 1280; graphics.PreferredBackBufferHeight = 720; } protected override void Initialize() { base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); image = Content.Load<Texture2D>("cartoondolphin"); water = Content.Load<Texture2D>("background"); flip = SpriteEffects.None; } protected override void Update(GameTime gameTime) { float VelocityX = 0f; float VelocityY = 0f; float time = (float)gameTime.ElapsedGameTime.TotalSeconds; KeyboardState kbState = Keyboard.GetState(); if(kbState.IsKeyDown(Keys.Left)) { rotation = 0; flip = SpriteEffects.None; VelocityX += -5f; } if(kbState.IsKeyDown(Keys.Right)) { rotation = 0; flip = SpriteEffects.FlipHorizontally; VelocityX += 5f; } // jump if the dolphin is under water if(Position.Y >= 670) { if (kbState.IsKeyDown(Keys.A)) { if (flip == SpriteEffects.None) { rotation += 0.01f; VelocityY += 40f; } else { rotation -= 0.01f; VelocityY += 40f; } } } else { if (flip == SpriteEffects.None) { rotation -= 0.01f; VelocityY += -10f; } else { rotation += 0.01f; VelocityY += -10f; } } float deltaY = 0; float deltaX = 0; deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds; deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration; deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration; Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY); Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds; Velocity.X = 0; if (Position.Y + image.Height/2 > graphics.PreferredBackBufferHeight) Position.Y = graphics.PreferredBackBufferHeight - image.Height/2; base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight -100, graphics.PreferredBackBufferWidth, 100), Color.White); spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1); spriteBatch.End(); base.Draw(gameTime); } } I changed my code a little. But I still have some trouble with the rotation. Here's the entire code. The dolphin looks at the wrong direction if I press the left or right key. For example, it looks down if I press the left key. What is wrong with the rotation? At the beginning, the dolphin looks at the left side, but after I pressed a key it just looks down or up. I deleted the "rotation += 0.01f;" lines in the code. Is that correct? 
public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; Texture2D image, water; float Gravity = 5.0F; float Acceleration = 20.0F; Vector2 Position = new Vector2(1200,720); Vector2 Velocity; float rotation = 0; SpriteEffects flip; Vector2 Speed = new Vector2(0, 0); Vector2 prevPos; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; graphics.PreferredBackBufferWidth = 1280; graphics.PreferredBackBufferHeight = 720; } protected override void Initialize() { base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); image = Content.Load<Texture2D>("cartoondolphin"); water = Content.Load<Texture2D>("background"); flip = SpriteEffects.None; } protected override void Update(GameTime gameTime) { float VelocityX = 0f; float VelocityY = 0f; float time = (float)gameTime.ElapsedGameTime.TotalSeconds; KeyboardState kbState = Keyboard.GetState(); if(kbState.IsKeyDown(Keys.Left)) { flip = SpriteEffects.None; VelocityX += -5f; } if(kbState.IsKeyDown(Keys.Right)) { flip = SpriteEffects.FlipHorizontally; VelocityX += 5f; } rotation = (float)Math.Atan2(Position.X - prevPos.X, Position.Y - prevPos.Y); prevPos = Position; // jump if the dolphin is under water if(Position.Y >= 670) { if (kbState.IsKeyDown(Keys.A)) { if (flip == SpriteEffects.None) { VelocityY += 40f; } else { VelocityY += 40f; } } } else { if (flip == SpriteEffects.None) { VelocityY += -10f; } else { VelocityY += -10f; } } float deltaY = 0; float deltaX = 0; deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds; deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration; deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration; Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY); Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds; Velocity.X = 0; if (Position.Y + image.Height/2 > graphics.PreferredBackBufferHeight) Position.Y = graphics.PreferredBackBufferHeight - image.Height/2; base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight -100, graphics.PreferredBackBufferWidth, 100), Color.White); spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1); spriteBatch.End(); base.Draw(gameTime); } }
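One hedged sketch of a velocity-based rotation, assuming the un-rotated sprite artwork faces to the right (if it faces left, as the question suggests, an extra MathHelper.Pi or a horizontal flip is needed): Math.Atan2 takes its arguments as (y, x), so swapping them, as in the second listing, points the sprite the wrong way.

```csharp
// Sketch only: derive the facing angle from frame-to-frame movement.
Vector2 delta = Position - prevPos;
if (delta.LengthSquared() > 0.0001f)              // keep the old angle when barely moving
    rotation = (float)Math.Atan2(delta.Y, delta.X);
prevPos = Position;
```

Smoothing the angle over a few frames (for example with MathHelper.Lerp toward the target) usually makes the jump arc look more dolphin-like.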

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of it's type I've built so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on Github and the extension is on the webstore. The basic structure is as follows: index.html <html> <head> <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles --> <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles --> </head> <body class="unloaded"> <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: --> <div class="tab-container" tabindex="-1"> <!-- Tab nav --> </div> <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>'s need valid HTML. --> <template id="template.toolbar"> <!-- Template content --> </template> <!-- Templates end --> <!-- Plugins --> <script type="text/javascript" src="js/plugins.js"></script> <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. --> <script type="text/javascript" src="js/widgets.js"></script> <!-- This contains the main application JS. --> <script type="text/javascript" src="js/script.js"></script> </body> </html> widgets.js (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retreived by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in "store" name: "Weather", // Widget name interval: 300000, // Refresh interval nicename: "weather", // HTML and JS safe widget name sizes: ["tiny", "small", "medium"], // Available widget sizes desc: "Short widget description", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: "list", nicename: "location", label: "Location(s)", placeholder: "Enter a location and press Enter" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: "medium", location: ["San Francisco, CA"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. 
} }; script.js (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or instantly need things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == "object") { processStorage(); } Objectives of the restructure Memory usage: Chrome apparently has a memory leak in extensions, they're trying to fix it but memory still keeps on getting increased every time the page is loaded. The app also uses a lot on its own. Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. Module interdependence: Right now modules call each other a lot, AFAIK that's not good at all since any change you make to one module could affect countless others. Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way. How I think I should do it The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init). Modules should each go in their own file, I'm thinking there should be a set of core files that all modules can rely on and call directly and everything else should be event based. Widget structure should be kept largely the same, but maybe they should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled. Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? 
Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in Vanilla JS? Also, can I expect to improve this with a proper restructure or is my current code about as good as can be expected?
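As a hedged sketch of the event-based wiring described in the plan, a shared Backbone.Events dispatcher lets modules announce state instead of calling each other directly; every module and event name below is an assumption for illustration, not part of the existing code.

```javascript
// Minimal sketch, assuming Backbone and Underscore are loaded.
var dispatcher = _.extend({}, Backbone.Events);

// Render-side modules subscribe instead of being called directly:
dispatcher.on("storage:loaded", function (settings) {
    Templates.compile();      // hypothetical template pre-compilation
    iChrome.init(settings);   // hypothetical entry point
});

// The storage module announces readiness rather than calling other modules:
dispatcher.trigger("storage:loaded", { tabs: [] });   // placeholder payload
```

Keeping module interdependence down to a shared event bus like this also makes it easier to wrap a single widget's handler in try/catch and remove it when it fails, which addresses the fault-tolerance objective.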

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly. Configuring the size of the pool in the data source is somewhere between an art and a science – this article will try to move it closer to science. From the beginning, WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn’t want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections. The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach. When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time. There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn’t need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations. There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If work load is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. 
When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment. Shrinking is an effective technique for clearing the pool when connections are not in use.  In addition to the obvious reason that there times where the workload is lighter,  there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable.  There are also some data source features where the connection has state and cannot be used again unless the state matches the request.  Examples of this are identity based pooling where the connection has a particular owner and XA affinity where the connection is associated with a particular RAC node.  At this point, WLS does not re-purpose (discard/replace) connections and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed. So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity.  Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common.  The applications should be written to only reserve and close connections as needed but multiple statements, if needed, should be done in one reservation (don’t get/close more often than necessary).  This means that the size of the pool is likely to be significantly smaller then the number of users.   If possible, you can pick a size and see how it performs under simulated or real load.  There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used.  In general, you want the size to be big enough so that you never run out of connections but no bigger.   It will need to deal with spikes in usage, which is where shrinking after the spike is important.  Of course, the database capacity also has a big influence on the decision since it’s important not to overload the database machine.  Planning also needs to happen if you are running in a Multi-Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down.  For XA affinity, additional headroom is also recommended.  In summary, setting initial and maximum capacity to be the same may be simple but there are many other factors that may be important in making the decision about sizing.
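As a rough illustration of the capacity knobs discussed above, a fragment of a WLS JDBC data source descriptor might look like the following. The element names reflect the usual weblogic-jdbc module schema, but treat the exact values and the shrink setting as assumptions to adapt to your own workload, not recommendations.

```xml
<!-- Sketch only: initial 0 defers connection creation past boot,
     min-capacity keeps a small floor after shrinking,
     max-capacity caps growth at the expected concurrent demand. -->
<jdbc-connection-pool-params>
  <initial-capacity>0</initial-capacity>
  <min-capacity>5</min-capacity>
  <max-capacity>25</max-capacity>
  <shrink-frequency-seconds>900</shrink-frequency-seconds>
</jdbc-connection-pool-params>
```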

    Read the article

  • Pagination links do not work after first page

    - by TheStack
    Hello, I am trying to fix this pagination script. It seems when I click on the pagination links [1][2][3][4]or[5] , it doesn't work. It just shows the first page and when clicking on the next numbers nothing happens. I hoping someone can see something in the script that I can not see. The main page looks like this (pagination.php): <?php include_once('generate_pagination.php'); ?> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script> <script type="text/javascript" src="jquery_pagination.js"></script> <div id="loading" ></div> <div id="content" data-page="1"></div> <ul id="pagination"> <?php generate_pagination() ?> </ul> <br /> <br /> <a href="#" class="category" id="marketing">Marketing</a> <a href="#" class="category" id="automotive">Automotive</a> <a href="#" class="category" id="sports">Sports</a> Then, generate_pagination.php: <?php function generate_pagination($sql) { include_once('config.php'); $per_page = 3; //Calculating no of pages $result = mysql_query($sql); $count = mysql_fetch_row($result); $pages = ceil($count[0]/$per_page); //Pagination Numbers for($i=1; $i<=$pages; $i++) { echo '<li class="page_numbers" id="'.$i.'">'.$i.'</li>'; } } $ids=$_GET['ids']; generate_pagination("SELECT COUNT(*) FROM explore WHERE category='$ids'"); ?> Here is the jquery file (jquery_pagination.js): $(document).ready(function(){ //Display Loading Image function Display_Load() { $("#loading").fadeIn(900,0); $("#loading").html("<img src='bigLoader.gif' />"); } //Hide Loading Image function Hide_Load() { $("#loading").fadeOut('slow'); }; //Default Starting Page Results $("#pagination li:first").css({'color' : '#FF0084'}).css({'border' : 'none'}); Display_Load(); $("#content").load("pagination_data.php?page=1", Hide_Load()); //Pagination Click $("#pagination li").click(function(){ Display_Load(); //CSS Styles $("#pagination li") .css({'border' : 'solid #dddddd 1px'}) .css({'color' : '#0063DC'}); $(this) .css({'color' : '#FF0084'}) .css({'border' : 'none'}); //Loading Data var pageNum = this.id; $("#content").load("pagination_data.php?page=" + pageNum, function(){ Hide_Load(); $(this).attr('data-page', pageNum); }); }); // Editing below. // Sort content Marketing $("a.category").click(function() { Display_Load(); var this_id = $(this).attr('id'); $.get("pagination.php", { category: this.id }, function(data){ //Load your results into the page var pageNum = $('#content').attr('data-page'); $("#pagination").load('generate_pagination.php?category=' + pageNum +'&ids='+ this_id ); $("#content").load("filter_marketing.php?page=" + pageNum +'&id='+ this_id, Hide_Load()); }); }); }); Lastly, filter_marketing.php (when a user clicks the filter link buttons): <?php include('config.php'); $per_page = 3; if(count($_GET)>0) { if($_GET['page']!=''){ $page=$_GET['page']; } if($_GET['id']!=''){ $id=$_GET['id']; } } $page= ($_GET['page']!='') ? $_GET['page']: false; $id= ($_GET['id']!='') ? $_GET['id']: false; $start = ($page-1)*$per_page; if($page && $id){ $sql = "SELECT * FROM explore WHERE category='$id' ORDER BY category LIMIT $start,$per_page"; } else { die('Error: missing parameters. Id= '.$id.' 
and page= '.$page); } $result = mysql_query($sql); ?> <table width="800px"> <?php while($row = mysql_fetch_array($result)) { $msg_id=$row['id']; $message=$row['site_description']; $site_price=$row['site_price']; ?> <tr> <td><?php echo $msg_id; ?></td> <td><?php echo $message; ?></td> <td><?php echo $site_price; ?></td> </tr> <?php } ?> </table> So, if anyone sees where the problem is occurring and can help rid of the problem, that would be great, Thank you.
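One hedged observation rather than a confirmed fix: $("#pagination li").click(...) only attaches handlers to the <li> elements that exist at document-ready, so any <li> created later (for example after the #pagination list is reloaded by the category click) has no handler. With jQuery 1.4.1 a delegated binding such as .live() keeps working after the list is rewritten; the sketch below mirrors the existing handler and leaves the CSS styling lines out. Separately, note that pagination.php calls generate_pagination() with no argument even though the function expects an $sql string, which may prevent the page numbers from being generated as intended in the first place.

```javascript
// Sketch: delegated click handling that survives #pagination being reloaded.
$(".page_numbers").live("click", function () {
    Display_Load();
    var pageNum = this.id;
    $("#content").load("pagination_data.php?page=" + pageNum, function () {
        Hide_Load();
        $(this).attr("data-page", pageNum);
    });
});
```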

    Read the article

  • android listview loadmore button with xml parsing

    - by user1780331
Hi, I have developed a ListView with a Load More button using XML parsing in an Android application, and I am facing a problem: when my XML feed is empty (the last page), how can I hide the Load More button? I have used the code below. public class CustomizedListView extends Activity { // All static variables private String URL = "http://dev.mmm.com/xctesting/xcart444pro/retrieve.php?page=1"; // XML node keys static final String KEY_SONG = "Order"; static final String KEY_TITLE = "orderid"; static final String KEY_DATE = "date"; static final String KEY_ARTIST = "payment_method"; int current_page = 1; ListView lv; LazyAdapter adapter; ProgressDialog pDialog; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); lv = (ListView) findViewById(R.id.list); ArrayList<HashMap<String, String>> songsList = new ArrayList<HashMap<String, String>>(); XMLParser parser = new XMLParser(); String xml = parser.getXmlFromUrl(URL); // getting XML from URL Document doc = parser.getDomElement(xml); // getting DOM element NodeList nl = doc.getElementsByTagName(KEY_SONG); // looping through all song nodes <song> for (int i = 0; i < nl.getLength(); i++) { // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); Element e = (Element) nl.item(i); // adding each child node to HashMap key => value map.put(KEY_ID, parser.getValue(e, KEY_ID)); map.put(KEY_TITLE, parser.getValue(e, KEY_TITLE)); map.put(KEY_ARTIST, parser.getValue(e, KEY_ARTIST)); songsList.add(map); } Button btnLoadMore = new Button(this); btnLoadMore.setText("Load More"); btnLoadMore.setBackgroundResource(R.drawable.lgnbttn); // Adding Load More button to lisview at bottom lv.addFooterView(btnLoadMore); // Getting adapter adapter = new LazyAdapter(this, songsList); lv.setAdapter(adapter); btnLoadMore.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View arg0) { // Starting a new async task new loadMoreListView().execute(); } }); } private class loadMoreListView extends AsyncTask<Void, Void, Void> { @Override protected void onPreExecute() { // Showing progress dialog before sending http request pDialog = new ProgressDialog( CustomizedListView.this); pDialog.setMessage("Please wait.."); //pDialog.setIndeterminateDrawable(getResources().getDrawable(R.drawable.my_progress_indeterminate)); pDialog.setIndeterminate(true); pDialog.setCancelable(false); pDialog.show(); pDialog.setContentView(R.layout.custom_dialog); } protected Void doInBackground(Void... 
unused) { current_page += 1; // Next page request URL = "http://dev.mmm.com/xctesting/xcart444pro/retrieve.php?page=" + current_page; ArrayList<HashMap<String, String>> songsList = new ArrayList<HashMap<String, String>>(); XMLParser parser = new XMLParser(); String xml = parser.getXmlFromUrl(URL); // getting XML from URL Document doc = parser.getDomElement(xml); // getting DOM element NodeList nl = doc.getElementsByTagName(KEY_SONG); NodeList nl = doc.getElementsByTagName(KEY_SONG); if (nl.getLength() == 0) { btnLoadMore.setVisibility(View.GONE); pDialog.dismiss(); } else // looping through all item nodes <item> for (int i = 0; i < nl.getLength(); i++) { // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); Element e = (Element) nl.item(i); // adding each child node to HashMap key => value map.put(KEY_ID, parser.getValue(e, KEY_ID)); map.put(KEY_TITLE, parser.getValue(e, KEY_TITLE)); map.put(KEY_ARTIST, parser.getValue(e, KEY_ARTIST)); songsList.add(map); } // get listview current position - used to maintain scroll position int currentPosition = lv.getFirstVisiblePosition(); // Appending new data to menuItems ArrayList adapter = new LazyAdapter( CustomizedListView.this, songsList); lv.setAdapter(adapter); lv.setSelectionFromTop(currentPosition + 1, 0); } }); return (null); } protected void onPostExecute(Void unused) { // closing progress dialog pDialog.dismiss(); } } } EDIT: When I run the app, the ListView displays 4 items per page, and my last page has only 1 item. Please refer to this screenshot: http://screencast.com/t/fTl4FETd On the last page, when I click the Load More button, the next (empty) page is requested and the button should be hidden. Please refer to this screenshot: http://screencast.com/t/wyG5zdp3r I check for an empty page with this condition: if (nl.getLength() == 0) { btnLoadMore.setVisibility(View.GONE); pDialog.dismiss(); } How can I write the condition for the last page so that the button is hidden? How can I check the condition? Some example code would be appreciated.
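A hedged sketch of one way to arrange the check, assuming btnLoadMore is promoted to a field of the activity so the task can reach it: doInBackground runs off the UI thread and should not touch views, so the empty-page result is recorded there and the button is hidden in onPostExecute.

```java
// Sketch only: record the result in doInBackground, touch views in onPostExecute.
private boolean lastPageEmpty = false;

@Override
protected Void doInBackground(Void... unused) {
    // ... after parsing the feed for the requested page into nl ...
    lastPageEmpty = (nl.getLength() == 0);
    return null;
}

@Override
protected void onPostExecute(Void unused) {
    if (lastPageEmpty) {
        btnLoadMore.setVisibility(View.GONE);   // safe here: runs on the UI thread
    }
    pDialog.dismiss();
}
```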

    Read the article

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross references the type names from the user interface with the SDK class and also the finder for how to get a handle on the object or objects. The volume of content in the SDK might seem a little ominous, there is a lot there, but there is a general pattern to the SDK that I will describe here. Also I will illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in groovy, you can simply run from the groovy console in ODI 11.1.1.6. Entry to the Platform   Object Finder SDK odiInstance odiInstance (groovy variable for console) OdiInstance Topology Objects Object Finder SDK Technology IOdiTechnologyFinder OdiTechnology Context IOdiContextFinder OdiContext Logical Schema IOdiLogicalSchemaFinder OdiLogicalSchema Data Server IOdiDataServerFinder OdiDataServer Physical Schema IOdiPhysicalSchemaFinder OdiPhysicalSchema Logical Schema to Physical Mapping IOdiContextualSchemaMappingFinder OdiContextualSchemaMapping Logical Agent IOdiLogicalAgentFinder OdiLogicalAgent Physical Agent IOdiPhysicalAgentFinder OdiPhysicalAgent Logical Agent to Physical Mapping IOdiContextualAgentMappingFinder OdiContextualAgentMapping Master Repository IOdiMasterRepositoryInfoFinder OdiMasterRepositoryInfo Work Repository IOdiWorkRepositoryInfoFinder OdiWorkRepositoryInfo Project Objects Object Finder SDK Project IOdiProjectFinder OdiProject Folder IOdiFolderFinder OdiFolder Interface IOdiInterfaceFinder OdiInterface Package IOdiPackageFinder OdiPackage Procedure IOdiUserProcedureFinder OdiUserProcedure User Function IOdiUserFunctionFinder OdiUserFunction Variable IOdiVariableFinder OdiVariable Sequence IOdiSequenceFinder OdiSequence KM IOdiKMFinder OdiKM Load Plans and Scenarios   Object Finder SDK Load Plan IOdiLoadPlanFinder OdiLoadPlan Load Plan and Scenario Folder IOdiScenarioFolderFinder OdiScenarioFolder Model Objects Object Finder SDK Model IOdiModelFinder OdiModel Sub Model IOdiSubModel OdiSubModel DataStore IOdiDataStoreFinder OdiDataStore Column IOdiColumnFinder OdiColumn Key IOdiKeyFinder OdiKey Condition IOdiConditionFinder OdiCondition Operator Objects   Object Finder SDK Session Folder IOdiSessionFolderFinder OdiSessionFolder Session IOdiSessionFinder OdiSession Schedule OdiSchedule How to Create an Object? Here is a simple example to create a project, it uses IOdiEntityManager.persist to persist the object. import oracle.odi.domain.project.OdiProject; import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition; txnDef = new DefaultTransactionDefinition(); tm = odiInstance.getTransactionManager() txnStatus = tm.getTransaction(txnDef) project = new OdiProject("Project For Demo", "PROJECT_DEMO") odiInstance.getTransactionalEntityManager().persist(project) tm.commit(txnStatus) How to Update an Object? This update example uses the methods on the OdiProject object to change the project’s name that was created above, it is then persisted. 
import oracle.odi.domain.project.OdiProject; import oracle.odi.domain.project.finder.IOdiProjectFinder; import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition; txnDef = new DefaultTransactionDefinition(); tm = odiInstance.getTransactionManager() txnStatus = tm.getTransaction(txnDef) prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class); project = prjFinder.findByCode("PROJECT_DEMO"); project.setName("A Demo Project"); odiInstance.getTransactionalEntityManager().persist(project) tm.commit(txnStatus) How to Delete an Object? Here is a simple example to delete all of the sessions, it uses IOdiEntityManager.remove to delete the object. import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder; import oracle.odi.domain.runtime.session.OdiSession; import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition; txnDef = new DefaultTransactionDefinition(); tm = odiInstance.getTransactionManager() txnStatus = tm.getTransaction(txnDef) sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class); sessc = sessFinder.findAll(); sessItr = sessc.iterator() while (sessItr.hasNext()) {   sess = (OdiSession) sessItr.next()   odiInstance.getTransactionalEntityManager().remove(sess) } tm.commit(txnStatus) This isn't an all encompassing summary of the SDK, but covers a lot of the content to give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, its best to look at postings such as the interface builder posting here. Have fun, happy coding!
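For completeness, here is a hedged read example in the same groovy style, using the finder pattern from the index above; it is read-only, so no transaction is opened, and getCode()/getName() are assumed accessors on OdiProject rather than calls verified here.

```groovy
import oracle.odi.domain.project.OdiProject;
import oracle.odi.domain.project.finder.IOdiProjectFinder;

prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
prjFinder.findAll().each { prj ->
  println prj.getCode() + " : " + prj.getName()
}
```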

    Read the article

  • Dynamic Class Inheritance For PHP

    - by VirtuosiMedia
    I have a situation where I think I might need dynamic class inheritance in PHP 5.3, but the idea doesn't sit well and I'm looking for a different design pattern to solve my problem if it's possible. Use Case I have a set of DB abstraction layer classes that dynamically compiles SQL queries, with one DAL class for each DB type (MySQL, MsSQL, Oracle, etc.). Each table in the database has its own class that extends the appropriate DAL class. The idea is that you interact with the table classes, but never directly use the DAL class. If you want to support a different DB type for your app, you don't need to rewrite any queries or even any code, you simply change a setting that swaps one DAL class out for another...and that's it. To give you a better idea of how this is used, you can take a look at the DAL class, the table classes, and how they are used on this StackExchange Code Review page. To really understand what I'm trying to do, please take a look at my implementation first before suggesting a solution. Issues The strategy that I had used previously was to have all of the DAL classes share the same class name. This eliminated autoloading, so I had to manually load the appropriate DAL class in a switch statement. However, this approach presents some problems for testing and documentation purposes, so I'd like to find a different way to solve the problem of loading the correct DAL class more elegantly. Update to clarify the issue The problem basically boils down to inconsistencies in the class name (pre-PHP 5.3) or class namespace (PHP 5.3) and its location in the directory structure. At this point, all of my DAL classes have the same name, DBObject, but reside in different folders, MySQL, Oracle, etc. My table classes all extend DBObject, but which DBObject they extend varies depending on which one has been loaded. Basically, I'm trying to have my cake and eat it too. The table classes act as a stable API and extend a dynamic backend, the DAL (DBObject) classes. It works great, but I outsmarted myself and because of the inconsistencies with the class names and their locations, I can't autoload the DBObject, which makes running unit tests and generating API docs impossible for the DBObject classes because the tests and docs rely on auto-loading. Just loading the appropriate DBObject into memory using a factory method won't work because there will be times when I need to load multiple DBObjects for testing. Because the classes currently share a name, this causes a class is already defined error. I can make exceptions for the DBObjects in my test code, obviously, but I'm looking for something a little less hacky as there may future instances where something similar would need to be done. Solutions? Worst case scenario, I can continue my current strategy, but I don't like it very much, especially as I'll soon be converting my code to PHP 5.3. I suspect that I can use some sort of dynamic inheritance via either namespaces (preferred) or a dynamic class extension, but I haven't been able to find good examples of this implemented in the wild. In your answers, please suggest either an alternate pattern that would work for this use case or an example of dynamic inheritance done right. Please assume PHP 5.3 with namespaced code. Any code examples are greatly encouraged. The preferred constraints for the solution are: DAL class can be autoloaded. DAL classes don't share the same exact same namespace, but share the same class name. 
As an example, I would prefer to use classes named DbObject that use namespaces like Vm\Db\MySql and Vm\Db\Oracle. Table classes don't have to be rewritten with a change in DB type. The appropriate DB type is determined via a single setting only. That setting is the only thing that should need to change to interchange DB types. Ideally, the setting check should occur only once per page load, but I'm flexible on that.
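One hedged possibility that fits the stated constraints is to resolve the inheritance target once with class_alias (available in PHP 5.3) before any table class is autoloaded; the namespaces and the setting name below are assumptions for illustration only.

```php
<?php
// Sketch only: alias the configured driver's DbObject to one fixed name,
// so every table class extends the alias while the real target stays swappable.
$driver = 'MySql'; // the single setting; 'Oracle', 'MsSql', ... are assumed variants
class_alias('Vm\\Db\\' . $driver . '\\DbObject', 'Vm\\Db\\DbObject');

// A table class is then written once against the alias:
// class User extends \Vm\Db\DbObject { /* table-specific methods */ }
```

Because the alias is created at runtime, unit tests and doc generators can still autoload Vm\Db\MySql\DbObject and Vm\Db\Oracle\DbObject side by side under their distinct namespaces, which avoids the "class already defined" problem described above.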

    Read the article

< Previous Page | 170 171 172 173 174 175 176 177 178 179 180 181  | Next Page >