
  • SQLyog alternative

    - by JohnM2
    Do you know of any app similar to SQLyog (a MySQL GUI admin tool)? In particular, I am looking for opinions from users who have used SQLyog and found "something better".

  • Access Denied Error in PagesListCPVEventReceiver post SharePoint SP2 upgrade

    - by JD
    I am seeing the following errors from one of the SharePoint web front ends after the SP2 upgrade. Has anyone else seen this error or found a solution?

        Event Type: Error
        Event Source: Windows SharePoint Services 3
        Event Category: General
        Event ID: 6875
        Date: 2009-10-27
        Time: 13:09:57
        User: N/A
        Computer: XXXXXXX
        Description: Error loading and running event receiver Microsoft.SharePoint.Publishing.PagesListCPVEventReceiver in Microsoft.SharePoint.Publishing, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c. Additional information is below. : Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

  • Can't find my.cnf on my VPS

    - by dan
    OK, I am a total noob when it comes to servers (but eager to learn). I am renting a VPS so I can host a Magento store. The VPS is running CentOS 5 with DirectAdmin and Xen virtualization. I've read a bit about how to optimize Magento, and one suggestion is to edit my.cnf. However, I can't find this file anywhere from within DirectAdmin. I also can't connect to the VPS via console: my host provides console access via their website, but it won't let me enter my root password, it just hangs. (How do people normally connect to their Linux VPS?) Please help. Thank you.
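
    For what it's worth, a couple of hedged commands for locating the file, assuming you can get a shell on the VPS (people normally connect to a Linux VPS over SSH, e.g. with PuTTY from Windows, rather than through the provider's web console):

        # Ask the MySQL tools which config files they read, and in what order:
        mysql --help | grep -A1 "Default options"

        # Or search the usual places for the file itself:
        find /etc /usr/local -name my.cnf 2>/dev/null

    If no my.cnf exists at all, MySQL is simply running on its built-in defaults, and you can create /etc/my.cnf yourself and restart the server.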

  • How to troubleshoot performance issues of PHP, MySQL and generic I/O

    - by jbx
    I have a WordPress-based website running on shared hosting. Its response time is quite decent (around 2 s to retrieve the HTML page and 5 s to load all the resources). I was planning to move it to a dedicated virtual server (Ubuntu 12.04 LTS), which should theoretically improve things and make them more consistent, given it's not shared. However, I observed severe performance degradation, with the page taking 10 seconds to be generated. I ruled out network issues by editing /etc/hosts on the server and mapping the domain to 127.0.0.1. I used the Apache load tester ab to get the HTML, so JS, CSS and images are all excluded. It still took 10 seconds. I have ZPanel installed on the server, which also uses MySQL, and its pages come up quite fast (1.5 s), as does phpMyAdmin. Performing some queries on the WordPress database directly through phpMyAdmin returns them quite fast too, with query times in the 10 to 30 millisecond region. Memory is also sufficient, with only 800 MB of the 1 GB physical memory in use, so it doesn't seem to be a swap issue either. I have also installed APC to try to improve the PHP performance, but it didn't have any effect. What else should I look for? What could be causing this degradation in performance? Could it be some kind of I/O issue, since I am running on a cloud-based virtual server? I wish to be able to raise the issue with my provider, but without actual diagnostic data to show, I am afraid they will just blame my application.

    UPDATE with sar output (every second) when I did an HTTP request:

        02:31:29    CPU   %user   %nice   %system   %iowait   %steal    %idle
        02:31:30    all    0.00    0.00      0.00      0.00     0.00   100.00
        02:31:31    all    2.22    0.00      2.22      0.00     0.00    95.56
        02:31:32    all   41.67    0.00      6.25      0.00     2.08    50.00
        02:31:33    all   86.36    0.00     13.64      0.00     0.00     0.00
        02:31:34    all   75.00    0.00     25.00      0.00     0.00     0.00
        02:31:35    all   93.18    0.00      6.82      0.00     0.00     0.00
        02:31:36    all   90.70    0.00      9.30      0.00     0.00     0.00
        02:31:37    all   71.05    0.00      0.00      0.00     0.00    28.95
        02:31:38    all   14.89    0.00     10.64      0.00     2.13    72.34
        02:31:39    all    2.56    0.00      0.00      0.00     0.00    97.44
        02:31:40    all    0.00    0.00      0.00      0.00     0.00   100.00
        02:31:41    all    0.00    0.00      0.00      0.00     0.00   100.00

    My suspicion that this is an I/O-related issue also comes from the fact that a caching plugin I use to reduce the number of queries to the database, by precompiling PHP pages, is actually making things worse instead of better. It seems that file access is making things worse.
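
    As a hedged aside, the sar capture above shows %iowait pinned at zero while %user spikes past 90%, which points more toward CPU time spent in PHP than toward disk. Still, if you want I/O numbers to show the provider, something like the following would gather them (assuming the sysstat package is installed):

        # Per-device I/O utilisation, refreshed every second while you repeat the ab test:
        iostat -x 1

        # Crude sequential write test; conv=fdatasync makes dd wait until the data hits disk:
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=512 conv=fdatasync
        rm /tmp/ddtest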

  • Camera Projection back Into 3D world, offset error

    - by Anthony
    I'm using XNA to simulate a robot in a 3D world and then do image analysis on what the camera sees. I have my camera looking down in front of the direction that the robot is going, and I have the robot detecting white pixels. I'm trying to take the white pixels that it finds and project them back into the 3D world so that I can see if it is actually detecting the correct pixels. I almost have it working, but there is an offset between where the white is in the world and where I put my orange triangles (which represent what the robot thinks is white).

        /// <summary>
        /// Takes a bool map and makes vertex positions based on the map.
        /// </summary>
        /// <param name="c">The bool map</param>
        private void ProjectBoolMapOnGroundAnthony2(bool[,] c)
        {
            float triangleSize = 0.04f;
            // Point of interest in world (W) coordinate system.
            Vector3 pointOfInterest_W = Vector3.Zero;
            // Point of interest in robot (R) coordinate system.
            Vector3 pointOfInterest_R = Vector3.Zero;
            // alpha is the angle from the robot camera to where it is looking in the center.
            //double alpha = Math.Atan(1.8f / 1);

            // View matrix determined by the camera position, target, and up direction.
            Matrix View = ((SimulationMain)Game).mainRobot.robotCameraView.View;
            // Projection matrix determined by the field of view (Pi/4), aspect ratio,
            // nearest visible plane (1), and farthest visible plane (1200).
            Matrix Projection = ((SimulationMain)Game).mainRobot.robotCameraView.Projection;
            // World matrix: how real-world coordinates differ from the camera's rendering.
            Matrix World = ((SimulationMain)Game).mainRobot.robotCameraView.World;

            Plane groundPlan = new Plane(Vector3.UnitZ, 0.0f);

            for (int x = 0; x < this.screenWidth; x++)
            {
                for (int y = 0; y < this.screenHeight; )
                {
                    if (c[x, y] == true && this.count1D < 62000)
                    {
                        int j = 1;
                        Vector3 nearPlanePoint = Game.GraphicsDevice.Viewport.Unproject(new Vector3(x, y, 0), Projection, View, World);
                        Vector3 farPlanePoint = Game.GraphicsDevice.Viewport.Unproject(new Vector3(x, y, 1), Projection, View, World);

                        // NOTE: Ray's second argument is a *direction*; passing the far point
                        // itself rather than (farPlanePoint - nearPlanePoint) traces a different
                        // line, and could well be the source of the offset.
                        Ray ray = new Ray(nearPlanePoint, farPlanePoint);
                        pointOfInterest_W = ray.Position + ray.Direction * (float)ray.Intersects(groundPlan);

                        // First vertex, offset in -X/-Y.
                        this.vertexArray2[this.count1D + 0].Position.X = pointOfInterest_W.X - triangleSize;
                        this.vertexArray2[this.count1D + 0].Position.Y = pointOfInterest_W.Y - triangleSize * j;
                        this.vertexArray2[this.count1D + 0].Position.Z = pointOfInterest_W.Z;
                        this.vertexArray2[this.count1D + 0].Color = Color.DarkOrange;

                        // Second vertex, offset in +Y.
                        this.vertexArray2[this.count1D + 1].Position.X = pointOfInterest_W.X;
                        this.vertexArray2[this.count1D + 1].Position.Y = pointOfInterest_W.Y + triangleSize * j;
                        this.vertexArray2[this.count1D + 1].Position.Z = pointOfInterest_W.Z;
                        this.vertexArray2[this.count1D + 1].Color = Color.Red;

                        // Third vertex, offset in +X/-Y.
                        this.vertexArray2[this.count1D + 2].Position.X = pointOfInterest_W.X + triangleSize;
                        this.vertexArray2[this.count1D + 2].Position.Y = pointOfInterest_W.Y - triangleSize * j;
                        this.vertexArray2[this.count1D + 2].Position.Z = pointOfInterest_W.Z;
                        this.vertexArray2[this.count1D + 2].Color = Color.Orange;

                        this.count1D += 3;
                        y += j;
                    }
                    else
                    {
                        y++;
                    }
                }
            }
        }

    The world is a grass texture with lines on it. The world plane has normal (0,0,1). Any ideas on why there is an offset? Thanks for the help, Anthony G.

  • Logging Apparently Not Working - Can't Configure Replication

    - by square_eyes
    I'm using a tutorial to set up a master/slave (single slave) setup. When I run the command show master status; I get Empty set (0.00 sec). I have done a lot of research and believe that the log file I have configured isn't being used properly. The master is a Windows server, and I am administering it through MySQL Workbench. The log file (log-bin.log) I have configured in the options exists and is accessible by 'System'. But when I open it as a text file it doesn't contain any data, so I'm assuming it's not being used properly. How can I get logging working so I can finish setting up this replication? Edit: Purely for reference, here is the tutorial. I followed all the steps and double-checked everything up until show master status.
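
    For reference, an empty set from show master status almost always means binary logging is not actually enabled. Below is a hedged sketch of the my.ini settings usually required on a Windows master (exact paths and names vary by install), followed by the checks to run after restarting the MySQL service:

        # my.ini, [mysqld] section (restart the MySQL service afterwards):
        [mysqld]
        log-bin   = log-bin     # base name for the binary log files
        server-id = 1           # must be unique across master and slaves

        -- then, from any client session:
        SHOW VARIABLES LIKE 'log_bin';   -- should now report ON
        SHOW MASTER STATUS;              -- should list a log file name and position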

  • Can't include JavaScript variable in PHP mysql_query call?

    - by user198895
    I want the PHP mysql_query call to retrieve user values based on the Agency drop-down value, but I can't get this to work. Am I unable to include the JavaScript variable agency.value in PHP?

        <script type="text/javascript">
        var agency = document.getElementById("agency");
        var user = document.getElementById("user");
        agency.onchange = onchange; // change options when agency is changed

        function onchange() {
            <?php include 'dbConnect.php'; ?>
            <?php $q = mysql_query("select id as UserID, CONCAT(LastName, ', ' , FirstName) as UserName from users where Agency = " . ?>agency.value<?php . " order by UserName"); ?>
            option_html = "<option value=0 selected>- All Users -</option>";
            <?php while ($row1 = mysql_fetch_array($q)) { ?>
            if (agency.value == 0 || agency.value == '<?php echo $row1[AgencyID]; ?>') {
                option_html += "<option value=<?php echo $row1[UserID]; ?>><?php echo $row1[UserName]; ?></option>";
            }
            <?php } ?>
            user.innerHTML = option_html;
        }
        </script>
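
    For what it's worth, this can't work as written: the PHP runs once on the server, before the page ever reaches the browser, so a JavaScript value like agency.value does not exist yet when mysql_query is called. The usual pattern is to fetch the options from a separate script when the drop-down changes. A minimal hedged sketch follows (the get_users.php endpoint is hypothetical, and a parameterised mysqli query stands in for the deprecated mysql_* calls):

        // client side: reload the user list whenever the agency changes
        document.getElementById("agency").onchange = function () {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", "get_users.php?agency=" + encodeURIComponent(this.value));
            xhr.onload = function () {
                document.getElementById("user").innerHTML = xhr.responseText;
            };
            xhr.send();
        };

        <?php
        // get_users.php (hypothetical): returns the <option> list for one agency
        include 'dbConnect.php';   // assumed to provide a mysqli handle $db
        $stmt = $db->prepare("SELECT id, CONCAT(LastName, ', ', FirstName) AS UserName
                              FROM users WHERE Agency = ? ORDER BY UserName");
        $stmt->bind_param("i", $_GET['agency']);
        $stmt->execute();
        $result = $stmt->get_result();   // requires the mysqlnd driver
        echo "<option value=0 selected>- All Users -</option>";
        while ($row = $result->fetch_assoc()) {
            printf('<option value="%d">%s</option>', $row['id'], htmlspecialchars($row['UserName']));
        }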

  • Error while installing OpenOffice

    - by Maulik Shah
    I was installing OpenOffice using these commands:

        sudo add-apt-repository ppa:upubuntu-com/office
        sudo apt-get update
        sudo apt-get install openoffice

    After downloading some 20 MB, my internet connection was interrupted. When it tried to install again, it says the following:

        Err http://ppa.launchpad.net/upubuntu-com/office/ubuntu/ precise/main openoffice amd64 3.4~precise
          Connection failed
        Failed to fetch http://ppa.launchpad.net/upubuntu-com/office/ubuntu/pool/main/o/openoffice/openoffice_3.4~precise_amd64.deb  Connection failed
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    How do I solve these errors?
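
    A hedged first step, assuming the PPA is reachable again: clear the partial downloads out of the apt cache and retry with the flag the error message itself suggests.

        sudo apt-get clean                              # discard partially downloaded .deb files
        sudo apt-get update
        sudo apt-get install --fix-missing openoffice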

  • Error installing Arch Linux

    - by Garethj94
    So, I am trying to install Arch Linux on my Acer Aspire 4830TG, but I keep running into problems. Some background: I am trying to install Arch from a USB stick, and I got the ISO image via BitTorrent. I am also trying to install it alongside Windows 8 (which is already installed). When I boot into Arch Linux I get this error:

        :: Mounting '/dev/disk/by-label/ARCH_201212' to '/run/archiso/bootmnt'
        Waiting 30 seconds for device /dev/disk/by-label/ARCH_201212 ...
        ERROR: '/dev/disk/by-label/ARCH_201212' device did not show up after 30 seconds...
        Falling back to interactive prompt
        You can try to fix the problem manually, log out when you are finished
        sh: can't access tty; job control turned off

    I know that it will work if I run it in a virtual machine, but whenever I try to install it on my laptop I keep getting this error. And since you can't register for the Arch forums without an Arch terminal to run their captcha command, I can't ask this on their forums.
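
    One hedged thing to check: the Arch ISO locates its own media by the filesystem label ARCH_201212, so the USB stick must carry exactly that label. Writing the image raw with dd preserves it, whereas some USB-writing tools do not. Assuming the stick is /dev/sdb (double-check with lsblk first; this overwrites the whole device):

        lsblk                                                    # identify the USB stick first
        sudo dd if=archlinux-2012.12.01-dual.iso of=/dev/sdb bs=4M && sync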

  • Partial upgrade error

    - by Dan
    This is an issue I have googled a lot, and I have tried a lot of fixes, but none of them really worked. At some point (I can't remember when/how) my update system sort of broke, and since then it is always complaining: "Not all updates can be installed, run a Partial Upgrade". If I click on Partial Upgrade, I get the following result. But running apt-get install -f does not fix anything, and at the end I always get the following message. Funny thing is that my apt-get system works perfectly from the console. I can update my system through apt-get update, apt-get upgrade, etc. So... how can I fix the graphical interface? I understand that my apt-get system is not broken, but somehow its GUI is. Any thoughts? THANKS! P.S.: I have already tried sudo dpkg --configure -a and sudo apt-get autoremove.
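
    Since apt works fine from the console, one hedged workaround is to perform from there what the GUI's Partial Upgrade would do, and then reinstall the update GUI itself:

        sudo apt-get update
        sudo apt-get dist-upgrade                         # roughly the console equivalent of a partial upgrade
        sudo apt-get install --reinstall update-manager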

  • Column locking in InnoDB?

    - by mingyeow
    I know this sounds weird, but apparently one of my columns is locked.

        select * from table where type_id = 1 and updated_at < '2010-03-14' limit 1;
        select * from table where type_id = 3 and updated_at < '2010-03-14' limit 10;

    The first one does not finish running, while the second one completes smoothly. The only difference is the type_id. Thanks in advance for your help; I have an urgent data job to finish, and this problem is driving me crazy.
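
    A hedged note: InnoDB locks index rows, not columns, so the likelier story is that another uncommitted transaction holds locks on the rows with type_id = 1. Assuming MySQL 5.1 with the InnoDB plugin or newer, the following shows who is blocking whom:

        -- transactions currently open, and which ones wait on which locks:
        SELECT * FROM information_schema.INNODB_TRX;
        SELECT * FROM information_schema.INNODB_LOCK_WAITS;

        -- or the classic monitor dump; see its TRANSACTIONS section for lock waits:
        SHOW ENGINE INNODB STATUS;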

  • I get the following error when I open any program in vi, please help me

    - by Adithya Chakilam
    E325: ATTENTION
    Found a swap file by the name ".ptr.c.swp"
              owned by: honey   dated: Sat Oct 26 12:49:38 2013
             file name: ~honey/ptr.c
              modified: YES
             user name: honey   host name: honey-desktop
            process ID: 2542
    While opening file "ptr.c"
                 dated: Sun Nov 3 09:05:49 2013
          NEWER than swap file!

    (1) Another program may be editing the same file. If this is the case,
        be careful not to end up with two different instances of the same
        file when making changes. Quit, or continue with caution.
    (2) An edit session for this file crashed.
        If this is the case, use ":recover" or "vim -r ptr.c"
        to recover the changes (see ":help recovery").
        If you did this already, delete the swap file ".ptr.c.swp"
        to avoid this message.

    "ptr.c" 9L, 136C
    Press ENTER or type command to continue
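
    For reference, a hedged recovery sequence, assuming the earlier edit session simply crashed and nothing else has ptr.c open right now:

        cd ~              # the directory holding ptr.c and its swap file
        vim -r ptr.c      # recover the unsaved changes, then save with :wq
        rm .ptr.c.swp     # remove the stale swap file so the warning stops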

  • Windows 7 backup to 3TB Seagate external drive got 0x8078002A error

    - by Zhang18
    I'm using the Windows 7 Backup and Restore utility to create a system image and personal file backup on an external Seagate GoFlex 3 TB disk. I got the following error:

        One of the backup files could not be created.
        Details: The request could not be performed because of an I/O device error.
        Error code: 0x8078002A

    I searched all over the internet and found these two related discussions (discussion 1 and discussion 2). Note the first discussion is for a Western Digital drive, which seems to have a solution in the WD Quick Formatter tool. But I downloaded that software and it cannot detect my Seagate drive. The second discussion is directly relevant, but it does not offer a solution. I've spent days on this and am at a loss... Please help if you know what to do to make it work! Thank you!

  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries are keyed on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like:

        db.users[32] = something

    An array of 500K users is not that big a load for RAM. Pros are:

    - no problematic asynchronicity (instant results)
    - easy export/import
    - dealing with the database like with a native object, LITERALLY

    P.S. and considerations: Would it be faster or slower to do collection[3] than db.query("select ...? I am going to store it as a file/s. There is only ONE application/process accessing this data, and the code is executed line by line; please don't elaborate about locking. Please don't answer with database propositions but with reasons to use an external DB over a native array/object; I have experience with a few databases, so that's not the issue. What I am building is a client/gateway/server(s) game. The gateway deals with all user data: processing, authenticating, writing statistics, etc. No other part of the software needs direct access to this data/database.
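
    For concreteness, a minimal hedged sketch of the approach described (Node.js assumed), with periodic snapshots to disk; the standing trade-off is that a crash loses everything since the last snapshot, which is exactly what an external database's durability guarantees buy you:

        var fs = require("fs");

        var db = { users: {} };   // the whole "database" is a native object

        // load the previous snapshot, if any
        try { db.users = JSON.parse(fs.readFileSync("users.json", "utf8")); } catch (e) {}

        // instant, synchronous "queries"
        db.users[32] = { name: "rezoner", stats: { wins: 10 } };

        // snapshot every 30 seconds: write a temp file, then rename it into place
        setInterval(function () {
            fs.writeFile("users.json.tmp", JSON.stringify(db.users), function (err) {
                if (!err) fs.rename("users.json.tmp", "users.json", function () {});
            });
        }, 30000);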

  • Game login server

    - by Tar
    I have a setup like this: a website, with a database. This database houses accounts and all their details: password hashes/salts/join dates/etc. What I want to do is use this same database as our game database. The game will be on servers in the United States, while the web server and its database are in the Netherlands. I know there is a big problem with using remote SQL, and we really don't want to do that, as operation of the website is just as vital as operation of the game server. We had one solution that involved sending account details to another database hosted on the same server as the game server, but that was incredibly unreliable, because if the website was down, no new people could register to play the game. The solution that we want is a login server that is used to check credentials for everything. Is this possible/viable, and could anyone point me in the right direction? So, in summation:

    - 2 game servers
    - 1 web server
    - 1 central database used for authorization

    The game accounts and website accounts need to be one and the same.

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS; but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command-line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it is simple, easy to learn and deploy, and uses a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support creating the storage client from a storage connection string. But Microsoft is working on making them easier to use, and on adding more features and functionality.

    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Error trying to run a Python program

    - by Ana
    Hello guys, I'm actually new to the Terminal and Python. I just started following a Python tutorial on my Ubuntu machine, and I've reached a part where it asks me to save a .py file and run it in the Terminal. Only when I try to type in the Terminal

        $ python egotrip.py

    I get

        $: command not found

    Then I try to type it inside python and I get

        File "<stdin>", line 1
            python egotrip.py
                          ^
        SyntaxError: invalid syntax

    But I mean, all the names are correct :( Then I have also tried

        /home/anacah/Desktop/python/egotrip.py

    And I get

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        NameError: name 'home' is not defined

    What am I doing wrong? :( Can someone please help?
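
    For what it's worth, the leading $ in the tutorial stands for the Terminal prompt and is not meant to be typed; and python egotrip.py is a shell command, so typing it inside an already-running Python interpreter produces the syntax error shown above. From a fresh Terminal:

        cd ~/Desktop/python     # the folder where egotrip.py was saved
        python egotrip.py       # note: no leading "$"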

  • Visual Query Builder

    - by johnnyArt
    If been using "dbForge Query Builder" lately and I'm gotten used to the ease of building and testing a query, specially for those complex ones with inner joins, aliases and multiple conditionals. The expiry date of the trial is about to come, and while wanting to remain on the legal side for this I'd rather not pay the 50USD it costs (although I must say it's pretty cheap for what it does). So my question would be: Are there any free alternatives to replace this visual query builder? I've failed to find any and fear that my only two options are paying for it, or going to the dark side.

  • Errors related to python version added to error log when I start apache2

    - by Jean-Nicolas Boulay Desjardins
    When I start Apache I am getting these errors:

        [Tue Jun 14 02:28:58 2011] [error] python_init: Python version mismatch, expected '2.6.5', found '2.6.6'.
        [Tue Jun 14 02:28:58 2011] [error] python_init: Python executable found '/usr/bin/python'.
        [Tue Jun 14 02:28:58 2011] [error] python_init: Python path being used '/usr/lib/python2.6/:/usr/lib/python2.6/plat-linux2:/usr/lib/python2.6/lib-tk:/usr/lib/python2.6/lib-old:/usr/lib/python2.6/lib-dynload'.
        [Tue Jun 14 02:28:58 2011] [notice] mod_python: Creating 8 session mutexes based on 150 max processes and 0 max threads.
        [Tue Jun 14 02:28:58 2011] [notice] mod_python: using mutex_directory /tmp
        [Tue Jun 14 02:28:58 2011] [notice] Apache/2.2.16 (Ubuntu) PHP/5.3.3-1ubuntu9.5 with Suhosin-Patch mod_python/3.3.1 Python/2.6.6 configured -- resuming normal operations

    I am using Ubuntu Server... Thanks in advance for any help.
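
    A hedged note: the mismatch means mod_python was built against Python 2.6.5 while /usr/bin/python is now 2.6.6, which typically happens after a Python point release. It is often harmless (note Apache still logs "resuming normal operations"), but making sure the module package is current keeps the two in step; on Ubuntu something like:

        sudo apt-get update
        sudo apt-get install libapache2-mod-python   # pulls a newer build if one exists
        sudo /etc/init.d/apache2 restart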

  • DNS error only in IE

    - by Le_Quack
    Our intranet page has stopped working on some machines/some user accounts. The error I am getting points to a DNS issue, but if I ping the site from the command line it responds fine. The error I'm getting in IE is:

        Error: The web filter could not find the address for the requested site
        Why are you seeing this: The system is unable to determine the IP address of intranet.example.com

    I'm not quite sure why it mentions the web filter, as there is a proxy exception for the intranet page, and if I run a trace route it doesn't go via the web proxy (filtering system). Finally, it isn't affecting everyone, just random users; and it doesn't affect those users on all the client machines they use. I have one user for whom it happens on any client they log onto, whereas for most it's just certain clients. It has even "fixed" itself for a few people. EDIT: hey Mikey, thanks for the fast response. Proxies are correct and automatic configuration is off (both via GPO).
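
    A hedged check worth running on an affected client: ping can resolve names through the hosts file or NetBIOS/WINS even when DNS proper fails, so compare it against nslookup, which queries the configured DNS server directly:

        nslookup intranet.example.com     # asks the DNS server directly
        ping intranet.example.com         # may succeed via hosts file or NetBIOS even if DNS fails
        ipconfig /displaydns              # inspect the client's DNS cache
        ipconfig /flushdns                # clear it if a stale or negative entry is suspected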

  • Asterisk Outgoing CDR Logging to MySQL

    - by user3295551
    Trying to utilize CDR logging (to MySQL) with custom fields. The problem I am facing occurs only when an outbound call is placed; during inbound calls I can log the custom field no problem. The reason I am having an issue is that the custom CDR field I need is a unique value for each user on the system.

    sip.conf:

        ...
        [sales_department](!)
        type=friend
        host=dynamic
        context=SalesAgents
        disallow=all
        allow=ulaw
        allow=alaw
        qualify=yes
        qualifyfreq=30

        ;; company sales agents:
        [11](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [12](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [13](sales_agent)
        secret=xxxxxx
        callerid="<...>"

        [14](sales_agent)
        secret=xxxxxx
        callerid="<...>"

    extensions.conf:

        [SalesAgents]
        include => Services

        ; Outbound calls
        exten => _1NXXNXXXXXX,1,Dial(SIP/${EXTEN}@myprovider)

        ; Inbound calls
        exten => 100,1,NoOp()
         same => n,Set(CDR(agent_id)=11)
         same => n,CELGenUserEvent(Custom Event)
         same => n,Dial(${11_1},25)
         same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
         same => n(unavail),VoiceMail(11@asterisk)
         same => n,Hangup()
         same => n(busy),VoiceMail(11@asterisk)
         same => n,Hangup()

        exten => 101,1,NoOp()
         same => n,Set(CDR(agent_id)=12)
         same => n,CELGenUserEvent(Custom Event)
         same => n,Dial(${12_1},25)
         same => n,GotoIf($["${DIALSTATUS}" = "BUSY"]?busy:unavail)
         same => n(unavail),VoiceMail(12@asterisk)
         same => n,Hangup()
         same => n(busy),VoiceMail(12@asterisk)
         same => n,Hangup()
        ...

    For the inbound section of the dialplan above, I am able to insert the custom CDR field (agent_id). But in the outbound section I have been stumped on how to tell the dialplan which agent_id is making the outbound call. My question: how do I take agent_id=11, agent_id=12, agent_id=13, agent_id=14, etc., and use that as a custom field for the CDR on outbound calls?
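
    One hedged approach for the outbound leg: each agent dials out from their own SIP peer, and with chan_sip the peer's name (11, 12, ...) is readable at call time through the CHANNEL(peername) function, so it can be stamped into the CDR without a per-agent Set() line (this assumes the agent_id being logged is the same as the SIP peer name):

        ; Outbound calls: record which peer placed the call
        exten => _1NXXNXXXXXX,1,Set(CDR(agent_id)=${CHANNEL(peername)})
         same => n,Dial(SIP/${EXTEN}@myprovider)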

  • VMWare Server 2 Install is Failing w/ Error 25032: "failed to customize windows logon process"

    - by Justin Searls
    VMware Server 2 install question here.* A straightforward question that would probably require a VMware expert to pull apart, given that Google has been totally worthless on this. On a patched Windows XP machine, any attempt to install VMware Server 2.0.1 results in failure just prior to completion (the progress bar is full, but I can tell the network adapter stuff hasn't been fired yet and most of the services haven't been installed). The error:

        Error 25032. Failed to customize Windows logon process (). Please contact your administrator.

    Upon dismissing the error, you're treated to:

        Warning 25033. Failed to remove Windows logon customization (VMGINA.DLL). Please contact your administrator.

    Clicking "OK" rolls back your installation. Killing the installer and hoping that it somehow leaves a working install behind was also unproductive.

    *I hope install troubleshooting isn't outside the purview of Server Fault; I'm typically an SO user.

  • SQL and web encoding problem

    - by Marki
    Guys, I've got an encoding problem, I believe. I have upgraded from phpBB2 to phpBB3. The old databases were in latin1; the new ones have utf8 encoding. Already during the upgrade process, some rows of the DB were only partly read into the new version because, as it turned out, of strange characters. When I use PHP's mb_convert_encoding() function to convert those strings to UTF-8, they end up e.g. as 0x0093, i.e. they must have been some kind of double quotes. Even after this conversion, they still show up as 0x0093 in the browser (the squares with 0093 in them when the browser does not know what to display). Can someone explain the problem here? I'm a little confused and afraid I don't see all the dependencies that need to work together to get the correct encodings and the correct display thereof...
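
    A hedged explanation: 0x93 is not printable latin1 at all (in latin1 the 0x80-0x9F range holds control characters), but it is the left curly double quote in Windows-1252, which the old posts almost certainly used. Converting from latin1 therefore yields the invisible control character U+0093, while converting from Windows-1252 yields a real quotation mark. A small sketch:

        <?php
        // bytes as they sit in the old "latin1" column: 0x93/0x94 are cp1252 curly quotes
        $broken = "\x93quoted\x94";

        // wrong: latin1 maps 0x93 to the control character U+0093
        $bad = mb_convert_encoding($broken, 'UTF-8', 'ISO-8859-1');

        // right: treat the source bytes as Windows-1252 instead
        $good = mb_convert_encoding($broken, 'UTF-8', 'Windows-1252');

        echo $good;   // prints "quoted" with proper curly quotation marks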
