Search Results

Search found 7152 results on 287 pages for 'bug fixing'.


  • Connections to IIS sometimes get stuck in CLOSE_WAIT state

    - by randomhuman
    Our application includes an ASP.NET web service that only needs to deal with a handful of clients. As such, the 10-incoming-connection limit of Windows XP Pro is generally not a problem. However, on one particular server, connections occasionally become stuck in the CLOSE_WAIT state. These connections build up over time, and eventually new client connections are refused because the maximum number of connections is used up. From my googling it sounds like a failure of the web service to properly close the connection can cause this problem, but as it works just fine on hundreds of other Windows XP Pro machines, I can't see it being a bug in our code. It also ran fine on the affected machine until some shenanigans on the part of the end user (I think they set about deleting duplicate files in order to reduce their disk usage, but they did not exactly come clean about it). What could the user have changed to introduce this problem? Is there any way I can force connections that are in CLOSE_WAIT to time out rather than letting them hang around? I have seen suggestions to reduce TcpTimedWaitDelay, but that only relates to the TIME_WAIT state, and changing it did not have any effect.
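    A diagnostic note, offered as a sketch: CLOSE_WAIT means the remote end has sent its FIN but the local application has not yet closed its own socket, so there is no TCP timer to tune (which is why TcpTimedWaitDelay, a TIME_WAIT setting, changes nothing). Finding out which process owns the stuck sockets narrows down what changed on that machine:

    ```bat
    rem list stuck connections with their owning process IDs (XP's netstat supports -o)
    netstat -ano | find "CLOSE_WAIT"
    rem map a PID from that output to a process name (1234 is a placeholder)
    tasklist /FI "PID eq 1234"
    ```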

    Read the article

  • [SOLVED] How do I restore my audio after uninstalling Ventrilo?

    - by Marcx
    Hi, I have a Dell Studio 1555, bought in September, with Windows 7 64-bit Professional on it. The audio device works properly while listening to audio content (from disk or the internet). When I use Ventrilo, the audio from other people sounds good and I hear their voices clearly. When I use any other VoIP program, like TeamSpeak 3, MSN or Skype, I hear a distorted voice and it's impossible to understand anything... Anyway, everything worked fine until I installed Ventrilo, but removing it didn't solve my problem.

    Update: Here's a sample of how I hear other people's voices: Audio Sample. After some tests, the desktop has the same problem as well (I tried TeamSpeak 3). Here are some details on my laptop and desktop:

    Laptop (Dell Studio 1555):
    - Core 2 Duo P8600 2.4 GHz
    - 4 GB RAM, dual channel
    - ATI HD 4570, 512 MB dedicated (up to 2048)
    - IDT High Definition Audio

    Desktop:
    - Motherboard: Asus P5KPL-AM
    - Dual Core CPU E5200 2.50 GHz
    - 2x2 GB PC6400, dual channel
    - ATI Radeon HD 4650, 512 MB
    - VIA High Definition Audio

    Both computers run Windows 7 Professional 64-bit. So how do I restore my audio?

    SOLVED: The problem was in the router firmware. A bug recognized VoIP traffic as a DoS attack and the router garbled every packet... I installed the newest firmware and everything is fine :)

    Read the article

  • File exists but is unreadable by PHP

    - by Aron
    More than once I have run into this issue: I have a cache file that is automatically generated by PHP and contains some generated PHP code. However, for some reason the file cannot be read and parsed by PHP. These are the symptoms:

    - The file actually exists on the file system. Using Terminal you can navigate to the file, view its contents (which are fully intact), etc.
    - PHP file_exists() will report that the file exists... which is correct, since it does :)
    - Then I include() the file. But when actually parsing the file, PHP will just consider it an empty file. No fatal error, just no PHP code actually executed. Again, it's as if the file were completely empty (which, I assure you, it is not)...
    - It is not a permissions issue. Permissions are set as needed.

    Workaround: open the file in Terminal via 'nano' or some other text editor and just save it to disk again. After that (despite no changes to the content) PHP will run it just fine...

    As a clarification, I'd like to add that this happens rarely, but frequently enough to be a problem. And even when it does, there are hundreds of other similar files on the same system that work without a problem... If this were an issue affecting only my own scripts, I would consider that there must be a bug in the way I generate the PHP code. But no, the issue has occurred more than once when deploying to a server (usually from a Beanstalk repository via FTP). The issue has been present on various servers, Debian and Ubuntu, running Zend Community Server. Any ideas? One that crossed my mind was opcode caching (part of Zend Server CE)... could it be that an empty version of the file is cached if it is requested while the write operation is still in progress?
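    If the opcode cache theory is right, the usual defense is to make the cache write atomic, so no reader ever sees a partially written file. A minimal sketch of the idea (the file names and generator script are hypothetical): write to a temporary file on the same filesystem, then rename over the target, since rename() is atomic on POSIX filesystems.

    ```sh
    # generate into a temp file, then atomically swap it into place;
    # a reader (or the opcode cache) sees either the old file or the
    # new one, never a half-written one
    php generate_cache.php > /var/www/cache/resource.php.tmp \
      && mv /var/www/cache/resource.php.tmp /var/www/cache/resource.php
    ```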

    Read the article

  • How to use non-free drivers during a Debian install

    - by blokeley
    I'm trying to install Debian stable using UNetbootin. The install process fails with "network autoconfiguration failed", probably due to the ethernet driver not working. My Lenovo U350 has a Broadcom BCM57780 which does not seem to be supported out of the box: there are various bug reports here, here and here, but I don't know if the fix has made it into Debian (6) stable. One discussion says that you have to use an ethernet driver from the firmware-linux-nonfree package. I'm not sure this is correct, because the BCM57780 is not in the list of drivers in firmware-linux-nonfree. The specific question tree is:

    - Is the BCM57780 supported in Debian stable?
    - If so, what could be wrong? Should I install Debian unstable instead?
    - If not, do I need to use firmware-linux-nonfree during installation and, if so, how do I do this?

    Please note: I've used Ubuntu and Debian loads in the past, but please post line-by-line guidance rather than some cryptic abbreviation of any instructions. Thanks in advance for any help.

    Updates:
    - Debian stable with non-free drivers did not work.
    - Debian unstable (free drivers only) did not work.
    - Tried loading firmware-iwlwifi_0.28_all.deb from another USB stick to get wireless working rather than the BCM57780. The .deb file was found, but the network configuration still failed!

    That's it, I'm giving up. Unfortunately I'll use Ubuntu, even though the Unity user interface will be very unstable for the next couple of years :(
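    For anyone retracing these steps, a sketch of the load-firmware-from-USB route. The Debian installer offers to load missing firmware from a firmware/ directory at the root of removable media; the package version and URL below are assumptions modeled on the firmware-iwlwifi_0.28_all.deb mentioned above, so check packages.debian.org for the current ones:

    ```sh
    # on another machine, fetch the non-free firmware package and put it
    # where the installer looks for it on a second USB stick
    wget http://ftp.debian.org/debian/pool/non-free/f/firmware-nonfree/firmware-linux-nonfree_0.28_all.deb
    mkdir /media/usb/firmware
    cp firmware-linux-nonfree_0.28_all.deb /media/usb/firmware/
    ```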

    Read the article

  • How can I import tasks into Project that have already been started?

    - by unknown
    I am writing a feature to import tasks from an online bug tracking / project management tool into Microsoft Project, primarily for resource leveling. Currently I am importing all tasks as Fixed Work, and giving each an assignment to a single resource for 100% of the work. The duration is then calculated dynamically, which at import time is equal to the amount of work. However, I am not a project manager by any means, and am having difficulty working out how to get the start dates to calculate correctly; I've never used Project either. I am using Schedule From Start and setting the Project Start Date to the date the contract was signed / work was approved. However, this can be in the past, and I do not want current tasks scheduled from that date. Should it be today, then? Another problem I have is with tasks that were already started. I have remaining work set, and I was placing a constraint on them to be started on the day the work was first applied. However, the remaining work for the task would then be scheduled from that date, which is sometimes in the past. Using task constraints, a project start date, and whatever other settings are available to me that I don't know about, what is the correct way to have the tasks scheduled?

    Read the article

  • GNOME 3 application icons disappeared

    - by robin.koch
    I was editing my main menu settings (unchecking items inside a submenu) when the window froze and all entries on the left side disappeared (Office, System, Settings, Games, ...). I didn't think much of it, but when I restarted my computer, all application entries in my menu and my favorites (the quick-launch bar on the left side) were gone. When I go to Activities - Applications I just see the "All" entry, without any items to click on. ~/.config/menu/gnome-applications.menu is an empty file, and ~/.config/menu/gnome-settings.menu has the following content:

    ```xml
    <!DOCTYPE Menu PUBLIC '-//freedesktop//DTD Menu 1.0//EN'
     'http://standards.freedesktop.org/menu-spec/menu-1.0.dtd'>
    <Menu>
        <Name>Desktop</Name>
        <MergeFile type="parent">/etc/xdg/menus/gnome-settings.menu</MergeFile>
    </Menu>
    ```

    I also looked into the files under /etc/xdg/menus. They look like template files, without any reference to actually installed programs. I assume that a bug deleted all my menu settings. Is there any way to restore at least the default menu? Or are there any other places to look for my old configuration?
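    In case it helps, a hedged sketch of the usual first recovery step: /etc/xdg/menus holds the system-wide defaults, so moving the (apparently corrupted) user-level overrides aside and logging out and back in should make GNOME fall back to the stock menu. The path is the one found above; back it up rather than deleting it:

    ```sh
    # park the broken user menu overrides; GNOME falls back to the
    # system defaults in /etc/xdg/menus on the next login
    mv ~/.config/menu ~/.config/menu.bak
    ```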

    Read the article

  • Nagios: turn off service checks/display on down hosts

    - by Alien Life Form
    I want to tweak Nagios in such a way that all checking stops (with services not displayed, or displayed as unknown) for any down node. Said differently, I only want to see one alert for a down host instead of 1 (down) + n (one for every service). Note that I am interested in service display/status, not only in turning off notifications. Rationale: we use the Nagios Firefox/Chrome plugin to monitor status, and Nagios' behavior is too noisy, giving readings like these (because every node has 20 services):

    3 down, 1 unreachable, 4 warnings, 87 critical

    This means that the 7 critical services on up nodes (where the problem is in the service) are swamped in a slab of red services which are critical only because they sit on a node that's down/unreachable. What I'd rather like to see is:

    3 down, 1 unreachable, 80 unknown, 4 warnings, 7 critical

    Or even:

    3 down, 1 unreachable, 4 warnings, 7 critical

    I have looked at service dependencies, but I did not find a way to describe "make all services on an alive host dependent on the status of the host check". I found the problem discussed here, where one of the participants thought it was a Nagios bug, and here, where one of the participants thought it was "as designed". As things are, I am just interested in the effect, much less in the design philosophy. Note that this Nagios is checking hundreds of nodes, so the maintainability of the solution is also important. TIA and cheers.
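    For reference, the nearest standard construct is a servicedependency keyed off a "host alive" marker service. This is only a sketch with placeholder host and service names, and it mainly suppresses checks and notifications; whether the status display changes as hoped depends on the Nagios version:

    ```
    define servicedependency {
        host_name                       node01           ; placeholder: the "marker" host
        service_description             PING             ; marker check standing in for host health
        dependent_host_name             node01
        dependent_service_description   HTTP,SSH,DISK    ; the services to silence
        execution_failure_criteria      c,u              ; skip checks while the marker is CRITICAL/UNKNOWN
        notification_failure_criteria   c,u              ; and mute their notifications
    }
    ```

    With hundreds of nodes this would need to be generated from a template rather than written by hand.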

    Read the article

  • Reduce size of MP4

    - by testing
    I have an MP4 file with a length of 22:44. Here are the details:

    Video:
    - width: 720 px
    - height: 404 px
    - data bitrate: 1022 kbit/s
    - overall bitrate: 1182 kbit/s
    - fps: 24
    - codec: H264 - MPEG4 AVC (part 10) (avc1)

    Audio:
    - bitrate: 159 kbit/s, stereo
    - sample rate: 48 kHz
    - codec: MPEG AAC Audio (mp4a)

    I thought I could reduce the current file size (about 200 MB) by reducing the width and height (420 x 236). I tried different programs: Handbrake, Format Factory, Next Video Converter and Super. The first three didn't work as expected: Handbrake has a bug when setting the width and height, and the next two don't allow fine control of the video size (only presets of width and height). Super seems to be the best, but I didn't find a setting which reduces the file size. I reduced the width and height but only got 20 MB less. Now I've tried the umpteenth setting and the file size is still too high. I want to reduce the file size to 100 MB or less. The output format should be FLV or MP4, because I need this for Flowplayer. Which settings of Super, or which program, should I use to reduce the file size? Of course the video should still be viewable.
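    File size is bitrate times duration, so the target dictates the bitrate: 100 MB over 22:44 is about 819,200 kbit over 1,364 seconds, i.e. roughly 600 kbit/s for audio and video together. A sketch with a reasonably recent ffmpeg build (an alternative to the GUI tools above; reserving about 96 kbit/s for audio leaves about 500 kbit/s for video):

    ```sh
    # scale to 420x236 and cap both bitrates so the result lands near 100 MB
    ffmpeg -i input.mp4 -vf scale=420:236 -c:v libx264 -b:v 500k \
           -c:a aac -b:a 96k output.mp4
    ```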

    Read the article

  • Windows XP VM on VMware ESXi 4.1 "pausing" / blocked occasionally

    - by FelixD
    We have an issue with Windows XP SP3 VMs on VMware ESXi 4.1.0 (the free version): they sometimes seem to "pause" for several minutes. This happens rarely (maybe once a month per VM, or at least is only noticed that often), but it is still an issue for us. It happens on three different but similar VMs on three quite different hosts (different hardware). I have the feeling that the "pausing" is not actually the CPU blocking, but probably the hard disks, though I'm not 100% sure. The servers have one IDE disk (C:) and one SCSI disk (D:) and it might be either of the two. I have seen scheduled tasks simply not starting for up to 9 minutes and then running normally again at normal speed; they were totally blocked. This is not a load issue: the VMware hosts are under average load, and the VMs in question already have reserved CPU resources plus high priorities for CPU and disk. The Windows boxes run mainly MySQL, Tomcat, FileZilla server, Cygwin stuff, Java + R applications, the VMware client, Elusiva Terminal Server Pro, and the Nagios client. Not sure if this might be related to any of that software (e.g. Elusiva). Trying to debug this, there was nothing visible in the Windows event log, other logs in C:\Windows, VMware events, etc. Unfortunately the vmware.log file ends with "Log throttled". We found that we ran into two VMware bugs there: the VMware client writes lots of bogus messages to vmware.log, which we have now disabled (log level error setting), plus the bug that VMware does not unthrottle the log (at least not so far, despite VM reboots). I know there is not much to go on, and that may also be the reason why I haven't found anything related on the web or on Server Fault so far, but maybe some of this rings a bell with someone? Or please direct me as to what more info to post. I hope the vmware.logs get unthrottled eventually (we can't easily restart the hosts at the moment). Thanks for any input!

    Read the article

  • ffmpeg: What's the difference between the -to and -t options?

    - by Vantuz
    The command "ffmpeg -ss 5:09 -i foo.mkv -to 5:10 -c copy bar.mkv" works exactly like "ffmpeg -ss 5:09 -i foo.mkv -t 5:10 -c copy bar.mkv". Is it a bug? Using Zeranoe git-bd75651 for Windows 64-bit:

    ```
    >ffmpeg -version
    ffmpeg version N-57906-gbd75651
    built on Nov 4 2013 18:09:19 with gcc 4.8.2 (GCC)
    configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
    libavutil      52. 51.100 / 52. 51.100
    libavcodec     55. 41.100 / 55. 41.100
    libavformat    55. 21.100 / 55. 21.100
    libavdevice    55.  5.100 / 55.  5.100
    libavfilter     3. 90.101 /  3. 90.101
    libswscale      2.  5.101 /  2.  5.101
    libswresample   0. 17.104 /  0. 17.104
    libpostproc    52.  3.100 / 52.  3.100
    ```
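    A note on what the manual describes, hedged since the behavior has shifted between builds: -t sets a duration, while -to sets a stop timestamp. But with -ss placed before -i the input is seeked first and its timestamps are reset to zero, so -to 5:10 counted from the new zero cuts the same length as -t 5:10, which would explain the observation above. Placing -ss after -i keeps the original timestamps, and the two flags then behave differently:

    ```sh
    ffmpeg -i foo.mkv -ss 5:09 -to 5:10 -c copy bar.mkv   # stop AT 5:10 -> about 1 second of output
    ffmpeg -i foo.mkv -ss 5:09 -t 5:10 -c copy bar.mkv    # 5 min 10 s of output FROM 5:09
    ```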

    Read the article

  • Should I completely turn off swap for linux webserver?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux web servers with enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug, DDoS, etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually end in the same crash, but before that it will offload crucial processes like sshd to swap and start to do a lot of swap operations, resulting in a major slowdown. This way, when under DDoS, the system may go into a completely unusable condition due to huge lags, and I will probably be unable to log in and kill the web server process or deny all incoming traffic (all but SSH). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
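    For what the middle ground looks like: instead of an all-or-nothing swapoff, the kernel can be told to keep swap purely as a last-resort safety net. These are standard knobs, though the exact value is a judgment call:

    ```sh
    sysctl vm.swappiness=1                          # avoid swap until memory pressure is real
    echo 'vm.swappiness = 1' >> /etc/sysctl.conf    # persist across reboots
    # or, the friend's suggestion, removing swap entirely:
    swapoff -a                                      # also remove the swap entry from /etc/fstab
    ```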

    Read the article

  • Are 40+ logons per user on Exchange 2003 normal?

    - by cbsch
    Hello! We've had a problem at work where users sometimes randomly can't connect to Exchange. I've found out that it's because they reached the limit of 32 concurrent logons. I increased the maximum allowed connections by adding the key "Maximum Allowed Sessions Per User" in HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem, but I'm not sure this is a really good fix. Looking at the logons, some users have as many as 15 logons with the exact same logon time. I know for sure that Outlook 2007 does this, as I was watching the logons while a user connected with Outlook after a restart of the Exchange service. Every user also has an iPhone connected to Exchange; I don't know if these cause the same thing. Is this normal? Could there be a bug in the software? (The Outlook 2007 installs have nothing configured except the added user; pure vanilla installs.) The users are mobile, and since Outlook generates up to 15 connections every time it connects, and I've read (no sources, sorry) that Outlook doesn't time out connections before 2 hours, I might have to set this number really high to prevent it from being a problem.
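    For reference, the same registry change scripted so it can be applied consistently across servers. The value name and path are from the question above, while the data value 64 is only an example, and the information store service presumably needs a restart to pick it up:

    ```bat
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem" /v "Maximum Allowed Sessions Per User" /t REG_DWORD /d 64
    ```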

    Read the article

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledgebase for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledgebase articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and I'd like to install as little as possible. I've been looking at creating a wiki with Wiki on a Stick, and it seems to offer a lot, but I can't make complex queries. I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little to no installation. Are there any tools that can help me? If something could easily export its data, or store its data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store the data as XML, then query and process that XML on demand? Thanks in advance.
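    On the store-as-XML idea: even without a dedicated app, a flat XML file can already answer multi-tag queries with xmllint (part of libxml2) and an XPath expression. The record layout here is invented purely for illustration:

    ```sh
    # bugs.xml (hypothetical layout):
    #   <bugs><bug id="1"><tag>ui</tag><tag>crash</tag><article>kb-042</article></bug>...</bugs>
    # select the bugs carrying BOTH tags:
    xmllint --xpath '//bug[tag="ui"][tag="crash"]' bugs.xml
    ```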

    Read the article

  • Cannot connect to my nginx server from a remote machine

    - by margincall
    I thought it was an iptables problem, but it seems not, and I really have no idea about this situation. I have a hosted server (CentOS). I installed Nginx + Django, and nginx uses port 8080. A domain is connected to the server. When I executed "wget [domain]:8080/[app name]/" on the server itself, it worked. Of course, "wget 127.0.0.1:8080/[app name]/" was no problem either (nor was wget [server ip]:8080/[app name]/). However, connecting from other computers failed (the message says: no route). I checked my firewall settings and executed these commands:

    ```sh
    iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
    iptables -I OUTPUT -p tcp --sport 8080 -j ACCEPT
    iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
    /etc/init.d/iptables restart
    ```

    I don't really understand all the options, and some of the commands were probably useless; I just tried every iptables setting I could google. But I still cannot connect to my server. What should I check first? I don't know if this is important, but I'll add it: on port 80, an Apache server is running, and it works fine; I can connect to Apache from other computers. There is a DB connection issue (PHP to MySQL), but I think that is just a PHP coding bug. Please excuse my English; I'm not a native speaker, but I've tried to explain things as well as I can. Thank you for reading this question.
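    Two hedged guesses at the culprit. First, /etc/init.d/iptables restart reloads the last saved ruleset, silently discarding rules added at the command line, so the ACCEPT rules above may simply be gone after the restart. Second, on CentOS the RH-Firewall-1-INPUT chain ends with a REJECT that answers icmp-host-prohibited, which clients report as exactly this "no route to host" message, and a rule appended with -A lands after that REJECT and never matches. A sketch of the insert-then-save order:

    ```sh
    # insert (-I) so the rule sits above the chain's final REJECT
    iptables -I RH-Firewall-1-INPUT -p tcp -m state --state NEW --dport 8080 -j ACCEPT
    service iptables save                                # persist before any restart
    iptables -L RH-Firewall-1-INPUT -n --line-numbers    # verify ACCEPT is above the REJECT
    ```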

    Read the article

  • Word 2010 header "different first page" option does not work if you select the same built-in header preset?

    - by fredsbend
    I have a three-page Word 2010 document. I have set a header on the first page and checked the "Different First Page" option to make the following pages' headers different. It works as expected as long as I don't select the same built-in header preset for the following pages. Here's what I am doing:

    1. Check "Different First Page".
    2. Make the header for the first page using the Alphabet preset.
    3. Attempt to make the header for the following pages by starting with the same Alphabet preset. (I only want to change the text of the following headers but still want the same graphical effects.)
    4. Click off the header into the body of the document.

    Upon doing this, the headers on the first page are updated to the ones I just made for the following pages. I don't think I am doing something wrong, because I can choose a different header preset and it will work as expected. If I select the same preset, however, it updates all headers, whether "Different First Page" is selected or not. If this is repeatable on others' computers, then I would say it's a Word bug. If not, then please help me figure out how to get this working right.

    Read the article

  • Picasa duplicates everything into a `$My Pictures` folder.

    - by Barend
    My parents have a Windows XP laptop running Picasa, the Dutch version of both. It's configured to put imported photos in C:\Documents and Settings\user\Mijn documenten\Mijn afbeeldingen, which is Windows XP's default My Pictures folder localized to Dutch. A while back I noticed disk space was going much faster than it should. It turned out there was a Documents and Settings\user\Mijn documenten\$My Pictures folder, pretty much the same size as the original one. This was well over a year ago. I figured it to be an internationalization bug: the $My Pictures name looks like a placeholder that didn't get resolved. I figured it'd get fixed soon. It didn't. I threw out Picasa, tediously cleaned up the mess it left behind and replaced it with Windows Live Photo Gallery. My parents found WLPG to be unworkable and asked to get Picasa back. I reinstalled it; by now it's at 3.8.0 (build 117.29, 0). Wouldn't you know it, the mystery $My Pictures folder is back, 25 GB of disk space has evaporated and it's the same mess it used to be. What's going on here? How do I stop it doing this?

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as on Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node application. We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as the module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool, mentioned in my last post, to update the current worker role project file; you can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
    ```javascript
    var express = require("express");
    var sql = require("node-sqlserver");

    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;

    var app = express();

    app.configure(function () {
        app.use(express.bodyParser());
    });

    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;

        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.send(200, "Inserted Successful");
                    }
                });
            }
        });
    });

    app.listen(port);
    ```

    Now let's create a new function that copies the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure that we have an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account name and key, which can be found on the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS, we need to import the azure module; I also created another variable storing the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. As in the code below, I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

    ```javascript
    var azure = require("azure");
    var storageAccountName = "synctile";
    var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
    var tableName = "resource";
    var client = azure.createTableService(storageAccountName, storageAccountKey);
    ```

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

    ```javascript
    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                        }
                    }
                });
            }
        });
    });
    ```

    When we have successfully loaded all records, we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I created previously.

    ```javascript
    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        // transform the records
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });
    ```

    As you can see, the azure SDK provides its methods in a callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which will be performed when the table has been deleted or the operation failed. Underneath, the azure module performs the table deletion asynchronously in the POSIX async thread pool, and once it's done the callback function is performed. This is the reason we need to nest the table creation code inside the deletion callback: if we performed the table creation code after the deletion code, they would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I sent the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: when all entities have been copied to the table service with no error, then I send the response to the browser; otherwise I send an error message. To do so I need to import another module named "async", which helps us coordinate asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code of inserting table entities. The first argument of "forEach" is the array to be processed, the second argument is the operation for each item in the array, and the third argument will be invoked when all items have been processed or any error occurred. There we can send our response to the browser.

    ```javascript
    app.get("/was/init", function (req, res) {
        // load all records from windows azure sql database
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        if (results.rows.length > 0) {
                            // begin to transform the records into table service
                            // recreate the table named 'resource'
                            client.deleteTable(tableName, function (error) {
                                client.createTableIfNotExists(tableName, function (error) {
                                    if (error) {
                                        error["target"] = "createTableIfNotExists";
                                        res.send(500, error);
                                    }
                                    else {
                                        async.forEach(results.rows,
                                            // transform the records
                                            function (row, callback) {
                                                var entity = {
                                                    "PartitionKey": row[1],
                                                    "RowKey": row[0],
                                                    "Value": row[2]
                                                };
                                                client.insertEntity(tableName, entity, function (error) {
                                                    if (error) {
                                                        callback(error);
                                                    }
                                                    else {
                                                        console.log("entity inserted.");
                                                        callback(null);
                                                    }
                                                });
                                            },
                                            // send response
                                            function (error) {
                                                if (error) {
                                                    error["target"] = "insertEntity";
                                                    res.send(500, error);
                                                }
                                                else {
                                                    console.log("all done");
                                                    res.send(200, "All done!");
                                                }
                                            }
                                        );
                                    }
                                });
                            });
                        }
                    }
                });
            }
        });
    });
    ```

    Run it locally, and now we can see the response is sent after all entities have been inserted. Querying entities against the table service is simple as well: just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria, as in the code here. In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response.

    ```javascript
    app.get("/was/:key/:culture", function (req, res) {
        var key = req.params.key;
        var culture = req.params.culture;
        client.queryEntity(tableName, culture, key, function (error, entity) {
            if (error) {
                res.send(500, error);
            }
            else {
                res.json(entity);
            }
        });
    });
    ```

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud, we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS, you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files, and the runtime will be able to retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment", which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

    ```javascript
    var connectionString = "";
    var storageAccountName = "";
    var storageAccountKey = "";
    var tableName = "";
    var client;

    azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
        if (error) {
            console.log("ERROR: getConfigurationSettings");
            console.log(JSON.stringify(error));
        }
        else {
            console.log(JSON.stringify(settings));
            connectionString = settings["SqlConnectionString"];
            storageAccountName = settings["StorageAccountName"];
            storageAccountKey = settings["StorageAccountKey"];
            tableName = settings["TableName"];

            console.log("connectionString = %s", connectionString);
            console.log("storageAccountName = %s", storageAccountName);
            console.log("storageAccountKey = %s", storageAccountKey);
            console.log("tableName = %s", tableName);

            client = azure.createTableService(storageAccountName, storageAccountKey);
        }
    });
    ```

    In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we listen on the port retrieved from the SDK as well.

    ```javascript
    azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
        if (error) {
            console.log("ERROR: getCurrentRoleInstance");
            console.log(JSON.stringify(error));
        }
        else {
            console.log(JSON.stringify(instance));
            if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                var endpoint = instance["endpoints"]["nodejs"];
                app.listen(endpoint["port"]);
            }
            else {
                app.listen(8080);
            }
        }
    });
    ```

    But if we tested the application right now, we would find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js is launched inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the Azure project, add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. With that, the Node.js application has the connection to the service runtime and is able to read the configurations. Starting Node.js in the local emulator, we can see it retrieves the connection and storage account settings for the local environment.
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is simple, easy to learn and deploy, and utilizes a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too: the modules for SQL database and the Windows Azure SDK are still under development and enhancement. "node-sqlserver" does not support SQL parameters yet; "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use, and on adding more features and functionality.

    PS: you can download the source code here, and the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to stop an IOException error whilst using a combination of Jython, Pyro and Ant?

    - by Kelso
    So, the wonderful low-down on this doozy of a problem.

    Short version: we are building a distribution system for a piece of software we're using. Basically, we take our build artifact, store it on an FTP server, which passes it to multiple clients that execute scripts to patch their servers.

    Long version:
    - 1 distribution server, multiple client servers
    - software: Jython 2.5.1, Ant 1.8.0, Pyro 3.10

    The distribution server has an FTP server and a Pyro client running on it. Each client server has a Pyro server running on it. When the Pyro client is told to start the patch procedure, it reads a machine list which contains all of the client servers, then connects to each of the Pyro servers one by one and executes the patch procedure. The procedure is: getPatch (gets the latest patch for that server), StopServer (stops the software that may or may not be accessing what needs to be patched), ApplyPatch, StartServer. Each of the processes calls an Ant script that is passed some folder names and other config. The fun part happens when you go to apply the patch; see below for the error log. (I had to remove the folder names for NDA reasons.)

    This is where it gets interesting. Running each section of the procedure individually, i.e. running getPatch, StopServer, etc. one at a time manually, the bug doesn't happen. Physically going to the machine and running the processes, it doesn't happen either. It only occurs when we call all four of the processes one after the other, during the ApplyPatch phase, when an Ant replace task is run on multiple files. We think it might have something to do with the JVM keeping hold of the file for a split second or two; however, that is meant to have been patched according to the bug notes on Ant.

    So, in short: distribution server == jython == pyro connection == client server == jython == ant script

    Error log:

    ```
    <*snip>\ant\deploy.xml:12: IOException in <*snip>\bin\startGs.sh - java.io.IOException: Failed to delete <*snip>\bin\rep4698373081723114968.tmp while trying to rename it.
        at org.apache.tools.ant.taskdefs.Replace.processFile(Replace.java:709)
        at org.apache.tools.ant.taskdefs.Replace.execute(Replace.java:548)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.Target.execute(Target.java:390)
        at org.apache.tools.ant.Target.performTasks(Target.java:411)
        at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360)
        at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
        at org.apache.tools.ant.Project.executeTargets(Project.java:1212)
        at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441)
        at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
        at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.Target.execute(Target.java:390)
        at org.apache.tools.ant.Target.performTasks(Target.java:411)
        at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360)
        at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
        at org.apache.tools.ant.Project.executeTargets(Project.java:1212)
        at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441)
        at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
        at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
        at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:197)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.Target.execute(Target.java:390)
        at org.apache.tools.ant.Target.performTasks(Target.java:411)
        at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360)
        at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
        at org.apache.tools.ant.Project.executeTargets(Project.java:1212)
        at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441)
        at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
        at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
        at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
        at org.apache.tools.ant.Task.perform(Task.java:348)
        at org.apache.tools.ant.taskdefs.Parallel$TaskRunnable.run(Parallel.java:433)
        at java.lang.Thread.run(Thread.java:619)
    Caused by: java.io.IOException: Failed to delete <*snip>\bin\rep4698373081723114968.tmp while trying to rename it.
        at org.apache.tools.ant.util.FileUtils.rename(FileUtils.java:1248)
        at org.apache.tools.ant.taskdefs.Replace.processFile(Replace.java:702)
        ... 125 more
    ```

    Any help would be appreciated.
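    One avenue, hedged as a guess since the root cause is unconfirmed: Ant ships a retry container task (since 1.7.1, so available on the Ant 1.8.0 in use) that can paper over transient Windows file locks such as the delete/rename race suspected above. The target file and tokens below are illustrative, not from the real build file:

    ```xml
    <!-- wrap the failing replace in a retry so a momentarily locked file
         gets another attempt instead of failing the whole patch run -->
    <retry retrycount="3">
      <replace file="${file.to.patch}" token="@OLD@" value="@NEW@"/>
    </retry>
    ```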

    Read the article

  • Exceptions with DateTime parsing in RSS feed in C#

    - by hIpPy
    I'm trying to parse RSS 2.0 and Atom feeds using the SyndicationFeedFormatter and SyndicationFeed objects, but I'm getting XmlExceptions while parsing DateTime fields like pubDate and/or lastBuildDate:

    - Wed, 24 Feb 2010 18:56:04 GMT+00:00 does not work
    - Wed, 24 Feb 2010 18:56:04 GMT works

    So it's the timezone suffix that causes the throw. As a workaround, for familiar feeds I manually fix those DateTime nodes: I catch the XmlException, load the RSS into an XmlDocument, fix the values of those nodes, create a new XmlReader and then return the formatter from this new XmlReader object (code not shown). But for this approach to work, I need to know beforehand which nodes cause exceptions.

    ```csharp
    SyndicationFeedFormatter syndicationFeedFormatter = null;
    XmlReaderSettings settings = new XmlReaderSettings();
    using (XmlReader reader = XmlReader.Create(url, settings))
    {
        try
        {
            syndicationFeedFormatter = SyndicationFormatterFactory.CreateFeedFormatter(reader);
            syndicationFeedFormatter.ReadFrom(reader);
        }
        catch (XmlException xexp)
        {
            // fix those datetime nodes with exceptions and read again.
        }
        return syndicationFeedFormatter;
    }
    ```

    RSS feed: http://news.google.com/news?pz=1&cf=all&ned=us&hl=en&q=test&cf=all&output=rss

    Exception details:

    ```
    XmlException: Error in line 1 position 376. An error was encountered when parsing a DateTime value in the XML.
       at System.ServiceModel.Syndication.Rss20FeedFormatter.DateFromString(String dateTimeString, XmlReader reader)
       at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadXml(XmlReader reader, SyndicationFeed result)
       at System.ServiceModel.Syndication.Rss20FeedFormatter.ReadFrom(XmlReader reader)
       at ... cs:line 171
    ```

    ```xml
    <rss version="2.0">
      <channel>
        ...
        <pubDate>Wed, 24 Feb 2010 18:56:04 GMT+00:00</pubDate>
        <lastBuildDate>Wed, 24 Feb 2010 18:56:04 GMT+00:00</lastBuildDate> <!-- exception -->
        ...
        <item>
          ...
          <pubDate>Wed, 24 Feb 2010 16:17:50 GMT+00:00</pubDate>
          <lastBuildDate>Wed, 24 Feb 2010 18:56:04 GMT+00:00</lastBuildDate>
        </item>
        ...
      </channel>
    </rss>
    ```

    Is there a better way to achieve this? Please help. Thanks.

    Read the article

  • TFS and shared projects in multiple solutions

    - by David Stratton
    Our .NET team works on projects for our company that fall into distinct categories. Some are internal web apps, some are external (publicly facing) web apps; we also have internal Windows applications for our corporate office users, and Windows Forms apps for our retail locations (stores). Of course, because we hate code reuse, we have a ton of code that is shared among the different applications. Currently we're using SVN as our source control, and we've got our repository laid out like this (- = folder, | = Visual Studio solution):

        - SVN
            - Internet
                | Ourcompany.com
                | Oursecondcompany.com
            - Intranet
                | UniformOrdering website
                | MessageCenter website
            - Shared
                | ErrorLoggingModule
                | RegularExpressionGenerator
                | Anti-Xss
                | OrgChartModule
                etc...

    So the OurCompany.com solution in the Internet folder would have a website project, and it would also include the ErrorLoggingModule, RegularExpressionGenerator, and Anti-Xss projects from the Shared directory. Similarly, our UniformOrdering website solution would include each of these projects as well.

    We prefer a project reference to a .dll reference because, first of all, if we need to add or fix a function in the ErrorLoggingModule while working on the OurCompany.com website, it's right there. It also allows us to build each solution and see whether changes to shared code break any other applications, and this should work well on a build server too, if I'm correct.

    In SVN there is no problem with this. SVN and Visual Studio aren't tied together the way TFS's source control is. We never figured out how to make this type of structure work in TFS when we were using it, because in TFS the TFS project was always tied to a Visual Studio solution. The source code repository was a child of the TFS project, so if we wanted to do this, we had to duplicate the shared code in each TFS project's source code repository. As my co-worker put it, this "breaks every known best practice about code reuse and simplicity". It was enough of a deal breaker for us that we switched to SVN.

    Now, however, we're faced with truly fixing our development processes, and the application lifecycle management of TFS is pretty close to exactly what we want and how we want to work. Our one sticking point is the shared-code issue. We're evaluating other commercial and open source solutions, but since we're already paying for TFS with our MSDN subscriptions, and TFS is pretty much exactly what we want, we'd REALLY like to find a way around this issue.

    Has anybody else faced this and come up with a solution? If you've seen an article or posting on this that you can share with me, that would help as well. As always, I'm open to answers like "You're looking at it all wrong, bonehead, HERE'S the way it SHOULD be done."
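
    One workaround sometimes used for this in TFS is to keep a single canonical Shared folder and branch it into each consuming team project, merging fixes back and forth as needed. A rough sketch using tf.exe (the server paths below are illustrative placeholders, not the asker's layout):

        rem Sketch only: branch the canonical shared module into a consuming
        rem team project, then merge changes in either direction later.
        tf branch $/Shared/ErrorLoggingModule $/OurCompanyCom/Shared/ErrorLoggingModule
        tf checkin /comment:"Branch shared ErrorLoggingModule"

        rem Later, pull fixes made in the canonical copy into this project:
        tf merge $/Shared/ErrorLoggingModule $/OurCompanyCom/Shared/ErrorLoggingModule /recursive
        tf checkin /comment:"Merge latest shared code"

    This keeps project references working inside each solution at the cost of an explicit merge step whenever shared code changes.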

    Read the article

  • Google maps API - info window height and panning

    - by Tim Fountain
    I'm using the Google Maps API (v2) to display a country overlay over a world map. The data comes from a KML file, which contains coords for the polygons along with an HTML description for each country. This description is displayed in the 'info window' speech bubble when that country is clicked on.

    I had some trouble initially as the info windows were not expanding to the size of the HTML content they contained, so the longer ones would spill over the edges (this seems to be a common problem). I was able to work around this by resetting the info window to a specific height as follows:

        GEvent.addListener(map, "infowindowopen", function(iw) {
            iw = map.getInfoWindow();
            iw.reset(iw.getPoint(), iw.getTabs(), new GSize(300, 295), null, null);
        });

    Not ideal, but it works. However, now when the info windows are opened, the top part of them is sometimes obscured by the edges of the map, as the map does not pan to a position where all of the content can be viewed. So my questions:

        1. Is there any way to get the info windows to automatically use a height appropriate to their content, to avoid having to fix a set pixel height?
        2. If fixing the height is the only option, is there any way to get the map to pan to a more appropriate position when the info windows open? I know that the map class has a panTo() method, but I can't see a way to calculate what the correct coords would be.

    Here's my full init code:

        google.load("maps", "2.x");

        // Call this function when the page has been loaded
        function initialize() {
            var map = new google.maps.Map2(document.getElementById("map"), {backgroundColor: '#99b3cc'});
            map.addControl(new GSmallZoomControl());
            map.setCenter(new google.maps.LatLng(29.01377076013671, -2.7866649627685547), 2);
            gae_countries = new GGeoXml("http://example.com/countries.kmz");
            map.addOverlay(gae_countries);
            GEvent.addListener(map, "infowindowopen", function(iw) {
                iw = map.getInfoWindow();
                iw.reset(iw.getPoint(), iw.getTabs(), new GSize(300, 295), null, null);
            });
        }

        google.setOnLoadCallback(initialize);
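
    On the panning question, one possible approach in the v2 API is to convert the info window's anchor point to pixel coordinates, shift it by enough to make the fixed 295px-high window fit, and pan there. A sketch, assuming it runs where map is in scope and that GMap2's fromLatLngToDivPixel/fromDivPixelToLatLng behave as documented (the 20px margin is a guess to tune):

        // Sketch: after the info window opens, pan so its fixed-size
        // box fits on screen. Values are illustrative, not verified.
        GEvent.addListener(map, "infowindowopen", function() {
            var iw = map.getInfoWindow();
            var anchor = iw.getPoint();
            var px = map.fromLatLngToDivPixel(anchor);
            // The window extends ~295px above its anchor; move the map
            // center up the page (smaller y) so the top edge is visible.
            var target = map.fromDivPixelToLatLng(
                new GPoint(px.x, px.y - (295 / 2) - 20));
            map.panTo(target);
        });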

    Read the article

  • Use a subdirectory as root with htaccess in Apache 1.3

    - by Andrew
    I'm trying to deploy a site generated with Jekyll and would like to keep the site in its own subfolder on my server to keep everything more organized. Essentially, I'd like to use the contents of /jekyll as the root unless a similarly named file exists in the actual web root. So something like /jekyll/sample-page/ would show as http://www.example.com/sample-page/, while something like /other-folder/ would display as http://www.example.com/other-folder.

    My test server runs Apache 2.2, and the following .htaccess (adapted from http://gist.github.com/97822) works flawlessly:

        RewriteEngine On

        # Map http://www.example.com to /jekyll.
        RewriteRule ^$ /jekyll/ [L]

        # Map http://www.example.com/x to /jekyll/x unless there is an x in the web root.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/jekyll/
        RewriteRule ^(.*)$ /jekyll/$1

        # Add trailing slash to directories without them so DirectoryIndex works.
        # This does not expose the internal URL.
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteCond %{REQUEST_FILENAME} !/$
        RewriteRule ^(.*)$ $1/

        # Disable auto-adding slashes to directories without them, since this happens
        # after mod_rewrite and exposes the rewritten internal URL, e.g. turning
        # http://www.example.com/about into http://www.example.com/jekyll/about.
        DirectorySlash off

    However, my production server runs Apache 1.3, which doesn't support DirectorySlash. If I disable that directive, the server gives a 500 error because of internal redirect overload. If I comment out the last set of RewriteCond lines and their rule:

        RewriteCond %{REQUEST_FILENAME} -d
        RewriteCond %{REQUEST_FILENAME} !/$
        RewriteRule ^(.*)$ $1/

    …everything mostly works: http://www.example.com/sample-page/ displays the correct content. However, if I omit the trailing slash, the URL in the address bar exposes the real internal URL structure: http://www.example.com/jekyll/sample-page/

    What is the best way to account for directory slashes in Apache 1.3, where useful tools like DirectorySlash don't exist? How can I use the /jekyll/ directory as the site root without revealing the actual URL structure?

    Edit: After a ton of research into Apache 1.3, I've found that this problem is essentially a combination of two different issues listed in the Apache 1.3 URL Rewriting Guide. I have a (partially) moved DocumentRoot, which in theory would be taken care of with something like this:

        RewriteRule ^/$ /e/www/ [R]

    I also have the infamous "trailing slash problem," which is solved by setting the RewriteBase (as was suggested in one of the responses below):

        RewriteBase /~quux/
        RewriteRule ^foo$ foo/ [R]

    The problem is combining the two. Moving the document root doesn't (can't?) use RewriteBase, while fixing trailing slashes requires(?) it… Hmm…
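
    One possible line of attack, sketched here and untested on 1.3: replace DirectorySlash with an explicit external redirect issued before the /jekyll mapping rules, so the browser is sent to the public URL with the slash appended and the internal path never surfaces. Checking both the real web root and the /jekyll copy for a matching directory:

        # Sketch only: an external redirect standing in for DirectorySlash.
        # Place before the /jekyll mapping rules; assumes %{DOCUMENT_ROOT}
        # is correct for this vhost.
        RewriteCond %{REQUEST_URI} !/$
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -d [OR]
        RewriteCond %{DOCUMENT_ROOT}/jekyll%{REQUEST_URI} -d
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1/ [R=301,L]

    Because the redirect targets the public hostname and the originally requested path, the /jekyll prefix stays server-side.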

    Read the article

  • how to debug "deep" crashes in Android?

    - by eerok512
    Hi all, I've been trying to debug an Android crash that occurs without a Java stack trace. Java stack trace bugs are very easy for me to fix, but this bug seems to be crashing inside the native layer, deep in the Android internals (I've made no modifications to the NDK, by the way; I just don't know what else to call that layer). Anyway, I'm mainly looking for advice on deep-debugging methods rather than help with this specific problem, because I doubt I can post all the source code involved. Really I just need to know how to set breakpoints at the native layers, or whatever other methods there are for tracing deep crashes to their source. So I will briefly describe the bug and then post a LogCat.

    I have an app with 7 activities:

        Activity_INTRO
        Activity_EULA
        Activity_MAIN
        Activity_Contact
        Activity_News
        Activity_Library
        Activity_More

    INTRO is the initiating one. It fades in some company logos, and after displaying them for a set time it jumps to the EULA activity. After the user accepts the EULA, it jumps to MAIN. MAIN then creates a TabHost and populates it with the 4 remaining activities.

    Now here's the thing: when I click on, say, the More tab of the TabHost, the app pauses for a few seconds and then hard-crashes. There is no Java stack trace, but an actual ASM-level trace with the registers, IP, and stack. The same thing occurs no matter which tab I select (Contact, News, Library, More); all of them crash with the same hard crash.

    If, however, I set the manifest to start the app at Activity_MAIN, bypassing the INTRO and EULA, then these crashes do not occur. So something is lingering from those opening activities that is somehow hosing the TabHost'ed activities, and I'm wondering what that could be, because I'm calling finish() on those activities when they need to jump. In fact, here is how I'm doing it; let me know if you see any bugs. When jumping from INTRO to EULA I do:

        // Display the EULA
        Intent newIntent = new Intent(avi, Activity_EULA.class);
        startActivity(newIntent);
        finish();

    and EULA to MAIN:

        Intent newIntent = new Intent(this, Activity_Main.class);
        startActivity(newIntent);
        finish();

    Anyway, here is the hard-crash log. Please let me know if there is some way I can reverse engineer either /system/lib/libcutils.so or /system/lib/libandroid_runtime.so, because I think the crash is happening in one of them; in fact, I think it's happening in libandroid_runtime. On to the log:

        12-25 00:56:07.322: INFO/DEBUG(551): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        12-25 00:56:07.332: INFO/DEBUG(551): Build fingerprint: 'generic/sdk/generic/:1.5/CUPCAKE/150240:eng/test-keys'
        12-25 00:56:07.362: INFO/DEBUG(551): pid: 722, tid: 723 >>> com.killerapps.chokes <<<
        12-25 00:56:07.362: INFO/DEBUG(551): signal 11 (SIGSEGV), fault addr 00000004
        12-25 00:56:07.362: INFO/DEBUG(551): r0 00000004 r1 40021800 r2 00000004 r3 ad3296c5
        12-25 00:56:07.372: INFO/DEBUG(551): r4 00000000 r5 00000000 r6 ad342da5 r7 41039fb8
        12-25 00:56:07.372: INFO/DEBUG(551): r8 100ffcb0 r9 41039fb0 10 41e014a0 fp 00001071
        12-25 00:56:07.382: INFO/DEBUG(551): ip ad35b874 sp 100ffc98 lr ad3296cf pc afb045a8 cpsr 00000010
        12-25 00:56:07.552: INFO/DEBUG(551): #00 pc 000045a8 /system/lib/libcutils.so
        12-25 00:56:07.572: INFO/DEBUG(551): #01 lr ad3296cf /system/lib/libandroid_runtime.so
        12-25 00:56:07.582: INFO/DEBUG(551): stack:
        12-25 00:56:07.582: INFO/DEBUG(551): 100ffc58 00000000
        12-25 00:56:07.592: INFO/DEBUG(551): 100ffc5c 001c5278 [heap]
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc60 000000da
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc64 0016c778 [heap]
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc68 100ffcc8
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc6c 001c5278 [heap]
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc70 427d1ac0
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc74 000000c1
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc78 40021800
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc7c 000000c2
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc80 00000000
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc84 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc88 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc8c 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc90 df002777
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffc94 e3a070ad
        12-25 00:56:07.632: INFO/DEBUG(551): #00 100ffc98 00000000
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffc9c ad3296cf /system/lib/libandroid_runtime.so
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffca0 100ffcd0
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffca4 ad342db5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffca8 410a79d0
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffcac ad00e3b8 /system/lib/libdvm.so
        12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb0 410a79d0
        12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb4 0016bac0 [heap]
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcb8 ad342da5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcbc 40021800
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc0 410a79d0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc4 afe39dd0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc8 100ffcd0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffccc ad040a8d /system/lib/libdvm.so
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd0 41039fb0
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd4 420000f8
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd8 ad342da5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcdc 100ffd48
        12-25 00:56:07.852: DEBUG/dalvikvm(722): GC freed 367 objects / 15144 bytes in 210ms
        12-25 00:56:08.081: DEBUG/InetAddress(722): www.akillerapp.com: 74.86.47.202 (family 2, proto 6)
        12-25 00:56:08.242: DEBUG/dalvikvm(722): GC freed 62 objects / 2328 bytes in 122ms
        12-25 00:56:08.771: DEBUG/dalvikvm(722): GC freed 245 objects / 11744 bytes in 179ms
        12-25 00:56:09.131: INFO/ActivityManager(577): Process com.killerapps.chokes (pid 722) has died.
        12-25 00:56:09.171: INFO/WindowManager(577): WIN DEATH: Window{43719320 com.killerapps.chokes/com.killerapps.chokes.Activity_Main paused=false}
        12-25 00:56:09.251: INFO/DEBUG(551): debuggerd committing suicide to free the zombie!
        12-25 00:56:09.291: DEBUG/Zygote(553): Process 722 terminated by signal (11)
        12-25 00:56:09.311: INFO/DEBUG(781): debuggerd: Jun 30 2009 17:00:51
        12-25 00:56:09.331: WARN/InputManagerService(577): Got RemoteException sending setActive(false) notification to pid 722 uid 10020
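
    On the reverse-engineering question: the standard way to make sense of a frame like "#00 pc 000045a8 /system/lib/libcutils.so" is addr2line against an unstripped copy of the same library from a matching platform build. A sketch, assuming an AOSP/CUPCAKE build tree with symbol files (the toolchain prefix and paths below are illustrative, not from the question):

        # Sketch: resolve the crash PC against an unstripped libcutils.so
        # from the same build as the device/emulator image.
        arm-eabi-addr2line -C -f \
            -e out/target/product/generic/symbols/system/lib/libcutils.so \
            000045a8

    The pc value in the tombstone backtrace is already an offset into the library, so it can be fed to addr2line directly; absolute register values such as lr (ad3296cf) would first need the library's runtime load address subtracted.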

    Read the article

  • How to make freelance clients understand the costs of developing and maintaining mature products?

    - by John
    I have a freelance web application project where the client requests new features every two weeks or so. I am unable to anticipate the requirements of upcoming features, so when the client requests a new feature, one of several things may happen:

        1. I implement the feature with ease because it is compatible with the existing platform.
        2. I implement the feature with difficulty because I have to rewrite a significant portion of the platform's foundation.
        3. The client withdraws the request because it costs too much to implement against the existing platform.

    At the beginning of the project, for about six months, all feature requests fell under category 1 because the system was small and agile. But for the past six months, most feature implementation has fallen under category 2. The system is mature, forcing me to refactor and test every time I want to add new modules. Additionally, I find myself breaking things that used to work and fixing them (I don't get paid for this).

    The client is starting to express frustration at the time and cost for me to implement new features. To them, many of the feature requests are of the same scale as the features they requested six months ago. For example, a client would ask, "If it took you 1 week to build a ticketing system last year, why does it take you 1 month to build an event registration system today? An event registration system is much simpler than a ticketing system. It should only take you 1 week!"

    Because of this scenario, I fear feature requests will soon land in category 3. In fact, I'm already eating a lot of the cost myself because I volunteer many hours to support the project. The client is often shocked when I tell him honestly the time it takes to do something, and always compares my estimates against the early months of the project. I don't think they're prepared for what it really costs to develop, maintain, and support a mature web application.

    When working on a salary for a full-time company, managers were more receptive to my estimates and even encouraged me to pad my numbers to prepare for the unexpected. Is there a way to condition my clients to think the same way? Can anyone offer advice on how I can continue to work on this web project without eating too much of the cost myself?

    Additional info: I've only been freelancing full time for 1 year. I don't yet have the high-end clients, but I'm slowly getting there, and I'm getting better-quality clients as time goes by.

    Read the article

  • Hibernate Communications Link Failure in Hibernate Based Java Servlet application powered by MySQL

    - by Vatsala
    Let me describe my question. I have a Java application with Hibernate as the DB-interfacing layer over MySQL, and I get a communications link failure error in the application. The error occurs in one very specific case: when I leave the MySQL server unattended for more than approximately 6 hours (i.e. when no queries are issued to MySQL for more than approximately 6 hours). I am pasting a top-level exception description below, and adding a pastebin link for the detailed stack trace:

        javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Cannot open connection
        Caused by: org.hibernate.exception.JDBCConnectionException: Cannot open connection
        Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure -
            The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago.
            The last packet sent successfully to the server was 0 milliseconds ago.
        Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure -
            The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago.
            The last packet sent successfully to the server was 0 milliseconds ago.
        Caused by: java.net.ConnectException: Connection refused: connect

    Here is the link to the pastebin for further investigation: http://pastebin.com/4KujAmgD

    What I understand from these exception statements is that MySQL refuses to take in any connections after a period of idle/nil activity. I have been reading up a bit about this via Google search, and learned that one possible way to overcome this is to set values for the c3p0 properties, as c3p0 comes bundled with Hibernate. Specifically, I read at http://www.mchange.com/projects/c3p0/index.html that setting the two properties idleConnectionTestPeriod and preferredTestQuery will solve this for me. But these values don't seem to have had an effect.

    Is this the correct approach to fixing this? If not, what is the right way to get over it?

    The following are related communications link failure questions at stackoverflow.com, but I've not found a satisfactory answer among their answers:

        http://stackoverflow.com/questions/2121829/java-db-communications-link-failure
        http://stackoverflow.com/questions/298988/how-to-handle-communication-link-failure

    Note 1: I don't get this error when I am using my application continuously.
    Note 2: I use JPA with Hibernate, and hence my hibernate.dialect and other Hibernate properties reside within the persistence.xml in the META-INF folder (does that prevent the c3p0 properties from working?)

    edit - Here are the c3p0 parameters I tried out - select 1; 2
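
    For reference, a sketch of one way to wire this up under JPA; the property names follow the Hibernate 3.x / c3p0 documentation, and the numeric values are illustrative guesses, not tested against this setup. The hibernate.c3p0.* prefix only bridges a handful of settings, so preferredTestQuery is usually set in a separate c3p0.properties file on the classpath:

        <!-- persistence.xml fragment (a sketch; values are guesses).
             Forces Hibernate to use c3p0 and tests idle connections. -->
        <property name="hibernate.connection.provider_class"
                  value="org.hibernate.connection.C3P0ConnectionProvider"/>
        <property name="hibernate.c3p0.min_size" value="1"/>
        <property name="hibernate.c3p0.max_size" value="20"/>
        <property name="hibernate.c3p0.timeout" value="300"/>
        <property name="hibernate.c3p0.idle_test_period" value="300"/>

        # c3p0.properties (classpath root): settings not bridged by the
        # hibernate.c3p0.* prefix, per the c3p0 docs. Also a sketch.
        c3p0.preferredTestQuery=SELECT 1
        c3p0.testConnectionOnCheckout=true

    If the server really does drop idle connections after about 6 hours, it is also worth comparing MySQL's wait_timeout on that box (the default is 28800 seconds, i.e. 8 hours) with these settings; the point of idle_test_period is to keep it comfortably below wait_timeout.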

    Read the article
