Search Results

Search found 11190 results on 448 pages for 'bug report'.


  • ffmpeg: What's the difference between the -to and -t options?

    - by Vantuz
    Command "ffmpeg -ss 5:09 -i foo.mkv -to 5:10 -c copy bar.mkv" works exactly like "ffmpeg -ss 5:09 -i foo.mkv -t 5:10 -c copy bar.mkv". Is it a bug? Using Zeranoe git-bd75651 for Windows 64-bit.

        >ffmpeg -version
        ffmpeg version N-57906-gbd75651 built on Nov 4 2013 18:09:19 with gcc 4.8.2 (GCC)
        configuration: --disable-static --enable-shared --enable-gpl --enable-version3
        --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig
        --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray
        --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc
        --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb
        --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp
        --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora
        --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc
        --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264
        --enable-libxavs --enable-libxvid --enable-zlib
        libavutil      52. 51.100 / 52. 51.100
        libavcodec     55. 41.100 / 55. 41.100
        libavformat    55. 21.100 / 55. 21.100
        libavdevice    55.  5.100 / 55.  5.100
        libavfilter     3. 90.101 /  3. 90.101
        libswscale      2.  5.101 /  2.  5.101
        libswresample   0. 17.104 /  0. 17.104
        libpostproc    52.  3.100 / 52.  3.100
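    For reference, ffmpeg's documentation distinguishes the two options: -t takes a duration, while -to takes an absolute stop position. A minimal sketch (not from the original post):

        # -t is a duration: copy 10 seconds starting from the 5:09 seek point.
        ffmpeg -ss 5:09 -i foo.mkv -t 10 -c copy bar.mkv
        # -to is a stop position: in principle this should yield a 1-second clip.
        ffmpeg -ss 5:09 -i foo.mkv -to 5:10 -c copy bar.mkv
        # When -ss is placed before -i, input timestamps may be reset at the seek
        # point, which would make -to behave like -t, as observed above.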

    Read the article

  • Is 40+ logons per user on Exchange 2003 normal?

    - by cbsch
    Hello! We've had a problem at work where users sometimes randomly can't connect to Exchange. I've found out that it's because they reached the limit of 32 concurrent logons. I increased the maximum allowed connections by adding the key "Maximum Allowed Sessions Per User" in HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem, but I'm not sure this is a really good fix. Looking at the logons, some users have as many as 15 logons with the exact same logon time. I know for sure that Outlook 2007 does this, as I was watching the sessions while a user connected with Outlook after a restart of the Exchange service. Every user also has an iPhone connected to Exchange; I don't know if these cause the same thing. Is this normal? Could there be a bug in the software? (The Outlook 2007 installs have nothing configured except the added user; pure vanilla installs.) The users are mobile, and Outlook generates up to 15 connections every time it connects. I've read (no sources, sorry) that Outlook doesn't time out connections for 2 hours, so I might have to set this number really high to prevent it from being a problem.
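    For reference, the registry change described above can also be scripted; a minimal sketch, with 64 as an illustrative value only:

        rem Raise the per-user session limit on the Exchange 2003 store.
        rem The value 64 is illustrative; choose a limit that fits your load.
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem" ^
            /v "Maximum Allowed Sessions Per User" /t REG_DWORD /d 64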

    Read the article

  • Should I completely turn off swap for a Linux web server?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux web servers with enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging the memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually end in the same crash, but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, when under DDoS, the system may go into a completely unusable condition due to huge lags, and I probably will be unable to log in and kill the web server process or deny all incoming traffic (all but SSH). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
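    For context, a common middle ground (a hedged sketch, not from the original question) is to keep swap but make the kernel reluctant to use it:

        # Prefer reclaiming page cache over swapping out processes.
        sysctl vm.swappiness=10
        # Persist the setting across reboots.
        echo "vm.swappiness = 10" >> /etc/sysctl.conf
        # Verify current RAM and swap usage.
        free -m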

    Read the article

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledgebase for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledgebase articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and I'd like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot, but I can't make complex queries. I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little to no installation. Are there any tools that can help me? If something could easily export its data, or stored its data as XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store the data as XML, then query and process that XML on demand? Thanks in advance.
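    For illustration, if the bug records were stored as XML, multi-criteria queries are possible with standard XPath tooling; a minimal sketch assuming a hypothetical bugs.xml whose <bug> elements carry <tag> children:

        # Find bugs tagged both "ui" and "crash" (the schema is hypothetical).
        xmllint --xpath '//bug[tag="ui" and tag="crash"]' bugs.xml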

    Read the article

  • Need a solution for network/servers

    - by rehanplus
    Dear all, please help me. I just joined a new hospital and want some help managing my network. There are some requirements.

    Current network: There is a DSL connection terminated on a Linux proxy, which is then connected to D-Link layer-2 switches, providing internet to more than 200 PCs (increasing to 1,500 in a couple of months). The D-Link switches are not configured yet. There is also one database server, a report server, and an application server. In the near future the application should be accessed by local users as well as remote users from the internet via our web server. We also have a sharing server, and all these servers, databases, and PCs are on a single subnet.

    Required network: All I want is to secure my network from outside access, allowing only specific users via the web application; they will be submitting records for the patient card and appointment facility by means of the application, entering their records (into our database) but not touching our other network resources. Secondly, in-house users also need to access the same application as well as the internet, but they must have some unique identity and rights (e.g., finance and lab dept. people have limited access to that application).

    Notes: Should I create VLANs or break up subnets? Will having a firewall solve my issues? Is a router needed in this type of scenario? Currently all access is restricted by the Linux proxy. Thanks.

    Read the article

  • Cannot browse VisualSVN repositories

    - by user1783560
    I went through "Unable to browse repository after setting visual SVN Server" and tried to implement the suggested answer, but even using the IP address doesn't solve my problem. I have:

    1) Turned off the firewall.
    2) Set security rights on the directory C:\Program Files (x86)\VisualSVN Server to "full control" for all groups, including the "Network Service" group.
    3) Set security rights on the directory D:\Repositories to "full control" for most groups, including the "Network Service" group.
    4) Tried switching between the secure and non-secure connection options in the VisualSVN Server manager.
    5) Tried different port numbers, including 443 (with https) and 80 (http); also tried random ports with/without https.

    Using the computer name or IP address in the URL doesn't help. The Windows Network Diagnostics troubleshooting report has the following message for me: "Security policy settings or firewall settings on this computer might be blocking the connection." = Detected. Can someone guide me please? I have spent more than two weeks on this without any solution. Thanks in advance! I appreciate your responses.
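    Given the diagnostics message above, a hedged first check (not from the original post) is whether the server is listening and reachable at all:

        rem On the server: is VisualSVN Server actually bound to the port?
        netstat -ano | findstr :443
        rem From a client: can the port be reached through the firewall?
        rem (<server-ip> is a placeholder for the real address.)
        telnet <server-ip> 443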

    Read the article

  • Make eix available version match emerge

    - by Ryaner
    We have our Gentoo hosts using a binhost, with EMERGE_DEFAULT_OPTS="--getbinpkgonly --usepkgonly" in the make.conf file so that each host only pulls down binary packages. All works well from that side. I use eix to check software versions for upgrades, but have hit a problem where eix sees an available version ahead of what is available on the binary server. Using glibc as an example:

        ietpl [VE] / # emerge -s glibc
        Searching...
        [ Results for search key : glibc ]
        [ Applications found : 1 ]
        * sys-libs/glibc
        Latest version available: 2.14.1-r3
        Latest version installed: 2.14.1-r3
        Homepage:
        Description: GNU libc6 (also called glibc2) C library
        License: LGPL-2

    Then eix reports a higher version available:

        ietpl [VE] / # export LASTVERSION='{last}<version>{}'
        ietpl [VE] / # /usr/bin/eix --nocolor --format '<category> <name> [<installedversions:LASTVERSION>] [<bestversion:LASTVERSION>] \n' --exact --category-name sys-libs/glibc
        sys-libs glibc [2.14.1-r3] [2.15-r2]

    What I'm after is for eix to report the latest available version as 2.14.1-r3, like emerge does. I have a feeling this is possible since, without any formatting, eix returns:

        Available versions: (2.2) ~2.9_p20081201-r3!s 2.10.1-r1!s 2.11.3!s ~2.12.1-r3!s 2.12.2!s{tbz2} ~2.13-r2!s 2.13-r4!s ~2.14!s ~2.14.1-r2!s 2.14.1-r3!s{tbz2} ~2.15-r1!s 2.15-r2!s ~2.15-r3!s **2.16.0!s **9999!s

    correctly tagging the latest unmasked binary package with {tbz2}. I would have thought that the --binary flag would do it, but that returns no matches:

        --binary Match packages with *.tbz2 files.

    Read the article

  • Cannot connect to my nginx server from a remote machine

    - by margincall
    I thought it was an iptables problem, but it seems not, and I really have no idea about this situation. I have a hosted server (CentOS). I installed nginx + Django, and nginx uses port 8080. A domain is connected to the server. When I executed "wget [domain]:8080/[app name]/" on the server, it worked. Of course, "wget 127.0.0.1:8080/[app name]/" had no problem ("wget [server ip]:8080/[app name]/", either). However, connecting from other computers failed (the message says "no route"). I checked my firewall settings and executed these commands:

        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        iptables -I OUTPUT -p tcp --sport 8080 -j ACCEPT
        iptables -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
        /etc/init.d/iptables restart

    I don't really understand all the options, and I think some of the commands were useless, but I tried all the iptables settings I could google. Still I cannot connect to my server. What should I check first? I don't know if this is important, but: an Apache server is running on port 80, and it works fine; I can connect to Apache from other computers. There is a DB connection issue (PHP to MySQL), but I think that is just a PHP coding bug. Please excuse my English; I'm not a native speaker, but I tried to explain as well as possible. Thank you for reading this question.
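    "No route" errors on CentOS often come from the default firewall chain rejecting the packet before it reaches nginx; a hedged sketch of what to check (not from the original question):

        # Show where port 8080 sits relative to the chain's final REJECT rule.
        iptables -L RH-Firewall-1-INPUT -n --line-numbers
        # Rules appended with -A land after the default REJECT and never match;
        # insert the ACCEPT near the top of the chain instead.
        iptables -I RH-Firewall-1-INPUT 1 -m state --state NEW -p tcp --dport 8080 -j ACCEPT
        # Save, because "/etc/init.d/iptables restart" reloads only saved rules.
        service iptables save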

    Read the article

  • Word 2010 header "different first page" option does not work if you select the same built-in header preset?

    - by fredsbend
    I have a three-page Word 2010 document. I have set a header on the first page and checked the "Different First Page" option to make the following pages' headers different. It works as expected as long as I don't select the same built-in header preset for the following pages. Here's what I am doing:

    1. Check "Different First Page".
    2. Make the header for the first page using the Alphabet preset.
    3. Attempt to make the header for the following pages by starting with the same Alphabet preset. I only want to change the text of the following headers but keep the same graphical effects.
    4. Click out of the header into the body of the document.

    Upon doing this, the headers on the first page are updated to the ones I just made for the following pages. I don't think I am doing something wrong, because I can choose a different header preset and it works as expected. If I select the same preset, however, it updates all headers, whether "Different First Page" is selected or not. If this is repeatable on others' computers then I would say it's a Word bug. If not, then please help me figure out how to get this working right.

    Read the article

  • Picasa duplicates everything into a `$My Pictures` folder.

    - by Barend
    My parents have a Windows XP laptop running Picasa, the Dutch version of both. It's configured to put imported photos in C:\Documents and Settings\user\Mijn documenten\Mijn afbeeldingen, which is Windows XP's default My Pictures folder localized to Dutch. A while back I noticed disk space was disappearing much faster than it should. Turns out there was a Documents and Settings\user\Mijn documenten\$My Pictures folder pretty much the same size as the original one. This was well over a year ago. I figured it to be an internationalization bug: the $My Pictures name looks like a placeholder that didn't get resolved. I figured it'd get fixed soon. It didn't. I threw out Picasa, tediously cleaned up the mess it left behind, and replaced it with Windows Live Photo Gallery. My parents found WLPG unworkable and asked to get Picasa back. I reinstalled it; by now it's at 3.8.0 (build 117.29, 0). Wouldn't you know it, the mystery $My Pictures folder is back, 25 GB of disk space has evaporated, and it's the same mess it used to be. What's going on here? How do I stop it doing this?

    Read the article

  • Reporting SQL Vulnerability [migrated]

    - by Ciaran87Bel
    My first post here, so I'll hopefully keep it simple. I have just finished building a CMS targeted at a certain industry and built a test site to see how everything works. Anyway, I wrote a program to check for SQL injection vulnerabilities, and the program followed a blog link to an external website. The program discovered that the external site had a massive vulnerability that left it open to practically anyone, who could then access every bit of data on their MySQL server and run queries, etc. The thing is, this external site is the brand leader in their industry and does millions upon millions of sales per annum. I have tried contacting them to let them know, and even went as far as contacting the company that built their platform, but I was pretty much brushed off and haven't heard back from them. Their database would contain the details of hundreds of thousands of customers and all their data. I could easily make myself a site admin in a few seconds, but they won't listen to me even though I have offered to share the vulnerability with them and help in any way I can. Is there anything else I can do? It is one of the biggest security risks I have ever personally come across. Are there any other steps I should take to report this? Thanks

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Sites (a.k.a. WAWS) as well as Windows Azure Cloud Services (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once it is downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the worker role project file; you can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool, a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back to Visual Studio: open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
        var express = require("express");
        var sql = require("node-sqlserver");

        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
        var port = 80;

        var app = express();

        app.configure(function () {
            app.use(express.bodyParser());
        });

        app.get("/", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/text/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.get("/sproc/:key/:culture", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var key = req.params.key;
                    var culture = req.params.culture;
                    var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        app.post("/new", function (req, res) {
            var key = req.body.key;
            var culture = req.body.culture;
            var val = req.body.val;

            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                    conn.queryRaw(command, function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.send(200, "Inserted Successful");
                        }
                    });
                }
            });
        });

        app.listen(port);

    Now let's create a new function to copy the records from WASD to the table service:

    1. Delete the table named "resource".
    2. Create a new table named "resource". (These two steps ensure that we have an empty table.)
    3. Load all records from the "resource" table in WASD.
    4. For each record loaded from WASD, insert it into the table one by one.
    5. Prompt the user when finished.

    In order to use the table service we need the storage account and key, which can be found in the developer portal: just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS, we need to import the azure module; I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
    This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

        var azure = require("azure");
        var storageAccountName = "synctile";
        var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
        var tableName = "resource";
        var client = azure.createTableService(storageAccountName, storageAccountKey);

    Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                            }
                        }
                    });
                }
            });
        });

    When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I had just created previously.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            // transform the records
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function that will be performed when the table has been deleted or the operation has failed. Underneath, the azure module performs the table deletion asynchronously in a POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback: if we put the table creation code after the deletion call instead, the two would run in parallel.

    Next, for each record in WASD I created an entity and inserted it into the table service. Finally I sent the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
    The correct logic should be: only when all entities have been copied to the table service with no error should I send the response to the browser; otherwise I should send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code of inserting table entities. The first argument of "forEach" is the array to be processed. The second argument is the operation for each item in the array. And the third argument will be invoked when all items have been processed or any error has occurred; here we can send our response to the browser.

        app.get("/was/init", function (req, res) {
            // load all records from windows azure sql database
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            if (results.rows.length > 0) {
                                // begin to transform the records into table service
                                // recreate the table named 'resource'
                                client.deleteTable(tableName, function (error) {
                                    client.createTableIfNotExists(tableName, function (error) {
                                        if (error) {
                                            error["target"] = "createTableIfNotExists";
                                            res.send(500, error);
                                        }
                                        else {
                                            async.forEach(results.rows,
                                                // transform the records
                                                function (row, callback) {
                                                    var entity = {
                                                        "PartitionKey": row[1],
                                                        "RowKey": row[0],
                                                        "Value": row[2]
                                                    };
                                                    client.insertEntity(tableName, entity, function (error) {
                                                        if (error) {
                                                            callback(error);
                                                        }
                                                        else {
                                                            console.log("entity inserted.");
                                                            callback(null);
                                                        }
                                                    });
                                                },
                                                // send response
                                                function (error) {
                                                    if (error) {
                                                        error["target"] = "insertEntity";
                                                        res.send(500, error);
                                                    }
                                                    else {
                                                        console.log("all done");
                                                        res.send(200, "All done!");
                                                    }
                                                }
                                            );
                                        }
                                    });
                                });
                            }
                        }
                    });
                }
            });
        });

    Run it locally and now we can see the response is sent after all entities have been inserted.

    Querying entities against the table service is simple as well: just use the "queryEntity" method from the table service client, providing the partition key and row key. We can also provide more complex query criteria; for example, see the code here. In the code below I queried an entity by partition key and row key, and returned the proper localization value in the response.

        app.get("/was/:key/:culture", function (req, res) {
            var key = req.params.key;
            var culture = req.params.culture;
            client.queryEntity(tableName, culture, key, function (error, entity) {
                if (error) {
                    res.send(500, error);
                }
                else {
                    res.json(entity);
                }
            });
        });

    And then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

    Consume Service Runtime

    As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
    But if you have played with WACS you will know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime, which means we can define these values in the CSCFG and CSDEF files and the runtime will retrieve the proper values for us. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application.

    In the Node.js SDK we can get an object from "azure.RoleEnvironment" which provides the functionality to retrieve the configuration settings and endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve the settings and initialize the table client.

        var connectionString = "";
        var storageAccountName = "";
        var storageAccountKey = "";
        var tableName = "";
        var client;

        azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
            if (error) {
                console.log("ERROR: getConfigurationSettings");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(settings));
                connectionString = settings["SqlConnectionString"];
                storageAccountName = settings["StorageAccountName"];
                storageAccountKey = settings["StorageAccountKey"];
                tableName = settings["TableName"];

                console.log("connectionString = %s", connectionString);
                console.log("storageAccountName = %s", storageAccountName);
                console.log("storageAccountKey = %s", storageAccountKey);
                console.log("tableName = %s", tableName);

                client = azure.createTableService(storageAccountName, storageAccountKey);
            }
        });

    In this way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

        azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
            if (error) {
                console.log("ERROR: getCurrentRoleInstance");
                console.log(JSON.stringify(error));
            }
            else {
                console.log(JSON.stringify(instance));
                if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
                    var endpoint = instance["endpoints"]["nodejs"];
                    app.listen(endpoint["port"]);
                }
                else {
                    app.listen(8080);
                }
            }
        });

    But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js runs inside the role, the named pipe is established between the worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file in the azure project and add a new element named Runtime, containing an element named EntryPoint which specifies the Node.js command line (a minimal sketch follows below). With that in place, the Node.js application has the connection to the service runtime and is able to read the configurations. Starting Node.js in the local emulator, we can see it retrieves the connection string and storage account for local.
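    The original post showed this CSDEF change as a screenshot, so the fragment below is a reconstruction rather than the author's exact markup: a minimal sketch assuming the role is named "WorkerRole1" and the startup script is "index.js".

        <WorkerRole name="WorkerRole1" vmsize="Small">
          <Runtime>
            <EntryPoint>
              <!-- Launch node.exe directly so the service runtime's named pipe
                   is established for the Node.js process itself. -->
              <ProgramEntryPoint commandLine="node.exe index.js"
                                 setReadyOnProcessStart="true" />
            </EntryPoint>
          </Runtime>
        </WorkerRole>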
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve configuration settings and endpoint information; in order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

    I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Sites and Windows Azure Cloud Service worker roles to host our Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform, but since it is very simple and easy to learn and deploy, and since it utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not support SQL parameters yet, while "azure" does support using a storage connection string to create the storage client. Microsoft is working on making them easier to use and on adding more features and functionality.

    PS: you can download the source code here; the source code of my "Copy all always" tool is available here.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Using R to Analyze G1GC Log Files

    - by user12620111
N)\\b",e:hljs.IMMEDIATE_RE,r:1},{cN:"identifier",b:"[a-zA-Z.][a-zA-Z0-9._]*\\b",e:hljs.IMMEDIATE_RE,r:0},{cN:"operator",b:"|=||   Using R to Analyze G1GC Log Files   Using R to Analyze G1GC Log Files Introduction Working in Oracle Platform Integration gives an engineer opportunities to work on a wide array of technologies. My team’s goal is to make Oracle applications run best on the Solaris/SPARC platform. When looking for bottlenecks in a modern applications, one needs to be aware of not only how the CPUs and operating system are executing, but also network, storage, and in some cases, the Java Virtual Machine. I was recently presented with about 1.5 GB of Java Garbage First Garbage Collector log file data. If you’re not familiar with the subject, you might want to review Garbage First Garbage Collector Tuning by Monica Beckwith. The customer had been running Java HotSpot 1.6.0_31 to host a web application server. I was told that the Solaris/SPARC server was running a Java process launched using a commmand line that included the following flags: -d64 -Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=80 -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintFlagsFinal -XX:+DisableExplicitGC -XX:+UnlockExperimentalVMOptions -XX:ParallelGCThreads=8 Several sources on the internet indicate that if I were to print out the 1.5 GB of log files, it would require enough paper to fill the bed of a pick up truck. Of course, it would be fruitless to try to scan the log files by hand. Tools will be required to summarize the contents of the log files. Others have encountered large Java garbage collection log files. There are existing tools to analyze the log files: IBM’s GC toolkit The chewiebug GCViewer gchisto HPjmeter Instead of using one of the other tools listed, I decide to parse the log files with standard Unix tools, and analyze the data with R. Data Cleansing The log files arrived in two different formats. I guess that the difference is that one set of log files was generated using a more verbose option, maybe -XX:+PrintHeapAtGC, and the other set of log files was generated without that option. Format 1 In some of the log files, the log files with the less verbose format, a single trace, i.e. the report of a singe garbage collection event, looks like this: {Heap before GC invocations=12280 (full 61): garbage-first heap total 9437184K, used 7499918K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 1 young (4096K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 2014-05-14T07:24:00.988-0700: 60586.353: [GC pause (young) 7324M->7320M(9216M), 0.1567265 secs] Heap after GC invocations=12281 (full 61): garbage-first heap total 9437184K, used 7496533K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000) region size 4096K, 0 young (0K), 0 survivors (0K) compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000) the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000) No shared spaces configured. 
        }

    A simple grep can be used to extract a summary:

        $ grep "\[GC pause (young" g1gc.log
        2014-05-13T13:24:35.091-0700: 3.109: [GC pause (young) 20M->5029K(9216M), 0.0146328 secs]
        2014-05-13T13:24:35.440-0700: 3.459: [GC pause (young) 9125K->6077K(9216M), 0.0086723 secs]
        2014-05-13T13:24:37.581-0700: 5.599: [GC pause (young) 25M->8470K(9216M), 0.0203820 secs]
        2014-05-13T13:24:42.686-0700: 10.704: [GC pause (young) 44M->15M(9216M), 0.0288848 secs]
        2014-05-13T13:24:48.941-0700: 16.958: [GC pause (young) 51M->20M(9216M), 0.0491244 secs]
        2014-05-13T13:24:56.049-0700: 24.066: [GC pause (young) 92M->26M(9216M), 0.0525368 secs]
        2014-05-13T13:25:34.368-0700: 62.383: [GC pause (young) 602M->68M(9216M), 0.1721173 secs]

    But that format wasn't easily read into R, so I needed to be a bit more tricky. I used the following Unix commands to create a summary file that was easy for R to read:

        $ echo "SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime"
        $ grep "\[GC pause (young" g1gc.log | grep -v mark | sed -e 's/[A-SU-z\(\),]/ /g' -e 's/->/ /' -e 's/: / /g' | more
        SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime
        2014-05-13T13:24:35.091-0700 3.109 20 5029 9216 0.0146328
        2014-05-13T13:24:35.440-0700 3.459 9125 6077 9216 0.0086723
        2014-05-13T13:24:37.581-0700 5.599 25 8470 9216 0.0203820
        2014-05-13T13:24:42.686-0700 10.704 44 15 9216 0.0288848
        2014-05-13T13:24:48.941-0700 16.958 51 20 9216 0.0491244
        2014-05-13T13:24:56.049-0700 24.066 92 26 9216 0.0525368
        2014-05-13T13:25:34.368-0700 62.383 602 68 9216 0.1721173

    Format 2

    In the log files with the more verbose format, a single trace, i.e. the report of a single garbage collection event, was more complicated than Format 1. Here is a text file with an example of a single G1GC trace in the second format. As you can see, it is quite complicated. It is nice that there is so much information available, but the level of detail can be overwhelming. I wrote this awk script (download) to summarize each trace on a single line.

        #!/usr/bin/env awk -f
        BEGIN {
            printf("SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize\n")
        }

        ######################
        # Save count data from lines that are at the start of each G1GC trace.
        # Each trace starts out like this:
        # {Heap before GC invocations=14 (full 0):
        #  garbage-first heap   total 9437184K, used 325496K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
        ######################
        /{Heap.*full/ {
            gsub("\\)", "");
            nf = split($0, a, "=");
            split(a[2], b, " ");
            getline;
            if (match($0, "first")) {
                G1GC = 1;
                IncrementalCount = b[1];
                FullCount = substr(b[3], 1, length(b[3]) - 1);
            } else {
                G1GC = 0;
            }
        }

        ######################
        # Pull out time stamps that are in lines with this format:
        # 2014-05-12T14:02:06.025-0700: 94.312: [GC pause (young), 0.08870154 secs]
        ######################
        /GC pause/ {
            DateTime = $1;
            SecondsSinceLaunch = substr($2, 1, length($2) - 1);
        }

        ######################
        # Heap sizes are in lines that look like this:
        # [ 4842M->4838M(9216M)]
        ######################
        /\[ .*]$/ {
            gsub("\\[", "");
            gsub("\\]", "");
            gsub("->", " ");
            gsub("\\(", " ");
            gsub("\\)", " ");
            split($0, a, " ");
            if (split(a[1], b, "M") > 1) { BeforeSize = b[1] * 1024; }
            if (split(a[1], b, "K") > 1) { BeforeSize = b[1]; }
            if (split(a[2], b, "M") > 1) { AfterSize = b[1] * 1024; }
            if (split(a[2], b, "K") > 1) { AfterSize = b[1]; }
            if (split(a[3], b, "M") > 1) { TotalSize = b[1] * 1024; }
            if (split(a[3], b, "K") > 1) { TotalSize = b[1]; }
        }

        ######################
        # Emit an output line when you find input that looks like this:
        # [Times: user=1.41 sys=0.08, real=0.24 secs]
        ######################
        /\[Times/ {
            if (G1GC == 1) {
                gsub(",", "");
                split($2, a, "="); UserTime = a[2];
                split($3, a, "="); SysTime = a[2];
                split($4, a, "="); RealTime = a[2];
                print DateTime, SecondsSinceLaunch, IncrementalCount, FullCount, UserTime, SysTime, RealTime, BeforeSize, AfterSize, TotalSize;
                G1GC = 0;
            }
        }

    The resulting summary is about 25X smaller than the original file, but still difficult for a human to digest:

        SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ...
        2014-05-12T18:36:34.669-0700: 3985.744 561 0 0.57 0.06 0.16 1724416 1720320 9437184
        2014-05-12T18:36:34.839-0700: 3985.914 562 0 0.51 0.06 0.19 1724416 1720320 9437184
        2014-05-12T18:36:35.069-0700: 3986.144 563 0 0.60 0.04 0.27 1724416 1721344 9437184
        2014-05-12T18:36:35.354-0700: 3986.429 564 0 0.33 0.04 0.09 1725440 1722368 9437184
        2014-05-12T18:36:35.545-0700: 3986.620 565 0 0.58 0.04 0.17 1726464 1722368 9437184
        2014-05-12T18:36:35.726-0700: 3986.801 566 0 0.43 0.05 0.12 1726464 1722368 9437184
        2014-05-12T18:36:35.856-0700: 3986.930 567 0 0.30 0.04 0.07 1726464 1723392 9437184
        2014-05-12T18:36:35.947-0700: 3987.023 568 0 0.61 0.04 0.26 1727488 1723392 9437184
        2014-05-12T18:36:36.228-0700: 3987.302 569 0 0.46 0.04 0.16 1731584 1724416 9437184

    Reading the Data into R

    Once the GC log data had been cleansed, either by processing the first format with the shell commands or by processing the second format with the awk script, it was easy to read the data into R.

        g1gc.df = read.csv("summary.txt", row.names = NULL, stringsAsFactors=FALSE, sep="")
        str(g1gc.df)
        ## 'data.frame': 8307 obs. of 10 variables:
        ## $ row.names : chr "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ...
        ## $ SecondsSinceLaunch: num 1.16 1.47 1.97 3.83 6.1 ...
        ## $ IncrementalCount : int 0 1 2 3 4 5 6 7 8 9 ...
        ## $ FullCount : int 0 0 0 0 0 0 0 0 0 0 ...
        ## $ UserTime : num 0.11 0.05 0.04 0.21 0.08 0.26 0.31 0.33 0.34 0.56 ...
        ## $ SysTime : num 0.04 0.01 0.01 0.05 0.01 0.06 0.07 0.06 0.07 0.09 ...
        ## $ RealTime : num 0.02 0.02 0.01 0.04 0.02 0.04 0.05 0.04 0.04 0.06 ...
        ## $ BeforeSize : int 8192 5496 5768 22528 24576 43008 34816 53248 55296 93184 ...
        ## $ AfterSize : int 1400 1672 2557 4907 7072 14336 16384 18432 19456 21504 ...
        ## $ TotalSize : int 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 ...

        head(g1gc.df)
        ## row.names SecondsSinceLaunch IncrementalCount
        ## 1 2014-05-12T14:00:32.868-0700: 1.161 0
        ## 2 2014-05-12T14:00:33.179-0700: 1.472 1
        ## 3 2014-05-12T14:00:33.677-0700: 1.969 2
        ## 4 2014-05-12T14:00:35.538-0700: 3.830 3
        ## 5 2014-05-12T14:00:37.811-0700: 6.103 4
        ## 6 2014-05-12T14:00:41.428-0700: 9.720 5
        ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 1 0 0.11 0.04 0.02 8192 1400 9437184
        ## 2 0 0.05 0.01 0.02 5496 1672 9437184
        ## 3 0 0.04 0.01 0.01 5768 2557 9437184
        ## 4 0 0.21 0.05 0.04 22528 4907 9437184
        ## 5 0 0.08 0.01 0.02 24576 7072 9437184
        ## 6 0 0.26 0.06 0.04 43008 14336 9437184

    Basic Statistics

    Once the data has been read into R, simple statistics are very easy to generate. All of the numbers from high school statistics are available via simple commands. For example, generate a summary of every column:

        summary(g1gc.df)
        ## row.names SecondsSinceLaunch IncrementalCount FullCount
        ## Length:8307 Min. : 1 Min. : 0 Min. : 0.0
        ## Class :character 1st Qu.: 9977 1st Qu.:2048 1st Qu.: 0.0
        ## Mode :character Median :12855 Median :4136 Median : 12.0
        ## Mean :12527 Mean :4156 Mean : 31.6
        ## 3rd Qu.:15758 3rd Qu.:6262 3rd Qu.: 61.0
        ## Max. :55484 Max. :8391 Max. :113.0
        ## UserTime SysTime RealTime BeforeSize
        ## Min. :0.040 Min. :0.0000 Min. : 0.0 Min. : 5476
        ## 1st Qu.:0.470 1st Qu.:0.0300 1st Qu.: 0.1 1st Qu.:5137920
        ## Median :0.620 Median :0.0300 Median : 0.1 Median :6574080
        ## Mean :0.751 Mean :0.0355 Mean : 0.3 Mean :5841855
        ## 3rd Qu.:0.920 3rd Qu.:0.0400 3rd Qu.: 0.2 3rd Qu.:7084032
        ## Max. :3.370 Max. :1.5600 Max. :488.1 Max. :8696832
        ## AfterSize TotalSize
        ## Min. : 1380 Min. :9437184
        ## 1st Qu.:5002752 1st Qu.:9437184
        ## Median :6559744 Median :9437184
        ## Mean :5785454 Mean :9437184
        ## 3rd Qu.:7054336 3rd Qu.:9437184
        ## Max. :8482816 Max. :9437184

    Q: What is the total amount of User CPU time spent in garbage collection?

        sum(g1gc.df$UserTime)
        ## [1] 6236

    As you can see, less than two hours of CPU time was spent in garbage collection. Is that too much? To find the percentage of time spent in garbage collection, divide the number above by total_elapsed_time * CPU_count. In this case, there are a lot of CPUs and it turns out that the overall amount of CPU time spent in garbage collection isn't a problem when viewed in isolation. (A short sketch of this computation appears at the end of this article.)

    When calculating rates, i.e. events per unit time, you need to ask yourself whether the rate is homogeneous across the time period in the log file. Does the log file include spikes of high activity that should be analyzed separately? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the "Time Series" section, below.

    Q: How much garbage is collected on each pass?

    The amount of heap space that is recovered per GC pass is surprisingly low:

    - At least one collection didn't recover any data. ("Min.=0")
    - 25% of the passes recovered 3MB or less. ("1st Qu.=3072")
    - Half of the GC passes recovered 4MB or less. ("Median=4096")
    - The average amount recovered was 56MB. ("Mean=56390")
    - 75% of the passes recovered 36MB or less. ("3rd Qu.=36860")
    - At least one pass recovered 2GB. ("Max.=2121000")

        g1gc.df$Delta = g1gc.df$BeforeSize - g1gc.df$AfterSize
        summary(g1gc.df$Delta)
        ## Min. 1st Qu. Median Mean 3rd Qu. Max.
        ## 0 3070 4100 56400 36900 2120000

    Q: What is the maximum User CPU time for a single collection?

    The worst garbage collection ("Max.") is many standard deviations away from the mean. The data appears to be right skewed.

        summary(g1gc.df$UserTime)
        ## Min. 1st Qu. Median Mean 3rd Qu. Max.
        ## 0.040 0.470 0.620 0.751 0.920 3.370
        sd(g1gc.df$UserTime)
        ## [1] 0.3966

    Basic Graphics

    Once the data is in R, it is trivial to plot the data with formats including dot plots, line charts, bar charts (simple, stacked, grouped), pie charts, boxplots, scatter plots, histograms, and kernel density plots.

    Histogram of User CPU Time per Collection

    I don't think that this graph requires any explanation.

        hist(g1gc.df$UserTime,
             main="User CPU Time per Collection",
             xlab="Seconds", ylab="Frequency")

    Box plot to identify outliers

    When the initial data is viewed with a box plot, you can see the one crazy outlier in the real time per GC. Save this data point for future analysis and drop the outlier so that it's not throwing off our statistics. Now the box plot shows many outliers, which will be examined later using time series analysis. Notice that the scale of the x-axis changes drastically once the crazy outlier is removed.

        par(mfrow=c(2,1))
        boxplot(g1gc.df$UserTime, g1gc.df$SysTime, g1gc.df$RealTime,
                main="Box Plot of Time per GC\n(dominated by a crazy outlier)",
                names=c("usr","sys","elapsed"),
                xlab="Seconds per GC", ylab="Time (Seconds)",
                horizontal = TRUE, outcol="red")
        crazy.outlier.df = g1gc.df[g1gc.df$RealTime > 400,]
        g1gc.df = g1gc.df[g1gc.df$RealTime < 400,]
        boxplot(g1gc.df$UserTime, g1gc.df$SysTime, g1gc.df$RealTime,
                main="Box Plot of Time per GC\n(crazy outlier excluded)",
                names=c("usr","sys","elapsed"),
                xlab="Seconds per GC", ylab="Time (Seconds)",
                horizontal = TRUE, outcol="red")
        box(which = "outer", lty = "solid")

    Here is the crazy outlier for future analysis:

        crazy.outlier.df
        ## row.names SecondsSinceLaunch IncrementalCount
        ## 8233 2014-05-12T23:15:43.903-0700: 20741 8316
        ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 8233 112 0.55 0.42 488.1 8381440 8235008 9437184
        ## Delta
        ## 8233 146432

    R Time Series Data

    To analyze the garbage collection as a time series, I'll use Z's Ordered Observations (zoo).
“zoo is the creator for an S3 class of indexed totally ordered observations which includes irregular time series.”

require(zoo)
## Loading required package: zoo
##
## Attaching package: 'zoo'
##
## The following objects are masked from 'package:base':
##
## as.Date, as.Date.numeric

head(g1gc.df[,1])
## [1] "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:"
## [3] "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:"
## [5] "2014-05-12T14:00:37.811-0700:" "2014-05-12T14:00:41.428-0700:"

options("digits.secs"=3)
times=as.POSIXct( g1gc.df[,1], format="%Y-%m-%dT%H:%M:%OS%z:")
g1gc.z = zoo(g1gc.df[,-c(1)], order.by=times)
head(g1gc.z)
## SecondsSinceLaunch IncrementalCount FullCount
## 2014-05-12 17:00:32.868 1.161 0 0
## 2014-05-12 17:00:33.178 1.472 1 0
## 2014-05-12 17:00:33.677 1.969 2 0
## 2014-05-12 17:00:35.538 3.830 3 0
## 2014-05-12 17:00:37.811 6.103 4 0
## 2014-05-12 17:00:41.427 9.720 5 0
## UserTime SysTime RealTime BeforeSize AfterSize
## 2014-05-12 17:00:32.868 0.11 0.04 0.02 8192 1400
## 2014-05-12 17:00:33.178 0.05 0.01 0.02 5496 1672
## 2014-05-12 17:00:33.677 0.04 0.01 0.01 5768 2557
## 2014-05-12 17:00:35.538 0.21 0.05 0.04 22528 4907
## 2014-05-12 17:00:37.811 0.08 0.01 0.02 24576 7072
## 2014-05-12 17:00:41.427 0.26 0.06 0.04 43008 14336
## TotalSize Delta
## 2014-05-12 17:00:32.868 9437184 6792
## 2014-05-12 17:00:33.178 9437184 3824
## 2014-05-12 17:00:33.677 9437184 3211
## 2014-05-12 17:00:35.538 9437184 17621
## 2014-05-12 17:00:37.811 9437184 17504
## 2014-05-12 17:00:41.427 9437184 28672

Example of Two Benchmark Runs in One Log File

The data in the following graph is from a different log file, not the one of primary interest to this article. I'm including this image because it is an example of idle periods followed by busy periods. It would be uninteresting to average the rate of garbage collection over the entire log file period. More interesting would be the rate of garbage collection in the two busy periods. Are they the same or different? (A sketch of such a rate calculation follows below.) Your production data may be similar, for example, bursts when employees return from lunch and idle times on weekend evenings, etc. Once the data is in an R Time Series, you can analyze isolated time windows.

Clipping the Time Series data

Flashing back to our test case… Viewing the data as a time series is interesting. You can see that the work intensive time period is between 9:00 PM and 3:00 AM. Let's clip the data to the interesting period:

par(mfrow=c(2,1))
plot(g1gc.z$UserTime, type="h",
    main="User Time per GC\nTime: Complete Log File",
    xlab="Time of Day", ylab="CPU Seconds per GC",
    col="#1b9e77")
clipped.g1gc.z=window(g1gc.z,
    start=as.POSIXct("2014-05-12 21:00:00"),
    end=as.POSIXct("2014-05-13 03:00:00"))
plot(clipped.g1gc.z$UserTime, type="h",
    main="User Time per GC\nTime: Limited to Benchmark Execution",
    xlab="Time of Day", ylab="CPU Seconds per GC",
    col="#1b9e77")
box(which = "outer", lty = "solid")

Cumulative Incremental and Full GC count

Here is the cumulative incremental and full GC count. When the line is very steep, it indicates that the GCs are repeating very quickly. Notice that the scale on the Y axis is different for full vs. incremental.

plot(clipped.g1gc.z[,c(2:3)],
    main="Cumulative Incremental and Full GC count",
    xlab="Time of Day",
    col="#1b9e77")
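Following up on the question above about comparing GC rates across separate busy periods, here is a minimal sketch of a per-window rate calculation. The window boundaries are placeholder assumptions for illustration; substitute the start and end times that bound each busy period in your own log.

# Minimal sketch: GC events per minute inside a time window.
# Assumes g1gc.z has been built as above; one zoo row per GC event.
gc.rate <- function(z, from, to) {
    w <- window(z, start=as.POSIXct(from), end=as.POSIXct(to))
    minutes <- as.numeric(difftime(end(w), start(w), units="mins"))
    nrow(w) / minutes
}
# Compare two periods (boundaries are assumed values):
gc.rate(g1gc.z, "2014-05-12 21:00:00", "2014-05-13 00:00:00")
gc.rate(g1gc.z, "2014-05-13 00:00:00", "2014-05-13 03:00:00")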
GC Analysis of Benchmark Execution using Time Series data

In the following series of 3 graphs:

The “After Size” graph shows the amount of heap space in use after each garbage collection. Many Java objects are still referenced, i.e. alive, during each garbage collection. This may indicate that the application has a memory leak, or it may indicate that the application has a very large memory footprint. Typically, an application's memory footprint plateaus in the early stage of execution. One would expect this graph to have a flat top. The steep decline in the heap space may indicate that the application crashed after 2:00.

The second graph shows that the outliers in real execution time, discussed above, occur near 2:00, when the Java heap seems to be quite full.

The third graph shows that Full GCs are infrequent during the first few hours of execution. The rate of Full GCs (the slope of the cumulative Full GC line) changes near midnight.

plot(clipped.g1gc.z[,c("AfterSize","RealTime","FullCount")],
    xlab="Time of Day",
    col=c("#1b9e77","red","#1b9e77"))

GC Analysis of heap recovered

Each GC trace includes the amount of heap space in use before and after the individual GC event. During garbage collection, unreferenced objects are identified, the space holding the unreferenced objects is freed, and thus the difference in before and after usage indicates how much space has been freed. The following box plot and bar chart both demonstrate the same point - the amount of heap space freed per garbage collection is surprisingly low.

par(mfrow=c(2,1))
boxplot(as.vector(clipped.g1gc.z$Delta),
    main="Amount of Heap Recovered per GC Pass",
    xlab="Size in KB",
    horizontal = TRUE,
    col="red")
hist(as.vector(clipped.g1gc.z$Delta),
    main="Amount of Heap Recovered per GC Pass",
    xlab="Size in KB",
    breaks=100,
    col="red")
box(which = "outer", lty = "solid")

This graph is the most interesting. The dark blue area shows how much heap is occupied by referenced Java objects. This represents memory that holds live data. The red fringe at the top shows how much data was recovered after each garbage collection.

barplot(clipped.g1gc.z[,c("AfterSize","Delta")],
    col=c("#7570b3","#e7298a"),
    xlab="Time of Day",
    border=NA)
legend("topleft",
    c("Live Objects","Heap Recovered on GC"),
    fill=c("#7570b3","#e7298a"))
box(which = "outer", lty = "solid")

When I discuss the data in the log files with the customer, I will ask for an explanation for the large amount of referenced data resident in the Java heap. There are two possibilities (a quick way to test between them is sketched after the conclusion):

There is a memory leak and the amount of space required to hold referenced objects will continue to grow, limited only by the maximum heap size. After the maximum heap size is reached, the JVM will throw an “Out of Memory” exception every time that the application tries to allocate a new object. If this is the case, the application needs to be debugged to identify why old objects are referenced when they are no longer needed.

The application has a legitimate requirement to keep a large amount of data in memory. The customer may want to further increase the maximum heap size. Another possible solution would be to partition the application across multiple cluster nodes, where each node has responsibility for managing a unique subset of the data.

Conclusion

In conclusion, R is a very powerful tool for the analysis of Java garbage collection log files. The primary difficulty is data cleansing so that information can be read into an R data frame. Once the data has been read into R, a rich set of tools may be used for thorough evaluation.
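As promised above, here is a quick, hedged sketch of how one might test between the two possibilities: regress the live-set size against time and inspect the slope. This is an illustrative follow-up, not part of the original analysis; it assumes the clipped.g1gc.z object built earlier.

# Minimal sketch: trend test on post-GC heap occupancy.
# A persistently positive slope suggests possibility 1 (a leak);
# a slope near zero suggests the footprint has plateaued (possibility 2).
after <- as.vector(clipped.g1gc.z$AfterSize)
secs <- as.numeric(index(clipped.g1gc.z))   # POSIXct index converted to seconds
fit <- lm(after ~ secs)
coef(fit)["secs"]                           # KB of additional live data per second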

    Read the article

  • Scroll Viewer not visible in wpf DataGrid

    - by cre-johnny07
I have a DataGrid in a Grid, but the ScrollViewer is not visible even though I set it to Auto. Below is my code; I can't figure out where the problem is. <Grid Grid.Row="0" Grid.Column="0"> <Grid.RowDefinitions> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto" ></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> <RowDefinition Height="Auto"></RowDefinition> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"></ColumnDefinition> <ColumnDefinition Width="Auto"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock Text="Doctor Name" Grid.Row="0" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Doctor Address" Grid.Row="1" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Entry Note" Grid.Row="2" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Join Date" Grid.Row="3" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Default Discount" Grid.Row="4" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Discount Valid Till" Grid.Row="5" Grid.Column="0" Margin="5,5,0,0"/> <TextBlock Text="Employee Name" Grid.Row="6" Grid.Column="0" Margin="5,5,0,0"/> <Grid Grid.Row="7" Grid.Column="0" Grid.ColumnSpan="2"> <Grid.ColumnDefinitions> <ColumnDefinition></ColumnDefinition> <ColumnDefinition></ColumnDefinition> <ColumnDefinition></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock Text="Report Type" Grid.Row="0" Grid.Column="0" Margin="5,5,0,0"/> <ComboBox Grid.Row="0" Grid.Column="1" Name="cmbReportType" Text="{Binding CurrentEntity.ReportType}"/> <Button Grid.Row="0" Grid.Column="2" Name="btnAddDetail" Content="Add Details" Command="{Binding AddDetailsCommand}"/> </Grid> <TextBox Grid.Row="0" Grid.Column="1" Margin="5,5,0,0" Width="190" Name="txtDocName" Text="{Binding CurrentEntity.RefName}"/> <TextBox Grid.Row="1" Grid.Column="1" Margin="5,5,0,0" Width="190" Height="75" Name="txtDocAddress" Text="{Binding CurrentEntity.RefAddress}"/> <TextBox Grid.Row="2" Grid.Column="1" Margin="5,5,0,0" Width="190" Height="100" Name="txtEntryNote" Text="{Binding CurrentEntity.EntryNotes}"/> <Custom:DatePicker Grid.Row="3" Grid.Column="1" Margin="5,3,0,0" Width="125" Name="dtpJoinDate" Height="24" HorizontalAlignment="Left" VerticalAlignment="Top" SelectedDate="{Binding CurrentEntity.DateStarted}" SelectedDateFormat="Short"/> <TextBox Grid.Row="4" Grid.Column="1" Height="25" Width="75" Name="txtDefaultDiscount" HorizontalAlignment="Left" Margin="5,0,0,0" VerticalAlignment="Top" Text="{Binding CurrentEntity.DefaultDiscount}"/> <Custom:DatePicker Grid.Row="5" Grid.Column="1" Margin="5,3,0,0" Width="125" Name="dtpValidTill" Height="24" HorizontalAlignment="Left" VerticalAlignment="Top" SelectedDate="{Binding CurrentEntity.DefaultDiscountValidTill}" SelectedDateFormat="Short"/> <ComboBox Grid.Row="6" Grid.Column="1" Margin="5,3,0,0" Width="190" Height="30" Name="cmbEmployeeName" ItemsSource="{Binding Employees}" DisplayMemberPath="FullName" SelectedIndex="{Binding SelecteIndex}"> </ComboBox> <Custom:DataGrid Grid.Row="8" Grid.Column="0" Grid.ColumnSpan="2" ItemsSource="{Binding XYZ}" AutoGenerateColumns="False" Name="grdTestDept"> <Custom:DataGrid.Columns> <Custom:DataGridTextColumn Binding="{Binding dep_id}" Width="40" Header="ID"/> <Custom:DataGridTextColumn
Binding="{Binding dep_name}" Width="125" Header="Name"/> <Custom:DataGridTextColumn Binding="{Binding default_data}" Width="100" Header="Default Data"/> </Custom:DataGrid.Columns> </Custom:DataGrid> </Grid> <Grid Grid.Row="0" Grid.Column="1" Grid.RowSpan="9"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" MinWidth="43"></ColumnDefinition> <ColumnDefinition Width="Auto" MinWidth="150"></ColumnDefinition> <ColumnDefinition Width="Auto" MinWidth="50"></ColumnDefinition> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="34*" ></RowDefinition> <RowDefinition Height="337.88*"></RowDefinition> </Grid.RowDefinitions> <TextBlock Text="Name: " Grid.Row="0" Grid.Column="0" Margin="5,4,0,0" /> <cc:ValueEnabledCombo Grid.Column="1" x:Name="cmbfilEmployeeName" Width="150" Height="30" Margin="5,4,0,0" VerticalAlignment="Top" SelectedIndex="0" ItemsSource="{Binding Employees}" DisplayMemberPath="FullName" SelectedValuePath="EmployeeId" cc:ValueEnabledCombo.SelectionChanged="{Binding SelectionChangedCommand}"> </cc:ValueEnabledCombo> <Button Grid.Column="2" Name="btnReport" Width="50" Content="Report" Height="28" Margin="5,4,0,0" Command="{Binding ReportCommand}" VerticalAlignment="Top" /> <Grid Grid.Row="1" Grid.Column="0" Grid.ColumnSpan="3"> <Custom:DataGrid ItemsSource="{Binding DoctorList}" AutoGenerateColumns="False" Name="grdDoctor" ScrollViewer.HorizontalScrollBarVisibility="Auto" ScrollViewer.VerticalScrollBarVisibility="Auto"> <Custom:DataGrid.Columns> <Custom:DataGridTextColumn Binding="{Binding RefName}" Width="Auto" Header="Doctor Name"/> <Custom:DataGridTextColumn Binding="{Binding EmployeeFullName}" Width="Auto" Header="Employee Name"/> </Custom:DataGrid.Columns> </Custom:DataGrid> </Grid> </Grid> </Grid>

    Read the article

  • How to stop an IOException error using whilst using a combination of jython, pyro and ant?

    - by Kelso
So the wonderful low down on this doozie of a problem: Short version: We are building a distribution system for this item of software we're using. Basically we take our build artifact, store it on an FTP server, which passes it to multiple clients, which execute scripts to patch their servers. Long version: 1 distribution server, multiple client servers. Software: jython 2.5.1, ant 1.8.0, pyro 3.10. The distribution server has an FTP server and a PYRO client running on it. Each client server has a PYRO server running on it. When the PYRO client is told to start the patch procedure, it reads a machine list which contains a list of all the client servers, then connects to each of the PYRO servers one by one and executes the patch procedure. The procedure is: getPatch (gets the latest patch for that server), StopServer (stops the software that may or may not be accessing what needs to be patched), Apply patch, StartServer. Each of the processes calls an ANT script, passing in some folder names and other config. The fun part happens when you go to apply the patch. See below for the error log. I had to remove the folder names for NDA reasons. This is where it gets interesting. Running each section of the procedure individually (i.e. running getPatch, StopServer, etc. one at a time manually), this bug doesn't happen. Physically going to the machine and running the processes, it doesn't happen either. Only when we call all 4 of the processes one after the other. It occurs during the ApplyPatch phase when an ANT replace script is called on multiple files. We think it might have something to do with the JVM keeping hold of the file for a split second or two; however, this is meant to have been patched according to the bug notes on Ant. So in short: distribution server == jython == pyro connection == client server == jython == ant script. Error Log: <*snip>\ant\deploy.xml:12: IOException in <*snip>\bin\startGs.sh - java.io.IOException:Failed to delete <*snip>\bin\rep4698373081723114968.tmp while trying to rename it.
at org.apache.tools.ant.taskdefs.Replace.processFile(Replace.java:709) at org.apache.tools.ant.taskdefs.Replace.execute(Replace.java:548) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1212) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1212) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.Extaskdefs.SubAnt.execute(SubAnt.java:302) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at 
sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at net.sf.antcontrib.logic.IfTask.execute(IfTask.java:197) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.TaskAdapter.execute(TaskAdapter.java:154) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:390) at org.apache.tools.ant.Target.performTasks(Target.java:411) at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1360) at org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38) at org.apache.tools.ant.Project.executeTargets(Project.java:1212) at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:441) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302) at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) it at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.taskdefs.Parallel$TaskRunnable.run(Parallel.java:433) at java.lang.Thread.run(Thread.java:619) Caused by: java.io.IOException: Failed to delete <*snip\bin\rep4698373081723114968.tmp while trying to rename it. at org.apache.tools.ant.util.FileUtils.rename(FileUtils.java:1248) at org.apache.tools.ant.taskdefs.Replace.processFile(Replace.java:702) ... 125 more Any help would be appreciated.

    Read the article

  • How to capture live camera frames in RGB with DirectShow

    - by Jonny Boy
I'm implementing live video capture through DirectShow for live processing and display (an Augmented Reality app). I can access the pixels easily enough, but it seems I can't get the SampleGrabber to provide RGB data. The device (an iSight -- running VC++ Express in VMWare) only reports MEDIASUBTYPE_YUY2. After extensive Googling, I still can't figure out whether DirectShow is supposed to provide built-in color space conversion for this sort of thing. Some sites report that there is no YUV<->RGB conversion built in, others report that you just have to call SetMediaType on your ISampleGrabber with an RGB subtype. Any advice is greatly appreciated, I'm going nuts on this one. Code provided below. Please note that the code works, except that it doesn't provide RGB data, and that I'm aware I can implement my own conversion filter, but this is not feasible because I'd have to anticipate every possible device format, and this is a relatively small project. // Playback IGraphBuilder *pGraphBuilder = NULL; ICaptureGraphBuilder2 *pCaptureGraphBuilder2 = NULL; IMediaControl *pMediaControl = NULL; IBaseFilter *pDeviceFilter = NULL; IAMStreamConfig *pStreamConfig = NULL; BYTE *videoCaps = NULL; AM_MEDIA_TYPE **mediaTypeArray = NULL; // Device selection ICreateDevEnum *pCreateDevEnum = NULL; IEnumMoniker *pEnumMoniker = NULL; IMoniker *pMoniker = NULL; ULONG nFetched = 0; HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED); // Create CreateDevEnum to list device hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (PVOID *)&pCreateDevEnum); if (FAILED(hr)) goto ReleaseDataAndFail; // Create EnumMoniker to list devices hr = pCreateDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pEnumMoniker, 0); if (FAILED(hr)) goto ReleaseDataAndFail; pEnumMoniker->Reset(); // Find desired device while (pEnumMoniker->Next(1, &pMoniker, &nFetched) == S_OK) { IPropertyBag *pPropertyBag; TCHAR devname[256]; // bind to IPropertyBag hr = pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pPropertyBag); if (FAILED(hr)) { pMoniker->Release(); continue; } VARIANT varName; VariantInit(&varName); HRESULT hr = pPropertyBag->Read(L"DevicePath", &varName, 0); if (FAILED(hr)) { pMoniker->Release(); pPropertyBag->Release(); continue; } char devicePath[DeviceInfo::STRING_LENGTH_MAX] = ""; wcstombs(devicePath, varName.bstrVal, DeviceInfo::STRING_LENGTH_MAX); if (strcmp(devicePath, deviceId) == 0) { // Bind Moniker to Filter pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)&pDeviceFilter); break; } pMoniker->Release(); pPropertyBag->Release(); } if (pDeviceFilter == NULL) goto ReleaseDataAndFail; // Create sample grabber IBaseFilter *pGrabberF = NULL; hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**)&pGrabberF); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGrabberF->QueryInterface(IID_ISampleGrabber, (void**)&pGrabber); if (FAILED(hr)) goto ReleaseDataAndFail; // Create FilterGraph hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC, IID_IGraphBuilder, (LPVOID *)&pGraphBuilder); if (FAILED(hr)) goto ReleaseDataAndFail; // create CaptureGraphBuilder2 hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC, IID_ICaptureGraphBuilder2, (LPVOID *)&pCaptureGraphBuilder2); if (FAILED(hr)) goto ReleaseDataAndFail; // set FilterGraph hr = pCaptureGraphBuilder2->SetFiltergraph(pGraphBuilder); if (FAILED(hr)) goto ReleaseDataAndFail; // get MediaControl interface hr =
pGraphBuilder->QueryInterface(IID_IMediaControl, (LPVOID *)&pMediaControl); if (FAILED(hr)) goto ReleaseDataAndFail; // Add filters hr = pGraphBuilder->AddFilter(pDeviceFilter, L"Device Filter"); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGraphBuilder->AddFilter(pGrabberF, L"Sample Grabber"); if (FAILED(hr)) goto ReleaseDataAndFail; // Set sample grabber options AM_MEDIA_TYPE mt; ZeroMemory(&mt, sizeof(AM_MEDIA_TYPE)); mt.majortype = MEDIATYPE_Video; mt.subtype = MEDIASUBTYPE_RGB32; hr = pGrabber->SetMediaType(&mt); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGrabber->SetOneShot(FALSE); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGrabber->SetBufferSamples(TRUE); if (FAILED(hr)) goto ReleaseDataAndFail; // Get stream config interface hr = pCaptureGraphBuilder2->FindInterface(NULL, &MEDIATYPE_Video, pDeviceFilter, IID_IAMStreamConfig, (void **)&pStreamConfig); if (FAILED(hr)) goto ReleaseDataAndFail; int streamCapsCount = 0, capsSize, bestFit = -1, bestFitPixelDiff = 1000000000, desiredPixelCount = _width * _height, bestFitWidth = 0, bestFitHeight = 0; float desiredAspectRatio = (float)_width / (float)_height; hr = pStreamConfig->GetNumberOfCapabilities(&streamCapsCount, &capsSize); if (FAILED(hr)) goto ReleaseDataAndFail; videoCaps = (BYTE *)malloc(capsSize * streamCapsCount); mediaTypeArray = (AM_MEDIA_TYPE **)malloc(sizeof(AM_MEDIA_TYPE *) * streamCapsCount); for (int i = 0; i < streamCapsCount; i++) { hr = pStreamConfig->GetStreamCaps(i, &mediaTypeArray[i], videoCaps + capsSize * i); if (FAILED(hr)) continue; VIDEO_STREAM_CONFIG_CAPS *currentVideoCaps = (VIDEO_STREAM_CONFIG_CAPS *)(videoCaps + capsSize * i); int closestWidth = MAX(currentVideoCaps->MinOutputSize.cx, MIN(currentVideoCaps->MaxOutputSize.cx, width)); int closestHeight = MAX(currentVideoCaps->MinOutputSize.cy, MIN(currentVideoCaps->MaxOutputSize.cy, height)); int pixelDiff = ABS(desiredPixelCount - closestWidth * closestHeight); if (pixelDiff < bestFitPixelDiff && ABS(desiredAspectRatio - (float)closestWidth / (float)closestHeight) < 0.1f) { bestFit = i; bestFitPixelDiff = pixelDiff; bestFitWidth = closestWidth; bestFitHeight = closestHeight; } } if (bestFit == -1) goto ReleaseDataAndFail; AM_MEDIA_TYPE *mediaType; hr = pStreamConfig->GetFormat(&mediaType); if (FAILED(hr)) goto ReleaseDataAndFail; VIDEOINFOHEADER *videoInfoHeader = (VIDEOINFOHEADER *)mediaType->pbFormat; videoInfoHeader->bmiHeader.biWidth = bestFitWidth; videoInfoHeader->bmiHeader.biHeight = bestFitHeight; //mediaType->subtype = MEDIASUBTYPE_RGB32; hr = pStreamConfig->SetFormat(mediaType); if (FAILED(hr)) goto ReleaseDataAndFail; pStreamConfig->Release(); pStreamConfig = NULL; free(videoCaps); videoCaps = NULL; free(mediaTypeArray); mediaTypeArray = NULL; // Connect pins IPin *pDeviceOut = NULL, *pGrabberIn = NULL; if (FindPin(pDeviceFilter, PINDIR_OUTPUT, 0, &pDeviceOut) && FindPin(pGrabberF, PINDIR_INPUT, 0, &pGrabberIn)) { hr = pGraphBuilder->Connect(pDeviceOut, pGrabberIn); if (FAILED(hr)) goto ReleaseDataAndFail; } else { goto ReleaseDataAndFail; } // start playing hr = pMediaControl->Run(); if (FAILED(hr)) goto ReleaseDataAndFail; hr = pGrabber->GetConnectedMediaType(&mt); // Set dimensions width = bestFitWidth; height = bestFitHeight; _width = bestFitWidth; _height = bestFitHeight; // Allocate pixel buffer pPixelBuffer = (unsigned *)malloc(width * height * 4); // Release objects pGraphBuilder->Release(); pGraphBuilder = NULL; pEnumMoniker->Release(); pEnumMoniker = NULL; pCreateDevEnum->Release(); pCreateDevEnum =
NULL; return true;

    Read the article

  • xml to xsl transformation

    - by amirin
    Hi, I have a requirement to extract the uniquie values from the different nodes of the given xml using xsl transformation. XML FILE:- var IPClaimCausesNice = ""; function changeIPClaimCause(){ if(CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Diagnosis") { IPClaimCausesNice = "diagnosis" } if(CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Illness") { IPClaimCausesNice = "illness" } if(CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Accident") { IPClaimCausesNice = "accident" } return IPClaimCausesNice; } <section id="1903879316" name="Logos"> <fraglink id="605609862" resid="1235000151"> <argvalue name="CommJob"> <var name="CommJob" type="Th_1235001170_CommJob" /> </argvalue> </fraglink> </section> <section id="13483397" name="Address Block"> <fraglink id="563800610" resid="986000123"> <argvalue name="PersonInformation"> <var name="AddresseePersonInformation" type="Th_1235000929_PersonInformation" /> </argvalue> </fraglink> </section> <section id="1093480468" name="Details"> <fraglink id="460316501" resid="1195000163"> <argvalue name="currentDateTime"> <var name="getSystemVariables.getCurrentDate" type="date" /> </argvalue> <argvalue name="CommJob"> <var name="CommJob" type="Th_1235000929_CommJob" /> </argvalue> <argvalue name="ShowDOB" /> <argvalue name="ShowYourRef" /> <argvalue name="YourRefLabel" /> </fraglink> <fraglink id="1026044336" resid="1235000070"> <argvalue name="brandKey"> <var name="CommJob.commJobDetails.brandingKey" type="string" /> </argvalue> <argvalue name="brandSponsor"> <var name="CommJob.client.policy.policyDetails.brandSponsor" type="string" /> </argvalue> </fraglink> </section> <section id="2092948772" name="Important info"> <frag id="1180564368" name="frag" no-match="error" type="text"> <edition id="1178777425" name="Any" withdrawn="False"> <edition-content> <p style="bodyTableHeader" align="left" xml:space="preserve">Important information</p> <p style="body" xml:space="preserve">In accordance with the terms and conditions of your policy, your claim has been classified as a <iif><expression><script language="JavaScript">CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Diagnosis" || CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Ill health"</script><description>the IPClaimCause of the CommJob's client policy insurances Insurance Coverages IPCover Claim equals "Diagnosis" or the IPClaimCause of the CommJob's client policy insurances Insurance Coverages IPCover Claim equals "Ill health"sicknessCommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Accident"the IPClaimCause of the CommJob's client policy insurances Insurance Coverages IPCover Claim equals "Accident"accident. Please assist us Please quote your policy and claim numbers / when returning your forms. Your claim has been received Your Thank you for sending your Initial Claim Form which we received on . We are sorry to hear of your recent . 
Our assessmentThe attending doctor’s statement indicates you are claiming benefits as a result of , CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Diagnosis"the IPClaimCause of the CommJob's client policy Insurance Coverages first Coverage Claim equals "Diagnosis"which was diagnosedCommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Accident"the IPClaimCause of the CommJob's client policy Insurance Coverages first Coverage Claim equals "Accident"which first occuredCommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Ill health"the IPClaimCause of the CommJob's client policy Insurance Coverages first Coverage Claim equals "Ill health"with symptoms commencing on . We note you ceased all work on and consulted your doctor IsNotMissing(CommJob.placeHolders.date.date5)the date5 of the CommJob's placeHolders date is not missingon this day also regarding your condition. Further information is required We’ve enclosed the following forms. Please complete these and return them to us so that we can continue assessing your claim. Progress claim form Attached questionnaire Authority to the Health Insurance Commission Medical authority <<other/ free format>> <<other/ free format>> Please be advised we’ve CommJob.placeHolders._boolean.boolean1 == truethe CommJob's placeHolders boolean is boolean1also requested the following information: Medical report from Dr Medicare history report from the Health Insurance Commission <<other/ free format>> <<other/ free format>> As your claim forms have been submitted months after you ceased work, our ability to properly assess your claim may have been prejudiced. In order to complete our assessment of your claim, the following information is required within 30 days of this letter: Reason for late lodgement of claim. Reason(s) you ceased work on (eg redundancy or due to medical condition). Name and contact details of all doctors and specialists you have consulted since you ceased work. Copies of any medical, radiology, pathology or other reports in your possession. Details of all treatment you have received since you ceased work. Whether you returned to work (paid or unpaid) in either a full-time or part-time capacity. If so, please provide the dates you worked, hours you worked, duties you performed and any income you received. Financial information for any other related entities (if applicable). If you are unable to supply the above information, please contact us by WriteText(FormatDateTime(DateAdd(getSystemVariables.getCurrentDate,"day",30),"dd MMMM yyyy")). To help in the ongoing assessment of your claim, you are required to be under the regular care and attendance of a medical practitioner. We’ve enclosed a Progress Claim Form which needs to be completed and returned to us by . False False False CC: Different nodes to pick the values:- 1. 2.path form 5. Values to be picked form the xml node and display in HTML is like CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause CommJob.commJobDetails.stockType CommJob.commJobDetails.targetClient.targetClientName CommJob.client.policy.policyDetails.policyStatus CommJob.client.policy.policyDetails.productType CommJob.commJobDetails.targetClient.targetClientName ........etc can any one help me to provide the solution. This xsl transformation doesn't pick correctly only the values <xsl:template match="@*"/> Any help on this will great.

    Read the article

  • Having problem with C++ file handling

    - by caramel1991
Our lecturer has given us a task. I've attempted it and tried everything I can, but I still struggle with one of the problems in it. Here goes the question: The company you work at receives a monthly report in a text format. The report contains the following information. • Department Name • Head of Department Name • Month • Minimum spending of the month • Maximum spending of the month Your program is to obtain the name of the input file from the user. Implement a structure to represent the data. Once the file has been read into your program, print out the following statistics for the user: • List which department has the minimum spending, month by month • List which department has the maximum spending, month by month Write the information into a file called "MaxMin.txt". Then do processing so that the Department Name, Head of Department Name, Minimum spending and Maximum spending are written to separate files based on the month, e.g. Jan, Feb, March and so on. Our lecturer also sent us a text file with the content:

Engineering Bill Jan 2000 15000
IT Jack Jan 300 20000
HR Jill Jan 1500 10000
Engineering Bill Feb 5000 45000
IT Jack Feb 4500 7000
HR Jill Feb 5600 60000
Engineering Bill Mar 5000 45000
IT Jack Mar 4500 7000
HR Jill Mar 5600 60000
Engineering Bill Apr 5000 45000
IT Jack Apr 4500 7000
HR Jill Apr 5600 60000
Engineering Bill May 2000 15000
IT Jack May 300 20000
HR Jill May 1500 10000
Engineering Bill Jun 2000 15000
IT Jack Jun 300 20000
HR Jill Jun 1500 10000

And here's the C++ code I've written: #include <iostream> #include <fstream> #include <string> using namespace std; struct Record { string depName; string head; string month; float max; float min; string name; }myRecord[19]; int main () { string line; ofstream minmax,jan,feb,mar,apr,may,jun; char a[50]; char b[50]; int i = 0,j,k; float temp; //float maxjan=myRecord[0].max,maxfeb=myRecord[0].max,maxmar=myRecord[0].max,maxapr=myRecord[0].max,maxmay=myRecord[0].max,maxjune=myRecord[0].max; float minjan=myRecord[1].min,minfeb=myRecord[1].min,minmar=myRecord[1].min,minapr=myRecord[1].min,minmay=myRecord[1].min,minjune=myRecord[1].min; float maxjan=0,maxfeb=0,maxmar=0,maxapr=0,maxmay=0,maxjune=0; //float minjan=0,minfeb=0,minmar=0,minapr=0,minmay=0,minjune=0; string maxjanDep,maxfebDep,maxmarDep,maxaprDep,maxmayDep,maxjunDep; string minjanDep,minfebDep,minmarDep,minaprDep,minmayDep,minjunDep; cout<<"Enter file name: "; cin>>a; ifstream myfile (a); //minmax.open ("MaxMin.txt"); if (myfile.is_open()){ while (! myfile.eof()){ myfile>>myRecord[i].depName>>myRecord[i].head>>myRecord[i].month>>myRecord[i].min>>myRecord[i].max; cout << myRecord[i].depName<<"\t"< cout<<"Enter file name: "; cin>>b; ifstream myfile1 (b); minmax.open ("MaxMin.txt"); jan.open ("Jan.txt"); feb.open ("Feb.txt"); mar.open ("March.txt"); apr.open ("April.txt"); may.open ("May.txt"); jun.open ("Jun.txt"); if (myfile1.is_open()){ while (!
myfile1.eof()){ myfile1>>myRecord[i].depName>>myRecord[i].head>>myRecord[i].month>>myRecord[i].min>>myRecord[i].max; if (myRecord[i].month == "Jan"){ jan<< myRecord[i].depName<<"\t"< if (maxjan< myRecord[i].max){ maxjan=myRecord[i].max; maxjanDep=myRecord[i].depName;} //if (minjan > myRecord[i].min){ // minjan=myRecord[i].min; //minjanDep=myRecord[i].depName; //} for (k=1;k<=3;k++){ for (j=0;j<2;j++){ if (myRecord[j].min>myRecord[j+1].min){ temp=myRecord[j].min; myRecord[j].min=myRecord[j+1].min; myRecord[j+1].min=temp; minjanDep=myRecord[j].depName; }}} } if (myRecord[i].month == "Feb"){ feb<< myRecord[i].depName<<"\t"< //if (minfeb>myRecord[i].min){ //minfeb=myRecord[i].min; //minfebDep=myRecord[i].depName; //} for (k=1;k<=3;k++){ for (j=0;j<2;j++){ if (myRecord[j].min>myRecord[j+1].min){ temp=myRecord[j].min; myRecord[j].min=myRecord[j+1].min; myRecord[j+1].min=temp; minfebDep=myRecord[j+1].depName; }}} } if (myRecord[i].month == "Mar"){ mar<< myRecord[i].depName<<"\t"< if (myRecord[i].month == "Apr"){ apr<< myRecord[i].depName<<"\t"< if (minapr>myRecord[i].min){ minapr=myRecord[i].min; minaprDep=myRecord[i].min;} } if (myRecord[i].month == "May"){ may< if (minmay>myRecord[i].min){ minmay=myRecord[i].min; minmayDep=myRecord[i].depName;} } if (myRecord[i].month == "Jun"){ jun<< myRecord[i].depName<<"\t"< if (minjune>myRecord[i].min){ minjune=myRecord[i].min; minjunDep=myRecord[i].depName;} } i++; myfile.close(); } minmax<<"department that has maximum spending at jan "< else{ cout << "Unable to open file"< } Sorry, inside that code the #include lines were treated as HTML tags when I posted, so I couldn't type them; they should be iostream along with another two, fstream and string (restored above). Some of the stream and comparison operators were also eaten; I've restored the obvious ones. My problem here is that I can't seem to get the minimum spending. I've tried all I can but I'm still stuck on it. Any idea?? THANK YOU!

    Read the article

  • how to debug "deep" crashes in Android?

    - by eerok512
Hi All, I've been trying to debug an android crash that is occurring without a Java stack trace... Java stack trace bugs are very easy for me to fix... but this bug I'm getting seems to be crashing inside the "NDK" or whatever the deep internals of Android are called... I've made no modifications to the NDK btw... I just dunno what else to call that layer hehe. Anyway, I'm mainly looking for advice on deep-debug methods, rather than help with this specific problem... because I doubt I can post all the source code involved... so really I just need to know how to set breakpoints at the deep layers, or whatever other methods there are to trace deep crashes to their source... so I will briefly describe the bug and then post a LogCat.

I have an app with 7 Activities:
Activity_INTRO
Activity_EULA
Activity_MAIN
Activity_Contact
Activity_News
Activity_Library
Activity_More

INTRO is the initiating one... it fades in some company logos... after displaying them for a set time it jumps to the EULA activity... after the user accepts the EULA, it jumps to MAIN... MAIN then creates a TabHost and populates it with the 4 remaining activities. Now here's the thing... when I click on, say, the More tab of the TabHost, the app pauses for a few seconds and then hard-crashes... no Java stack trace, but an actual ASM-level trace with the registers and IP and stack... the same thing occurs no matter which tab I select, Contact, News, Library, More... all of them crash with the same hard crash. If however I set the manifest to start the app at Activity_MAIN, bypassing the INTRO and EULA, then these crashes do not occur... so something is lingering from those opening activities that is somehow hosing the TabHost'ed Activities... and I'm wondering what the hell that could be... because I'm using finish() on those activities when they need to jump... in fact here is how I'm doing it; let me know if you see any bugs. When jumping from INTRO to EULA I do:

//Display the EULA
Intent newIntent = new Intent (avi, Activity_EULA.class);
startActivity (newIntent);
finish();

and EULA to MAIN:

Intent newIntent = new Intent (this, Activity_Main.class);
startActivity (newIntent);
finish();

Anyway, here is the hard crash log... please let me know if there is some way I can reverse engineer either /system/lib/libcutils.so or /system/lib/libandroid_runtime.so, because I think the crash is happening in one of them... I think it's happening in libandroid_runtime, in fact...
anyway on to the log: 12-25 00:56:07.322: INFO/DEBUG(551): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** 12-25 00:56:07.332: INFO/DEBUG(551): Build fingerprint: 'generic/sdk/generic/:1.5/CUPCAKE/150240:eng/test-keys' 12-25 00:56:07.362: INFO/DEBUG(551): pid: 722, tid: 723 >>> com.killerapps.chokes <<< 12-25 00:56:07.362: INFO/DEBUG(551): signal 11 (SIGSEGV), fault addr 00000004 12-25 00:56:07.362: INFO/DEBUG(551): r0 00000004 r1 40021800 r2 00000004 r3 ad3296c5 12-25 00:56:07.372: INFO/DEBUG(551): r4 00000000 r5 00000000 r6 ad342da5 r7 41039fb8 12-25 00:56:07.372: INFO/DEBUG(551): r8 100ffcb0 r9 41039fb0 10 41e014a0 fp 00001071 12-25 00:56:07.382: INFO/DEBUG(551): ip ad35b874 sp 100ffc98 lr ad3296cf pc afb045a8 cpsr 00000010 12-25 00:56:07.552: INFO/DEBUG(551): #00 pc 000045a8 /system/lib/libcutils.so 12-25 00:56:07.572: INFO/DEBUG(551): #01 lr ad3296cf /system/lib/libandroid_runtime.so 12-25 00:56:07.582: INFO/DEBUG(551): stack: 12-25 00:56:07.582: INFO/DEBUG(551): 100ffc58 00000000 12-25 00:56:07.592: INFO/DEBUG(551): 100ffc5c 001c5278 [heap] 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc60 000000da 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc64 0016c778 [heap] 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc68 100ffcc8 12-25 00:56:07.602: INFO/DEBUG(551): 100ffc6c 001c5278 [heap] 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc70 427d1ac0 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc74 000000c1 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc78 40021800 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc7c 000000c2 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc80 00000000 12-25 00:56:07.612: INFO/DEBUG(551): 100ffc84 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc88 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc8c 00000000 12-25 00:56:07.622: INFO/DEBUG(551): 100ffc90 df002777 12-25 00:56:07.632: INFO/DEBUG(551): 100ffc94 e3a070ad 12-25 00:56:07.632: INFO/DEBUG(551): #00 100ffc98 00000000 12-25 00:56:07.632: INFO/DEBUG(551): 100ffc9c ad3296cf /system/lib/libandroid_runtime.so 12-25 00:56:07.632: INFO/DEBUG(551): 100ffca0 100ffcd0 12-25 00:56:07.642: INFO/DEBUG(551): 100ffca4 ad342db5 /system/lib/libandroid_runtime.so 12-25 00:56:07.642: INFO/DEBUG(551): 100ffca8 410a79d0 12-25 00:56:07.642: INFO/DEBUG(551): 100ffcac ad00e3b8 /system/lib/libdvm.so 12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb0 410a79d0 12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb4 0016bac0 [heap] 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcb8 ad342da5 /system/lib/libandroid_runtime.so 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcbc 40021800 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc0 410a79d0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc4 afe39dd0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc8 100ffcd0 12-25 00:56:07.662: INFO/DEBUG(551): 100ffccc ad040a8d /system/lib/libdvm.so 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd0 41039fb0 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd4 420000f8 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd8 ad342da5 /system/lib/libandroid_runtime.so 12-25 00:56:07.672: INFO/DEBUG(551): 100ffcdc 100ffd48 12-25 00:56:07.852: DEBUG/dalvikvm(722): GC freed 367 objects / 15144 bytes in 210ms 12-25 00:56:08.081: DEBUG/InetAddress(722): www.akillerapp.com: 74.86.47.202 (family 2, proto 6) 12-25 00:56:08.242: DEBUG/dalvikvm(722): GC freed 62 objects / 2328 bytes in 122ms 12-25 00:56:08.771: DEBUG/dalvikvm(722): GC freed 245 objects / 11744 bytes in 179ms 12-25 00:56:09.131: INFO/ActivityManager(577): Process com.killerapps.chokes (pid 722) has died. 
12-25 00:56:09.171: INFO/WindowManager(577): WIN DEATH: Window{43719320 com.killerapps.chokes/com.killerapps.chokes.Activity_Main paused=false} 12-25 00:56:09.251: INFO/DEBUG(551): debuggerd committing suicide to free the zombie! 12-25 00:56:09.291: DEBUG/Zygote(553): Process 722 terminated by signal (11) 12-25 00:56:09.311: INFO/DEBUG(781): debuggerd: Jun 30 2009 17:00:51 12-25 00:56:09.331: WARN/InputManagerService(577): Got RemoteException sending setActive(false) notification to pid 722 uid 10020

    Read the article

  • Weird vps server issue

    - by anon-user0
I have an unmanaged Linux VPS running Ubuntu 11.10 (Oneiric Ocelot). I have LNMP installed, also php-fpm, php-apc, varnish, memcache. I have (or rather had) several live sites on it. Under normal load the server uses ~700 MB of memory, but since last night it's using only ~20 MB, and a lot of the services seem to be down (according to htop); I only see nginx working, and mysql starts up and goes down every few minutes in a loop. Here is some information on the server that might help you help me:

root@server:~# uname -a Linux server 2.6.18-308.el5.028stab099.3 #1 SMP Wed Mar 7 15:56:00 MSK 2012 i686 i686 i386 GNU/Linux
-
root@server:~# ifconfig -a lo Link encap:Local Loopback LOOPBACK MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:12515 errors:0 dropped:0 overruns:0 frame:0 TX packets:9541 errors:0 dropped:1 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:7191214 (7.1 MB) TX bytes:536726 (536.7 KB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:176.31.158.78 P-t-P:176.31.158.78 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
-
root@server:~# netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 *:http-alt *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp6 0 0 [::]:http-alt [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 9307368 @/com/ubuntu/upstart
-
htop: http://i.stack.imgur.com/NHKYX.png

EDIT: Stressed, mind was not working. Adding log:

root@server:~# less /var/log/syslog Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 05:39:01 server CRON[9298]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
Jun 27 05:40:01 server CRON[9463]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
Jun 27 05:46:21 server sm-msp-queue[9480]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:19:14, xdelay=00:06:18, mailer=relay, pri=122407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 05:52:39 server sm-msp-queue[9480]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:06:32, xdelay=00:06:18, mailer=relay, pri=842407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 06:00:01 server CRON[15671]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
Jun 27 06:06:22 server sm-msp-queue[15690]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:39:15, xdelay=00:06:18, mailer=relay, pri=212407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 06:09:01 server CRON[18114]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
Jun 27 06:12:40 server sm-msp-queue[15690]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:26:33, xdelay=00:06:18, mailer=relay, pri=932407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 06:20:02 server CRON[21888]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
Jun 27 06:26:22 server sm-msp-queue[21907]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:59:15, xdelay=00:06:18, mailer=relay, pri=302407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 06:27:02 server CRON[24021]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jun 27 06:32:40 server sm-msp-queue[21907]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:46:33, xdelay=00:06:18, mailer=relay, pri=1022407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 06:39:01 server CRON[27941]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
Jun 27 06:40:02 server CRON[28110]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
Jun 27 06:46:22 server sm-msp-queue[28125]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:19:15, xdelay=00:06:18, mailer=relay, pri=392407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:06:33, xdelay=00:06:18, mailer=relay, pri=1112407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: q5R2e4uo028125: sender notify: Warning: could not send message for past 4 hours
Jun 27 06:52:44 server sm-msp-queue[28125]: q5R2e4uo028125: to=root, delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=33690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 07:00:02 server CRON[1543]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
Jun 27 07:06:21 server sm-msp-queue[1560]: q5R2e4uo028125: to=root, delay=00:13:41, xdelay=00:06:18, mailer=relay, pri=123690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 07:09:01 server CRON[3986]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete)
Jun 27 07:12:39 server sm-msp-queue[1560]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:45:32, xdelay=00:06:18, mailer=relay, pri=482407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 07:18:57 server sm-msp-queue[1560]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:32:50, xdelay=00:06:18, mailer=relay, pri=1202407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 07:20:02 server CRON[7760]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
Jun 27 07:26:22 server sm-msp-queue[7775]: q5R2e4uo028125: to=root, delay=00:33:42, xdelay=00:06:18, mailer=relay, pri=213690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 07:27:01 server CRON[9887]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Jun 27 07:32:40 server sm-msp-queue[7775]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=02:05:33, xdelay=00:06:18, mailer=relay, pri=572407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 07:38:58 server sm-msp-queue[7775]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:52:51, xdelay=00:06:18, mailer=relay, pri=1292407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1]
Jun 27 07:39:01 server CRON[13813]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth :

root@server:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/simfs             20G  2.3G   18G  12% /

-

Jun 26 16:22:41 server varnishd[1413]: Child (32425) died signal=3
Jun 26 16:22:41 server varnishd[1413]: child (21687) Started
Jun 26 16:22:41 server varnishd[1413]: Child (21687) said Child starts
Jun 26 16:22:41 server varnishd[1413]: Child (21687) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824
Jun 26 16:34:28 server -- MARK --
Jun 26 16:54:29 server -- MARK --
Jun 26 17:14:29 server -- MARK --
Jun 26 17:34:29 server -- MARK --
Jun 26 17:54:29 server -- MARK --
Jun 26 18:14:29 server -- MARK --
Jun 26 18:34:29 server -- MARK --
Jun 26 18:54:29 server -- MARK --
Jun 26 19:14:29 server -- MARK --
Jun 26 19:34:29 server -- MARK --
Jun 26 19:54:29 server -- MARK --
Jun 26 20:14:29 server -- MARK --
Jun 26 20:34:29 server -- MARK --
Jun 26 20:48:12 server exiting on signal 15
Jun 26 20:51:58 server syslogd 1.5.0#6ubuntu1: restart.
Jun 26 20:52:01 server varnishd[1324]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit
Jun 26 21:11:58 server -- MARK --
Jun 26 21:31:58 server -- MARK --
Jun 26 21:51:58 server -- MARK --
Jun 26 22:11:58 server -- MARK --
Jun 26 22:31:58 server -- MARK --
Jun 26 22:51:58 server -- MARK --
Jun 26 23:11:58 server -- MARK --
Jun 26 23:31:58 server -- MARK --
Jun 26 23:51:58 server -- MARK --
Jun 27 00:11:58 server -- MARK --
Jun 27 00:23:42 server exiting on signal 15
Jun 27 02:21:10 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 02:21:12 server varnishd[1341]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit
Jun 27 02:41:10 server -- MARK --
Jun 27 02:46:41 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 03:20:44 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 03:20:46 server varnishd[1238]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit
Jun 27 03:20:46 server varnishd[1238]: child (1239) Started
Jun 27 03:20:46 server varnishd[1238]: Child (1239) said Child starts
Jun 27 03:20:46 server varnishd[1238]: Child (1239) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824
Jun 27 03:32:52 server exiting on signal 15
Jun 27 03:33:16 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 03:33:31 server varnishd[1372]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit
Jun 27 03:53:16 server -- MARK --
Jun 27 04:13:16 server -- MARK --
Jun 27 04:33:16 server -- MARK --
Jun 27 04:53:16 server -- MARK --
Jun 27 05:13:16 server -- MARK --
Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart.
Jun 27 05:53:17 server -- MARK --
Jun 27 06:13:17 server -- MARK --
Jun 27 06:33:17 server -- MARK --
Jun 27 06:53:17 server -- MARK --
Jun 27 07:13:17 server -- MARK --
Jun 27 07:33:17 server -- MARK --
Jun 27 07:53:17 server -- MARK --
Jun 27 08:13:17 server -- MARK --
Jun 27 08:33:17 server -- MARK --
Jun 27 08:53:17 server -- MARK --
Jun 27 09:13:17 server -- MARK --
Jun 27 09:33:17 server -- MARK --
Jun 27 09:53:17 server -- MARK --
Jun 27 10:13:17 server -- MARK --
Jun 27 10:33:17 server -- MARK --
Jun 27 10:53:17 server -- MARK --
Jun 27 11:13:17 server -- MARK --
Jun 27 11:33:17 server -- MARK --
Jun 27 11:53:18 server -- MARK --
Jun 27 12:13:18 server -- MARK --
Jun 27 12:33:18 server -- MARK --
Jun 27 12:53:18 server -- MARK --
Jun 27 13:13:18 server -- MARK --
Jun 27 13:33:18 server -- MARK --
Jun 27 13:53:18 server -- MARK --
Jun 27 14:13:18 server -- MARK --
Jun 27 14:33:18 server -- MARK --
Jun 27 14:53:18 server -- MARK --

--

root@server:~# cat /var/log/nginx/error.log
2012/06/27 03:32:54 [alert] 1199#0: worker process 1203 exited on signal 9
2012/06/27 03:32:54 [alert] 1199#0: worker process 1200 exited on signal 9
2012/06/27 03:32:54 [alert] 1199#0: worker process 1201 exited on signal 9
2012/06/27 03:32:54 [alert] 1199#0: worker process 1202 exited on signal 9

root@server:~# cat /var/log/nginx/access.log
31.210.99.87 - - [27/Jun/2012:09:09:08 +0400] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 172 "-" "-"
88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cms/cmx.jsp HTTP/1.1" 301 184 "-" "-"
88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /iesvc/iesvc.jsp HTTP/1.1" 301 184 "-" "-"
88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cmd2/index.jsp HTTP/1.1" 301 184 "-" "-"
88.191.138.103 - - [27/Jun/2012:13:27:09 +0400] "GET /cmd/index.jsp HTTP/1.1" 301 184 "-" "-"
58.97.147.197 - - [27/Jun/2012:17:17:19 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5"
58.97.147.197 - - [27/Jun/2012:17:17:37 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5"
58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-"
58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-"
58.97.147.197 - - [27/Jun/2012:17:17:48 +0400] "-" 400 0 "-" "-"

-

root@server:~# cat /var/log/daemon.log
Jun 26 20:48:10 server xinetd[1177]: Exiting...
Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28]
Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26]
Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25]
Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26]
Jun 26 20:51:58 server xinetd[1174]: removing chargen
Jun 26 20:51:58 server xinetd[1174]: removing chargen
Jun 26 20:51:58 server xinetd[1174]: removing daytime
Jun 26 20:51:58 server xinetd[1174]: removing daytime
Jun 26 20:51:58 server xinetd[1174]: removing discard
Jun 26 20:51:58 server xinetd[1174]: removing discard
Jun 26 20:51:58 server xinetd[1174]: removing echo
Jun 26 20:51:58 server xinetd[1174]: removing echo
Jun 26 20:51:58 server xinetd[1174]: removing time
Jun 26 20:51:58 server xinetd[1174]: removing time
Jun 26 20:51:58 server xinetd[1174]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in.
Jun 26 20:51:58 server xinetd[1174]: Started working: 0 available services
Jun 26 20:52:01 server vnstatd[1330]: vnStat daemon 1.11 started.
Jun 26 20:52:01 server vnstatd[1330]: Monitoring: venet0
Jun 27 00:23:41 server xinetd[1174]: Exiting...
Jun 27 02:21:12 server vnstatd[1349]: vnStat daemon 1.11 started.
Jun 27 02:21:12 server vnstatd[1349]: Monitoring: venet0
Jun 27 03:20:44 server xinetd[1166]: attribute: disable should not be in default section [file=/etc/xinetd.conf] [line=12]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/chargen [file=/etc/xinetd.conf] [line=15]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25]
Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26]
Jun 27 03:20:44 server xinetd[1166]: removing chargen
Jun 27 03:20:44 server xinetd[1166]: removing chargen
Jun 27 03:20:44 server xinetd[1166]: removing daytime
Jun 27 03:20:44 server xinetd[1166]: removing daytime
Jun 27 03:20:44 server xinetd[1166]: removing discard
Jun 27 03:20:44 server xinetd[1166]: removing discard
Jun 27 03:20:44 server xinetd[1166]: removing echo
Jun 27 03:20:44 server xinetd[1166]: removing echo
Jun 27 03:20:44 server xinetd[1166]: removing time
Jun 27 03:20:44 server xinetd[1166]: removing time
Jun 27 03:20:44 server xinetd[1166]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in.
Jun 27 03:20:44 server xinetd[1166]: Started working: 0 available services
Jun 27 03:20:46 server vnstatd[1249]: vnStat daemon 1.11 started.
Jun 27 03:20:46 server vnstatd[1249]: Monitoring: venet0
Jun 27 03:32:41 server xinetd[1166]: Exiting...
Jun 27 03:33:32 server vnstatd[1380]: vnStat daemon 1.11 started.
Jun 27 03:33:32 server vnstatd[1380]: Monitoring: venet0
root@server:~#

-

Anything else you need, let me know.
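Two things stand out in these logs: every sm-msp-queue delivery attempt is deferred with "Connection timed out with [127.0.0.1]", which means nothing is answering SMTP on localhost, and the nginx workers all exited on signal 9, which on a small VPS often points at the kernel OOM killer. A quick diagnostic sketch (assuming the Debian/Ubuntu-style sendmail layout the cron entries suggest):

# Is anything listening on the local SMTP port? The repeated
# "Deferred: Connection timed out with [127.0.0.1]" entries suggest not.
netstat -tlnp | grep ':25'

# If nothing is listening, start the sendmail MTA itself (the cron jobs
# only run the MSP queue side, which needs a daemon to hand mail to):
/etc/init.d/sendmail start

# Flush the stuck client queue and watch the deferred messages drain:
sendmail -Ac -q -v

# And check whether the OOM killer was behind the signal-9 nginx exits:
dmesg | grep -i 'out of memory'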

    Read the article

  • iTunes 9.0.2 hangs on launch on Mac OS X 10.6.2

    - by dlamblin
    My iTunes 9.0.2 hangs on launch in OS X 10.6.2. This doesn't happen all the time, only if I've been running for a while. Then it will recur until I restart. Similarly Safari 4.0.4 will hang in the flash player plugin when about to play a video. If I restart both these problems go away until later. Based on this crash dump I am suspecting Audio Hijack Pro. I will try to install a newer version of the driver involved, but so far I haven't had much luck. I have uninstalled the Flash Plugin (10.0.r42 and r32) but clearly I want it in the long run. This is iTunes' crash report. Date/Time: 2009-12-14 19:56:02 -0500 OS Version: 10.6.2 (Build 10C540) Architecture: x86_64 Report Version: 6 Command: iTunes Path: /Applications/iTunes.app/Contents/MacOS/iTunes Version: 9.0.2 (9.0.2) Build Version: 2 Project Name: iTunes Source Version: 9022501 Parent: launchd [120] PID: 16878 Event: hang Duration: 3.55s (sampling started after 2 seconds) Steps: 16 (100ms sampling interval) Pageins: 5 Pageouts: 0 Process: iTunes [16878] Path: /Applications/iTunes.app/Contents/MacOS/iTunes UID: 501 Thread 8f96000 User stack: 16 ??? (in iTunes + 6633) [0x29e9] 16 ??? (in iTunes + 6843) [0x2abb] 16 ??? (in iTunes + 11734) [0x3dd6] 16 ??? (in iTunes + 44960) [0xbfa0] 16 ??? (in iTunes + 45327) [0xc10f] 16 ??? (in iTunes + 2295196) [0x23159c] 16 ??? (in iTunes + 103620) [0x1a4c4] 16 ??? (in iTunes + 105607) [0x1ac87] 16 ??? (in iTunes + 106442) [0x1afca] 16 OpenAComponent + 433 (in CarbonCore) [0x972e9dd0] 16 CallComponentOpen + 43 (in CarbonCore) [0x972ebae7] 16 CallComponentDispatch + 29 (in CarbonCore) [0x972ebb06] 16 DefaultOutputAUEntry + 319 (in CoreAudio) [0x70031117] 16 AUGenericOutputEntry + 15273 (in CoreAudio) [0x7000e960] 16 AUGenericOutputEntry + 13096 (in CoreAudio) [0x7000e0df] 16 AUGenericOutputEntry + 9628 (in CoreAudio) [0x7000d353] 16 ??? [0xe0c16d] 16 ??? [0xe0fdf8] 16 ??? [0xe0e1e7] 16 ahs_hermes_CoreAudio_init + 32 (in Instant Hijack Server) [0x13fc7e9] 16 semaphore_wait_signal_trap + 10 (in libSystem.B.dylib) [0x9798e922] Kernel stack: 16 semaphore_wait_continue + 0 [0x22a0a5] Thread 9b9eb7c User stack: 16 thread_start + 34 (in libSystem.B.dylib) [0x979bbe42] 16 _pthread_start + 345 (in libSystem.B.dylib) [0x979bbfbd] 16 ??? 
(in iTunes + 4011870) [0x3d475e] 16 CFRunLoopRun + 84 (in CoreFoundation) [0x993497a4] 16 CFRunLoopRunSpecific + 452 (in CoreFoundation) [0x99343864] 16 __CFRunLoopRun + 2079 (in CoreFoundation) [0x9934477f] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x9798e8da] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 9bc8b7c User stack: 16 start_wqthread + 30 (in libSystem.B.dylib) [0x979b4336] 16 _pthread_wqthread + 390 (in libSystem.B.dylib) [0x979b44f1] 16 _dispatch_worker_thread2 + 234 (in libSystem.B.dylib) [0x979b4a68] 16 _dispatch_queue_invoke + 163 (in libSystem.B.dylib) [0x979b4cc3] 16 kevent + 10 (in libSystem.B.dylib) [0x979b50ea] Kernel stack: 16 kevent + 97 [0x471745] Binary Images: 0x1000 - 0xbecfea com.apple.iTunes 9.0.2 (9.0.2) <1F665956-0131-39AF-F334-E29E510D42DA> /Applications/iTunes.app/Contents/MacOS/iTunes 0x13f6000 - 0x1402ff7 com.rogueamoeba.audio_hijack_server.hermes 2.2.2 (2.2.2) <9B29AE7F-6951-E63F-616A-482B62179A5C> /usr/local/hermes/modules/Instant Hijack Server.hermesmodule/Contents/MacOS/Instant Hijack Server 0x70000000 - 0x700cbffb com.apple.audio.units.Components 1.6.1 (1.6.1) <600769A2-479A-CA6E-A214-C8766F7CBD0F> /System/Library/Components/CoreAudio.component/Contents/MacOS/CoreAudio 0x97284000 - 0x975a3fe7 com.apple.CoreServices.CarbonCore 861.2 (861.2) <A9077470-3786-09F2-E0C7-F082B7F97838> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x9798e000 - 0x97b32feb libSystem.B.dylib ??? (???) <D45B91B2-2B4C-AAC0-8096-1FC48B7E9672> /usr/lib/libSystem.B.dylib 0x99308000 - 0x9947ffef com.apple.CoreFoundation 6.6.1 (550.13) <AE9FC6F7-F0B2-DE58-759E-7DB89C021A46> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation Process: AirPort Base Station Agent [142] Path: /System/Library/CoreServices/AirPort Base Station Agent.app/Contents/MacOS/AirPort Base Station Agent UID: 501 Thread 8b1d3d4 DispatchQueue 1 User stack: 16 ??? (in AirPort Base Station Agent + 5344) [0x1000014e0] 16 ??? (in AirPort Base Station Agent + 70666) [0x10001140a] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 8b80000 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 6e3c7a8 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 16 workqueue_thread_yielded + 562 [0x4cb6ae] Thread 8b0f3d4 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 select$DARWIN_EXTSN + 10 (in libSystem.B.dylib) [0x7fff878b09e2] Kernel stack: 16 sleep + 52 [0x487f93] Thread 8bcb000 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 ??? (in AirPort Base Station Agent + 71314) [0x100011692] 16 ??? 
(in AirPort Base Station Agent + 13712) [0x100003590] 16 ??? (in AirPort Base Station Agent + 71484) [0x10001173c] 16 __semwait_signal + 10 (in libSystem.B.dylib) [0x7fff878a79ee] Kernel stack: 16 semaphore_wait_continue + 0 [0x22a0a5] Binary Images: 0x100000000 - 0x100016fff com.apple.AirPortBaseStationAgent 1.5.4 (154.2) <73DF13C1-AF86-EC2C-9056-8D1946E607CF> /System/Library/CoreServices/AirPort Base Station Agent.app/Contents/MacOS/AirPort Base Station Agent 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: AppleSpell [3041] Path: /System/Library/Services/AppleSpell.service/Contents/MacOS/AppleSpell UID: 501 Thread 999a000 DispatchQueue 1 User stack: 16 ??? (in AppleSpell + 5852) [0x1000016dc] 16 ??? (in AppleSpell + 6508) [0x10000196c] 16 -[NSSpellServer run] + 72 (in Foundation) [0x7fff81d3b796] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 8a9e7a8 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Binary Images: 0x100000000 - 0x1000a9fef com.apple.AppleSpell 1.6.1 (61.1) <6DE57CC1-77A0-BC06-45E7-E1EACEBE1A88> /System/Library/Services/AppleSpell.service/Contents/MacOS/AppleSpell 0x7fff81cbc000 - 0x7fff81f3dfe7 com.apple.Foundation 6.6.1 (751.14) <767349DB-C486-70E8-7970-F13DB4CDAF37> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: autofsd [52] Path: /usr/libexec/autofsd UID: 0 Thread 79933d4 DispatchQueue 1 User stack: 16 ??? (in autofsd + 5340) [0x1000014dc] 16 ??? (in autofsd + 6461) [0x10000193d] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 75997a8 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Binary Images: 0x100000000 - 0x100001ff7 autofsd ??? (???) 
<29276FAC-AEA8-1520-5329-C75F9D453D6C> /usr/libexec/autofsd 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: blued [51] Path: /usr/sbin/blued UID: 0 Thread 7993000 DispatchQueue 1 User stack: 16 ??? (in blued + 5016) [0x100001398] 16 ??? (in blued + 152265) [0x1000252c9] 16 -[NSRunLoop(NSRunLoop) run] + 77 (in Foundation) [0x7fff81d07903] 16 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 270 (in Foundation) [0x7fff81d07a24] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 70db000 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 84d2000 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 select$DARWIN_EXTSN + 10 (in libSystem.B.dylib) [0x7fff878b09e2] Kernel stack: 16 sleep + 52 [0x487f93] Binary Images: 0x100000000 - 0x100044fff blued ??? (???) <ECD752C9-F98E-3052-26BF-DC748281C992> /usr/sbin/blued 0x7fff81cbc000 - 0x7fff81f3dfe7 com.apple.Foundation 6.6.1 (751.14) <767349DB-C486-70E8-7970-F13DB4CDAF37> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: check_afp [84504] Path: /System/Library/Filesystems/AppleShare/check_afp.app/Contents/MacOS/check_afp UID: 0 Thread 1140f000 DispatchQueue 1 User stack: 16 ??? (in check_afp + 5596) [0x1000015dc] 16 ??? (in check_afp + 12976) [0x1000032b0] 16 ??? (in check_afp + 6664) [0x100001a08] 16 ??? (in check_afp + 6520) [0x100001978] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 13ad8b7c DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 13ad6b7c User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 ??? 
(in check_afp + 13071) [0x10000330f] 16 mach_msg_server_once + 285 (in libSystem.B.dylib) [0x7fff878b2417] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 13ad87a8 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 select$DARWIN_EXTSN + 10 (in libSystem.B.dylib) [0x7fff878b09e2] Kernel stack: 16 sleep + 52 [0x487f93] Binary Images: 0x100000000 - 0x100004ff7 com.apple.check_afp 2.0 (2.0) <EE865A7B-8CDC-7649-58E1-6FE2B43F7A73> /System/Library/Filesystems/AppleShare/check_afp.app/Contents/MacOS/check_afp 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: configd [14] Path: /usr/libexec/configd UID: 0 Thread 704a3d4 DispatchQueue 1 User stack: 16 start + 52 (in configd) [0x100001488] 16 main + 2051 (in configd) [0x100001c9e] 16 server_loop + 72 (in configd) [0x1000024f4] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 6e70000 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 74a7b7c User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 plugin_exec + 1440 (in configd) [0x100003c5b] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 7560000 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 _io_pm_force_active_settings + 2266 (in PowerManagement) [0x10050f968] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 75817a8 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 select$DARWIN_EXTSN + 10 (in libSystem.B.dylib) [0x7fff878b09e2] Kernel stack: 16 sleep + 52 [0x487f93] Thread 8b1db7c User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 16 workqueue_thread_yielded + 562 [0x4cb6ae] Binary Images: 0x100000000 - 0x100026ff7 configd ??? (???) 
<58C02CBA-5556-4CDC-2763-814C4C7175DE> /usr/libexec/configd 0x10050c000 - 0x10051dfff com.apple.SystemConfiguration.PowerManagement 160.0.0 (160.0.0) <0AC3D2ED-919E-29C7-9EEF-629FBDDA6159> /System/Library/SystemConfiguration/PowerManagement.bundle/Contents/MacOS/PowerManagement 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: coreaudiod [114] Path: /usr/sbin/coreaudiod UID: 202 Thread 83b93d4 DispatchQueue 1 User stack: 16 ??? (in coreaudiod + 3252) [0x100000cb4] 16 ??? (in coreaudiod + 26505) [0x100006789] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 847e3d4 DispatchQueue 2 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 854c000 User stack: 3 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 3 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 3 workqueue_thread_yielded + 562 [0x4cb6ae] Binary Images: 0x100000000 - 0x10001ffef coreaudiod ??? (???) <A060D20F-A6A7-A3AE-84EC-11D7D7DDEBC6> /usr/sbin/coreaudiod 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: coreservicesd [66] Path: /System/Library/CoreServices/coreservicesd UID: 0 Thread 7994000 DispatchQueue 1 User stack: 16 ??? 
(in coreservicesd + 3756) [0x100000eac] 16 _CoreServicesServerMain + 522 (in CarbonCore) [0x7fff8327a972] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread 76227a8 User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 read + 10 (in libSystem.B.dylib) [0x7fff87877426] Kernel stack: 16 lo64_unix_scall + 77 [0x29e3fd] 16 unix_syscall64 + 617 [0x4ee947] 16 read_nocancel + 158 [0x496add] 16 write + 312 [0x49634d] 16 get_pathbuff + 3054 [0x3023db] 16 tsleep + 105 [0x4881ce] 16 wakeup + 786 [0x487da7] 16 thread_block + 33 [0x226fb5] 16 thread_block_reason + 331 [0x226f27] 16 thread_dispatch + 1950 [0x226c88] 16 machine_switch_context + 753 [0x2a5a37] Thread 7622b7c User stack: 16 thread_start + 13 (in libSystem.B.dylib) [0x7fff878a5e41] 16 _pthread_start + 331 (in libSystem.B.dylib) [0x7fff878a5f8e] 16 fmodWatchConsumer + 347 (in CarbonCore) [0x7fff8322f23f] 16 __semwait_signal + 10 (in libSystem.B.dylib) [0x7fff878a79ee] Kernel stack: 16 semaphore_wait_continue + 0 [0x22a0a5] Thread 79913d4 User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 244 (in libSystem.B.dylib) [0x7fff87887286] 16 _dispatch_queue_invoke + 185 (in libSystem.B.dylib) [0x7fff8788775c] 16 kevent + 10 (in libSystem.B.dylib) [0x7fff87885bba] Kernel stack: 16 kevent + 97 [0x471745] Thread 84d2b7c User stack: 16 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 16 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 16 workqueue_thread_yielded + 562 [0x4cb6ae] Thread 9b643d4 User stack: 15 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 15 __workq_kernreturn + 10 (in libSystem.B.dylib) [0x7fff878869da] Kernel stack: 16 workqueue_thread_yielded + 562 [0x4cb6ae] Binary Images: 0x100000000 - 0x100000fff coreservicesd ??? (???) <D804E55B-4376-998C-AA25-2ADBFDD24414> /System/Library/CoreServices/coreservicesd 0x7fff831cb000 - 0x7fff834fdfef com.apple.CoreServices.CarbonCore 861.2 (861.2) <39F3B259-AC2A-792B-ECFE-4F3E72F2D1A5> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x7fff86e3b000 - 0x7fff86faeff7 com.apple.CoreFoundation 6.6.1 (550.13) <1E952BD9-37C6-16BE-B2F0-CD92A6283D37> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: cron [31] Path: /usr/sbin/cron UID: 0 Thread 75acb7c DispatchQueue 1 User stack: 16 ??? (in cron + 2872) [0x100000b38] 16 ??? (in cron + 3991) [0x100000f97] 16 sleep + 61 (in libSystem.B.dylib) [0x7fff878f5090] 16 __semwait_signal + 10 (in libSystem.B.dylib) [0x7fff878a79ee] Kernel stack: 16 semaphore_wait_continue + 0 [0x22a0a5] Binary Images: 0x100000000 - 0x100006fff cron ??? (???) <3C5DCC7E-B6E8-1318-8E00-AB721270BFD4> /usr/sbin/cron 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) 
<526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: cvmsServ [104] Path: /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/cvmsServ UID: 0 Thread 761f3d4 DispatchQueue 1 User stack: 16 ??? (in cvmsServ + 4100) [0x100001004] 16 ??? (in cvmsServ + 23081) [0x100005a29] 16 mach_msg_server + 597 (in libSystem.B.dylib) [0x7fff878ea1c8] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Binary Images: 0x100000000 - 0x100008fff cvmsServ ??? (???) <6200AD80-4159-5656-8736-B72B7388C461> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/cvmsServ 0x7fff8786c000 - 0x7fff87a2aff7 libSystem.B.dylib ??? (???) <526DD3E5-2A8B-4512-ED97-01B832369959> /usr/lib/libSystem.B.dylib Process: DirectoryService [11] Path: /usr/sbin/DirectoryService UID: 0 Thread 70db7a8 DispatchQueue 1 User stack: 16 start + 52 (in DirectoryService) [0x10000da74] 16 main + 3086 (in DirectoryService) [0x10000e68a] 16 CFRunLoopRun + 70 (in CoreFoundation) [0x7fff86e859b6] 16 CFRunLoopRunSpecific + 575 (in CoreFoundation) [0x7fff86e85c2f] 16 __CFRunLoopRun + 1698 (in CoreFoundation) [0x7fff86e867a2] 16 mach_msg_trap + 10 (in libSystem.B.dylib) [0x7fff8786ce3a] Kernel stack: 16 ipc_mqueue_receive_continue + 0 [0x210aa3] Thread <multiple> DispatchQueue 6 User stack: 17 start_wqthread + 13 (in libSystem.B.dylib) [0x7fff87886a55] 17 _pthread_wqthread + 353 (in libSystem.B.dylib) [0x7fff87886bb8] 16 _dispatch_worker_thread2 + 231 (in libSystem.B.dylib) [0x7fff87887279] 16 _dispatch_call_block_and_release + 15 (in libSystem.B.dylib) [0x7fff878a8ce8] 16 syscall + 10 (in libSystem.B.dylib) [0x7fff878a92da] 1 _disp
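Since every hung open in the dump above ends up in ahs_hermes_CoreAudio_init from the Instant Hijack Server module, my next step is to take that module out of the picture and re-test (a sketch of what I plan to try; the path is taken from the Binary Images section above):

# Confirm the Rogue Amoeba hermes module from the crash report is present:
ls -l "/usr/local/hermes/modules/Instant Hijack Server.hermesmodule"

# Move it aside, reboot, and re-test iTunes and the Flash plugin; if the
# hangs stop, the Instant Hijack audio component is the likely culprit:
sudo mv "/usr/local/hermes/modules/Instant Hijack Server.hermesmodule" ~/Desktop/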

    Read the article

  • SQLAuthority News – Interview with SQL Server MVP Madhivanan – A Real Problem Solver

    - by pinaldave
Madhivanan (SQL Server MVP) is a real community hero. He is known for his two skills – 1) helping the community and 2) helping the community. I have met him many times, and every time I feel that if anybody in the online world needs help, Madhivanan does his best to reach out to them and solve the problem. His name is not new if you are reading this blog or have ever asked a question in any online SQL forum. He is always there to help. When Madhivanan has time, he even helps people on this blog as well. He spends his valuable time helping the community. He recently crossed over 1000 helpful comments on this blog. On that occasion, I interviewed him to find out if he has any life outside SQL.

Q 1. Tell us something about yourself.
I am Madhivanan, an MSc Computer Science graduate from Chennai, India, working as a Lead Analyst-Project at Ellaar Infotek Solutions Private Limited. I am basically a developer who started with Visual Basic 6.0, SQL Server 2000 and Crystal Reports 8. As the years went on, I started working more on writing queries in SQL Server in most of the projects developed in my company. I have a good level of knowledge in Oracle, MySQL and PostgreSQL as well. Now I am leading a project developed on Windows Azure.

Q 2. What motivates you to help people in the community and forums?
When I got errors during application development in the early days of my career, I found good solutions on online forums and weblogs. So I decided to help others if possible. I visit forums and help people when I know the answer to their questions. I am one of the leading posters at www.sqlteam.com and also a moderator at www.sql-server-performance.com. I also take part in Visual Basic and Crystal Reports forums. I have been a SQL Server MVP since 2007.

Q 3. Your personal life is not much known. Tell us something about your personal life.
I am a happily married person. My wife is a B.Pharm graduate. I have a son who is now 18 months old.

Q 4. Where can we read more about your community activity?
I have a blog at http://beyondrelational.com/blogs/madhivanan where you can find most of my T-SQL posts.

Q 5. When not working with SQL, what do you do?
When not working with SQL, I spend time playing with my son, reading magazines and watching TV.

Madhivanan, for your work and help to the community, a true salute to you. Hats off, my friend.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

This is the final blog post in the pre-release series. In it we are going to share some of the updates coming to our reporting solution in Q1 2010.

A new declarative Data Source Engine will be added to Telerik Reporting that will allow full control over data management and deliver significant gains in rendering performance and memory consumption. Some of the engine's new features will be:

- Data source parameters: these parameters are used to limit the data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side; only the queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on the queried data only, and only the queried data needs to be stored in memory, instead of the whole dataset, as was the case with the old approach.
- Support for stored procedures: these assist in achieving a consistent implementation of logic across applications and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored procedures will not only save development time, but will also improve performance, because each stored procedure is compiled on the database server once and then reused. In Telerik Reporting, stored procedures can also be parameterized, with elements of the SQL statement bound to parameters. These parameterized SQL queries are handled through the data source parameters and evaluated at run time. Using parameterized SQL queries improves performance and decreases the memory footprint of your application, because they are applied directly on the database server and only the necessary data is downloaded to the middle tier or client machine.
- Calculated fields through expressions: with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user-defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports.
- Improved performance and an optimized in-memory OLAP engine: the new data source comes with several improvements in how aggregates are calculated and memory is managed. As a result, you may see between a 30% (for simpler reports) and 400% (for calculation-intensive reports) improvement in rendering performance, and about a 50% decrease in memory consumption.
- Full design-time support through wizards: declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reporting's essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available at design time in VS2005/VS2008/VS2010.

More features will be revealed on the product's "what's new" page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th, which will be dedicated to the updates in Telerik Reporting Q1 2010.
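To give you an idea of what the stored procedure support looks like from the database side, here is a minimal, illustrative T-SQL procedure (the table and column names are invented for the example); its @Year parameter is the kind of thing you would bind to a report parameter through the new data source parameters:

CREATE PROCEDURE dbo.GetOrdersByYear
    @Year int
AS
BEGIN
    SET NOCOUNT ON;
    -- The filter runs on the database server, so only the queried
    -- rows ever reach the reporting engine.
    SELECT OrderID, CustomerID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE YEAR(OrderDate) = @Year;
END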

    Read the article

  • PowerShell Control over Nikon D3000 Camera

My wife got me a Nikon D3000 camera for Christmas last year, and I'm loving it but still trying to wrap my head around some of its features. For instance, when you plug it into a computer via USB, it doesn't show up as a drive like most cameras I'm used to, but rather it shows up as Computer\D3000. After a bit of research, I've learned that this is because it implements the MTP/PTP protocol, and thus doesn't actually let Windows mount the camera's storage as a drive letter. Nikon describes the use of the MTP and PTP protocols in their cameras here.

What I'm really trying to do is gain access to the camera's file system via PowerShell. I've been using a very handy PowerShell script to pull pictures off of my cameras and organize them into folders by date. I'd love to be able to do the same thing with my Nikon D3000, but so far I haven't been able to figure out how to get access to the files in PowerShell. If you know how, I'd appreciate any links/tips you can provide. All I could find is a shareware product called PTPdrive, which I'm not prepared to shell out money for (yet). (And yes, you can do much the same thing with Windows 7's Import Pictures and Videos wizard, which is pretty good too.)

However, in my searching, I did find some really cool stuff you can do with PowerShell and one of these cameras, like actually taking pictures via PowerShell commands. Credit for this goes to James O'Neill and Mark Wilson. Here's what I was able to do:

Taking Pictures via PowerShell with D3000

First, connect your camera, turn it on, and launch PowerShell. Execute the following commands to see what commands your device supports:

$dialog = New-Object -ComObject "WIA.CommonDialog"
$device = $dialog.ShowSelectDevice()
$device.Commands

You should see something like this:

Now, to take a picture, simply point your camera at something and then execute this command:

$device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}")

Imagine my surprise when this actually took a picture (with auto-focus).

Imagine what you could do with a camera completely under the control of your computer. Time-lapse photography would be pretty simple, for instance, with a very simple loop that takes a picture and then sleeps for a minute (or whatever time period). Hooked up to a laptop for portability (and an A/C power supply), this would be pretty trivial to implement. I may have to give it a shot and report back.
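In case you want to try the time-lapse idea yourself, here is roughly what that loop could look like, reusing the take-picture command ID from above (consider it a sketch; I have only verified the single-shot case):

$dialog = New-Object -ComObject "WIA.CommonDialog"
$device = $dialog.ShowSelectDevice()
$takePicture = "{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}"  # same command ID as above
for ($i = 0; $i -lt 10; $i++) {                          # ten frames...
    $device.ExecuteCommand($takePicture) | Out-Null
    Start-Sleep -Seconds 60                              # ...one minute apart
}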

    Read the article

  • PSTN Trunk TDM400P Install on Asterisk / Trixbox

    - by Jona
Hey All, I'm trying to get a TDM400P card with an FXO module to connect to our PSTN line. The card is correctly detected by Linux:

[trixbox1.localdomain asterisk]# lspci
00:09.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface

I've run setup-pstn, which produces the following output:

[trixbox1.localdomain ~]# setup-pstn
--------------------------------------------------------------
Detecting PSTN cards and USB PSTN Devices
--------------------------------------------------------------
Hardware present!
STOPPING ASTERISK
Asterisk Stopped
STOPPING FOP SERVER
FOP Server Stopped
Unloading DAHDI hardware modules: done
Loading DAHDI hardware modules:
wct4xxp: [ OK ]
wcte12xp: [ OK ]
wct1xxp: [ OK ]
wcte11xp: [ OK ]
wctdm24xxp: [ OK ]
opvxa1200: [ OK ]
wcfxo: [ OK ]
wctdm: [ OK ]
wcb4xxp: [ OK ]
wctc4xxp: [ OK ]
xpp_usb: [ OK ]
Running dahdi_cfg: [ OK ]
SETTING FILE PERMISSIONS
Permissions OK
STARTING ASTERISK
Asterisk Started
STARTING FOP SERVER
FOP Server Started
Chan   Extension  Context    Language  MOH Interpret  Blocked  State
pseudo            default    en        default                 In Service
1                 from-pstn  en        default                 In Service

dahdi_scan returns:

[trixbox1.localdomain ~]# dahdi_scan
[1]
active=yes
alarms=OK
description=Wildcard TDM400P REV I Board 5
name=WCTDM/4
manufacturer=Digium
devicetype=Wildcard TDM400P REV I
location=PCI Bus 00 Slot 10
basechan=1
totchans=4
irq=209
type=analog
port=1,FXO
port=2,none
port=3,none
port=4,none

And Asterisk can see the channel:

trixbox1*CLI> dahdi show channel 1
Channel: 1
File Descriptor: 14
Span: 1
Extension:
Dialing: no
Context: from-pstn
Caller ID:
Calling TON: 0
Caller ID name:
Mailbox: none
Destroy: 0
InAlarm: 1
Signalling Type: FXS Kewlstart
Radio: 0
Owner: <None>
Real: <None>
Callwait: <None>
Threeway: <None>
Confno: -1
Propagated Conference: -1
Real in conference: 0
DSP: no
Busy Detection: no
TDD: no
Relax DTMF: no
Dialing/CallwaitCAS: 0/0
Default law: ulaw
Fax Handled: no
Pulse phone: no
DND: no
Echo Cancellation: 128 taps (unless TDM bridged), currently OFF
Actual Confinfo: Num/0, Mode/0x0000
Actual Confmute: No
Hookstate (FXS only): Onhook

A cat of /etc/asterisk/dahdi-channels.conf shows:

[trixbox1.localdomain ~]# cat /etc/asterisk/dahdi-channels.conf
; Autogenerated by /usr/sbin/dahdi_genconf on Tue May 25 17:45:13 2010
; If you edit this file and execute /usr/sbin/dahdi_genconf again,
; your manual changes will be LOST.
; Dahdi Channels Configurations (chan_dahdi.conf)
;
; This is not intended to be a complete chan_dahdi.conf. Rather, it is intended
; to be #include-d by /etc/chan_dahdi.conf that will include the global settings
;
; Span 1: WCTDM/4 "Wildcard TDM400P REV I Board 5" (MASTER)
;;; line="1 WCTDM/4/0 FXSKS (SWEC: MG2)"
signalling=fxs_ks
callerid=asreceived
group=0
context=from-pstn
channel => 1
callerid=
group=
context=default

I have configured a "ZAP Trunk (DAHDI compatibility Mode)" with the ZAP identifier 1 and an outbound route, but whenever I try to make an external call via it I get the "All circuits are busy now, please try your call again later" message. I have one outbound route, which uses the dial pattern 9|. and the trunk Zap/1, and one ZAP trunk, which uses ZAP identifier (trunk name) 1 and has no dial rules. The FXO module is directly connected to our phone line from BT via a BT-to-RJ11 cable.
When running tail -f /var/log/asterisk/full and placing a call, I get the following output:

[May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP TOS bits 184
[May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP CoS mark 5
[May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP TOS bits 136
[May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP CoS mark 6
[May 26 11:10:52] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:1] Macro("SIP/801-b7ce8c28", "user-callerid,SKIPTTL,") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:1] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:2] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:3] ExecIf("SIP/801-b7ce8c28", "1?Set(REALCALLERIDNUM=801)") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:4] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:5] Set("SIP/801-b7ce8c28", "AMPUSERCIDNAME=Jona") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:6] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:7] Set("SIP/801-b7ce8c28", "AMPUSERCID=801") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:8] Set("SIP/801-b7ce8c28", "CALLERID(all)="Jona" <801>") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:9] Set("SIP/801-b7ce8c28", "REALCALLERIDNUM=801") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:10] ExecIf("SIP/801-b7ce8c28", "0?Set(CHANNEL(language)=)") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:11] GotoIf("SIP/801-b7ce8c28", "1?continue") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-user-callerid,s,20)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:20] NoOp("SIP/801-b7ce8c28", "Using CallerID "Jona" <801>") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:2] Set("SIP/801-b7ce8c28", "_NODEST=") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:3] Macro("SIP/801-b7ce8c28", "record-enable,801,OUT,") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:1] GotoIf("SIP/801-b7ce8c28", "1?check") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-record-enable,s,4)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:4] AGI("SIP/801-b7ce8c28", "recordingcheck,20100526-111052,1274868652.1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Launched AGI Script /var/lib/asterisk/agi-bin/recordingcheck
[May 26 11:10:52] VERBOSE[2858] logger.c: recordingcheck,20100526-111052,1274868652.1: Outbound recording not enabled
[May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28>AGI Script recordingcheck completed, returning 0
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:5] MacroExit("SIP/801-b7ce8c28", "") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:4] Macro("SIP/801-b7ce8c28", "dialout-trunk,1,01483890915,") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:1] Set("SIP/801-b7ce8c28", "DIAL_TRUNK=1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:2] GosubIf("SIP/801-b7ce8c28", "0?sub-pincheck,s,1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:3] GotoIf("SIP/801-b7ce8c28", "0?disabletrunk,1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:4] Set("SIP/801-b7ce8c28", "DIAL_NUMBER=01483890915") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:5] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=tr") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:6] Set("SIP/801-b7ce8c28", "OUTBOUND_GROUP=OUT_1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:7] GotoIf("SIP/801-b7ce8c28", "1?nomax") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s,9)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:9] GotoIf("SIP/801-b7ce8c28", "0?skipoutcid") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:10] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:11] Macro("SIP/801-b7ce8c28", "outbound-callerid,1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:1] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=)") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:2] ExecIf("SIP/801-b7ce8c28", "0?Set(REALCALLERIDNUM=801)") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:3] GotoIf("SIP/801-b7ce8c28", "1?normcid") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,6)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:6] Set("SIP/801-b7ce8c28", "USEROUTCID=") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:7] Set("SIP/801-b7ce8c28", "EMERGENCYCID=") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:8] Set("SIP/801-b7ce8c28", "TRUNKOUTCID=") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:9] GotoIf("SIP/801-b7ce8c28", "1?trunkcid") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,12)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:12] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:13] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:14] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=prohib_passed_screen)") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:12] ExecIf("SIP/801-b7ce8c28", "0?AGI(fixlocalprefix)") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:13] Set("SIP/801-b7ce8c28", "OUTNUM=01483890915") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:14] Set("SIP/801-b7ce8c28", "custom=DAHDI/1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:15] ExecIf("SIP/801-b7ce8c28", "0?Set(DIAL_TRUNK_OPTIONS=M(setmusic^))") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:16] Macro("SIP/801-b7ce8c28", "dialout-trunk-predial-hook,") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk-predial-hook:1] MacroExit("SIP/801-b7ce8c28", "") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:17] GotoIf("SIP/801-b7ce8c28", "0?bypass,1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:18] GotoIf("SIP/801-b7ce8c28", "0?customtrunk") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:19] Dial("SIP/801-b7ce8c28", "DAHDI/1/01483890915,300,") in new stack
[May 26 11:10:52] WARNING[2858] app_dial.c: Unable to create channel of type 'DAHDI' (cause 0 - Unknown)
[May 26 11:10:52] VERBOSE[2858] logger.c: == Everyone is busy/congested at this time (1:0/0/1)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:20] Goto("SIP/801-b7ce8c28", "s-CHANUNAVAIL,1") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,1)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:1] GotoIf("SIP/801-b7ce8c28", "1?noreport") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,3)
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:3] NoOp("SIP/801-b7ce8c28", "TRUNK Dial failed due to CHANUNAVAIL (hangupcause: 0) - failing through to other trunks") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:5] Macro("SIP/801-b7ce8c28", "outisbusy,") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:1] Playback("SIP/801-b7ce8c28", "all-circuits-busy-now,noanswer") in new stack
[May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'all-circuits-busy-now.ulaw' (language 'en')
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:2] Playback("SIP/801-b7ce8c28", "pls-try-call-later,noanswer") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'pls-try-call-later.ulaw' (language 'en')
[May 26 11:10:54] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking
[May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (macro-outisbusy, s, 2) exited non-zero on 'SIP/801-b7ce8c28' in macro 'outisbusy'
[May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (from-internal, 901483890915, 5) exited non-zero on 'SIP/801-b7ce8c28'
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [h@from-internal:1] Macro("SIP/801-b7ce8c28", "hangupcall") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:1] ResetCDR("SIP/801-b7ce8c28", "vw") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:2] NoCDR("SIP/801-b7ce8c28", "") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:3] GotoIf("SIP/801-b7ce8c28", "1?skiprg") in new stack
[May 26 11:10:54] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,6)
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:6] GotoIf("SIP/801-b7ce8c28", "1?skipblkvm") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,9)
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:9] GotoIf("SIP/801-b7ce8c28", "1?theend") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,11)
[May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:11] Hangup("SIP/801-b7ce8c28", "") in new stack
[May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (macro-hangupcall, s, 11) exited non-zero on 'SIP/801-b7ce8c28' in macro 'hangupcall'
[May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (from-internal, h, 1) exited non-zero on 'SIP/801-b7ce8c28'

I'm guessing I've missed a configuration step somewhere but have no idea where; any help greatly appreciated.
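In case it helps narrow things down: the "Unable to create channel of type 'DAHDI'" warning makes me think chan_dahdi never registered channel 1, and the "InAlarm: 1" in the dahdi show channel output above suggests the port may not be seeing line voltage either, so I'll re-seat the BT cable as well. The checks I'm planning to run (a sketch):

# Make sure the generated channel definitions are actually pulled in:
grep '#include' /etc/asterisk/chan_dahdi.conf    # expect dahdi-channels.conf here

# Reload the DAHDI channel driver and confirm channel 1 is registered:
asterisk -rx "module reload chan_dahdi.so"
asterisk -rx "dahdi show channels"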

    Read the article

< Previous Page | 354 355 356 357 358 359 360 361 362 363 364 365  | Next Page >