Search Results

Search found 24720 results on 989 pages for 'path info'.


  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest Scripts
    Some scripts act on information returned by the server, e.g. act on the first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way. As the load test cases should be carried out in a comparable and straightforward manner, simply cancelling a transaction whenever a collision occurs is clearly not an option. As you increase the number of virtual users, this approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user), but later steps would rarely be completed successfully, or at all, depending on the application logic.
    A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue is allowed to carry out the critical steps of the transaction (retrieve list of action points, assign an action point to the virtual user) at any one time. Once a virtual user has passed the critical path it dequeues itself from the head of the queue and continues with its actions. In theory this allows virtual users to run all steps of the transaction that are not part of the critical path in parallel. In practice this is rarely the case, and it does not allow adding more than N users to a transaction without causing delays due to virtual users waiting in the queue, N being the total transaction time divided by the sum of the times of all critical steps in the transaction (for example, a 60-second transaction with 6 seconds of critical steps saturates at roughly 10 concurrent users).
    This limit can be circumvented by allowing multiple queues to act on individual segments of the list of actions, e.g. a per-country filter, an "ends with 0..9" filter, etc. This would require additional handling of these queues, or of slots for the virtual users at the head of each queue, in order to maintain mutually exclusive access to the first element in the list returned by the server at any one time during the load test. Such an improved handling of multiple queues and/or multiple slots is beyond the scope of this paper.
    Shared Data Services Pre-Requisites
    Start WebLogic Server to host Shared Data Services
    You will have to make sure that your WebLogic Server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If, however, you installed the default package including OLT and OTM, you may follow the instructions below to start and verify the WebLogic installation. To start the WebLogic Server deployed underneath Oracle Load Testing and/or Oracle Test Manager, go to your Start menu, Oracle Application Testing Suite, and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu. To verify the service has been started, run the Microsoft Management Console for Services by selecting Run from the Start menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service; once it has changed its status from Starting to Started you can proceed to verify the login. Please note that this may take several minutes, up to 10 minutes depending on your CPU horsepower.
    Verify WebLogic Server user credentials
    You will have to make sure that your WebLogic Server is installed and started.
    Next open the Oracle WebLogic Server Administration Console at http://localhost:8088/console. It may take a while until the application is deployed and started; a placeholder page may be displayed until the Administration Console has been deployed on the fly. Afterwards you can log in using the username oats and the password that you selected at install time for Application Testing Suite administrative purposes. This will bring up the Home page of your WebLogic Server. You have actually already verified that you are able to log in with these credentials. However, if you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab. Here you could add users to your WebLogic Server which could be used in the later steps. Details on the Groups required for such a custom user to work exceed this quick overview and should be looked up in the WebLogic Server Administration Guide.
    Shared Data Services pre-requisites for Load testing
    OpenScript Preferences have to be set to enable Encryption and provide a default Shared Data Service Connection for Playback. These are the pre-requisites for load testing with Shared Data Services. Please note that the usage of the Connection Parameters (an individual directive in the script) for Shared Data Services did not play back reliably in the current version 9.20.0370 of Oracle Load Testing (OLT), and encryption of credentials still seemed to be mandatory as well.
    General Encryption settings
    Select OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.
    Enable global shared data access credentials
    Select OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for your WebLogic Server to host Shared Data Services. Please note that you may want to replace localhost in Address with the host's real name in case you plan to run load tests with Loadtest Agents running on remote systems.
    Queued Processing of Transactions
    Enable Shared Data Services Module in Script Properties
    The Shared Data Services Module has to be enabled for each script that wants to employ the Shared Data Service Queue functionality in OpenScript. It can be enabled under the Script menu by selecting Script Properties. On the Script Properties dialog select the Modules section and check Shared Data to enable the Shared Data Service Module for your script.
    Checking the Shared Data Services option will effectively add a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript:
        @ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;
    Record your script
    Record your script as usual and then add the following pieces for queue handling in the Initialize code block, before the first step and after the last step of your critical path, and in the Finalize code block. The Java code to be added at the individual locations is explained in the following sections in full detail.
    Create a Shared Data Queue in Initialize
    To create a Shared Data Queue go to the Java view of your script and enter the following statements in the initialize() code block:
        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);
    This will create an instance of the Shared Data Queue object named queueA which is maintained for up to 120 minutes. If you want to use the code for multiple scripts, make sure to use a different queue name for each one here and in the subsequent steps. You may even consider using a dynamic queue name based on the filters of the result list being concurrently accessed.
    Prepare a unique id for each Iteration
    In order to keep track of individual virtual users in our queue we need to create a unique identifier from the virtual user id and the username in use, right after retrieving the next record from our databank file:
        getDatabank("Usernames").getNextDatabankRecord();
        getVariables().set("usernameValue1","VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
        String usernameValue = getVariables().get("usernameValue1");
        info("Now running virtual user " + usernameValue);
    As you can see from the code block above, we have set the OpenScript variable usernameValue1 to VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}, which is a concatenation of the virtual user id and the iteration number for general uniqueness, as well as the username from our databank, the timestamp and a random number to make it even more unique and to ease spotting of errors. Not all of these fields are actually required to make it really unique, but adding the queue name may also be considered to help troubleshoot multiple queues. The value is then retrieved with the getVariables().get() method call and assigned to the usernameValue String used throughout the script.
    Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered in order to remove the concurrency of multiple virtual users running with the same userid and therefore accessing the same "My Inbox" in step 6. This will effectively give each virtual user its own userid from the databank file. Make sure you have enough userids to remove this second hurdle.
    Enqueue and attend Queue before Critical Path
    To maintain the right order of virtual users being allowed into the critical path of the transaction, the following pseudo step has to be added in front of the first critical step.
    In the case of this example this is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us.
        beginStep("[0] Waiting in the Queue", 0);
        {
            info("Enqueued virtual user " + usernameValue + " at the end of queueA");
            sharedData.offerLast("queueA", usernameValue);
            info("Wait until the user is the first in queueA");
            String queueValue1 = null;
            do {
                // we wait for at least 0.7 seconds before we check the head of the
                // queue. This is the time it takes one user to move through the
                // critical path, i.e. pass steps [5] Enter country and [6] Assign
                // to me
                Thread.sleep(700);
                queueValue1 = (String) sharedData.peekFirst("queueA");
                info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );
                info("The current user is '"+ usernameValue + "' " + usernameValue.getClass() + " length " + usernameValue.length() + ": indexOf " + usernameValue.indexOf(queueValue1) + " equals " + usernameValue.equals(queueValue1) );
            } while ( queueValue1.indexOf(usernameValue) < 0 );
            info("Now the user is the first in queueA");
        }
        endStep();
    This will enqueue the username at the tail of our queue. It will wait for at least 700 milliseconds, the time it takes for one user to exit the critical path, and then compare the head of our queue with its own username. This last step is repeated while the two are not equal (indexOf less than zero). If they are equal, indexOf yields a value of zero or larger and we perform the critical steps.
    Dequeue after Critical Path
    After the virtual user has left the critical path and completed its last step, the following code block needs to dequeue the virtual user. In the case of our example this is right after the action has actually been assigned to the virtual user. This will allow the next virtual user to retrieve the list of actions still available and in turn let it make its selection/assignment.
        info("Get and remove the current user from the head of queueA");
        String pollValue1 = (String) sharedData.pollFirst("queueA");
    The current user is removed from the head of the queue. The next one will now be able to match its username against the head of the queue.
    Clear and Destroy Queue for Finish
    When the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason.
        info("Clear queueA");
        sharedData.clearQueue("queueA");
        info("Destroy queueA");
        sharedData.destroyQueue("queueA");
    The users waiting in queueA are cleared and the queue is destroyed. If you have scripts still executing, they will be caught in a loop.
    I found it better to maintain a separate Reset Queue script which contained only the following code in the initialize() block. I call this script to make sure the queue is cleared in between multiple load test runs. This script could even be added as the first one in a larger scenario, which would execute it only once at the very start of the load test and make sure the queues do not contain any stale entries.
        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);
        info("Clear queueA");
        sharedData.clearQueue("queueA");
    This will create a Shared Data Queue instance of queueA and clear all entries from this queue.
    Monitoring Queue
    While creating the scripts it was useful to monitor the contents, i.e. the current first user in the queue.
    The following code block will make sure the Shared Data Queue is accessible in the initialize() block:
        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);
    In the run() block the following code will continuously monitor the first element of the queue and write an informational message with the current username value to the Result window:
        info("Monitor the first users in queueA");
        String queueValue1 = null;
        do {
            queueValue1 = (String) sharedData.peekFirst("queueA");
            if (queueValue1 != null)
                info("The first user in queueA is currently: '" + queueValue1 + "' " + queueValue1.getClass() + " length " + queueValue1.length() );
        } while ( true );
    This script can be run from OpenScript in parallel to a load test performed by Oracle Load Testing. However, it is not recommended to run this in a production load test, as the performance impact is unknown. Accessing the queue's head with the peekFirst() method has been reported to take about 2 seconds in both OpenScript and OLT. It is advised to log a Service Request to see if this could be lowered in future releases of the Application Testing Suite, whereas pollFirst() and even offerLast(), which writes to the tail of the queue, usually returned after 0.1 seconds on average.
    Debugging Queue
    While debugging the scripts, the following was useful to remove single entries from the head of the queue, i.e. the current first user. The following code block will make sure the Shared Data Queue is accessible in the initialize() block:
        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);
    In the run() block the following code will remove the first element of the queue and write an informational message with the current username value to the Result window:
        info("Get and remove the current user from the head of queueA");
        String pollValue1 = (String) sharedData.pollFirst("queueA");
        info("The first user in queueA was currently: '" + pollValue1 + "' " + pollValue1.getClass() + " length " + pollValue1.length() );
    References
    Oracle Functional Testing OpenScript User's Guide Version 9.20 [E15488-05], Chapter 17 "Using the Shared Data Module", http://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zip
    Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help 11g Release 1 (10.3.4) [E13952-04], "Manage users and groups", http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm

    Read the article

  • Cannot delete .Trash-503 directory, returns a $RECYCLE.BIN.trashinfo: Input/output error

    - by Parto
    I cannot delete the .Trash-503 folder via GUI or terminal; it returns a $RECYCLE.BIN.trashinfo: Input/output error. Not even sudo rm -r or even a simple ls works in that trash directory. Check the terminal output below:
        subroot@subroot:~$ cd /media/xxxxx/
        subroot@subroot:/media/xxxxx$ rm .Trash-503/
        rm: cannot remove `.Trash-503/': Is a directory
        subroot@subroot:/media/xxxxx$ rm -r .Trash-503/
        rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error
        rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error
        rm: cannot remove `.Trash-503/info': Directory not empty
        subroot@subroot:/media/xxxxx$ sudo rm -r .Trash-503/
        [sudo] password for subroot:
        rm: cannot remove `.Trash-503/info/$RECYCLE.BIN.trashinfo': Input/output error
        rm: cannot remove `.Trash-503/info/found.000.trashinfo': Input/output error
        subroot@subroot:/media/xxxxx$ cd .Trash-503/
        subroot@subroot:/media/xxxxx/.Trash-503$ ls
        info
        subroot@subroot:/media/xxxxx/.Trash-503$ cd info/
        subroot@subroot:/media/xxxxx/.Trash-503/info$ ls
        ls: cannot access $RECYCLE.BIN.trashinfo: Input/output error
        ls: cannot access found.000.trashinfo: Input/output error
        found.000.trashinfo  $RECYCLE.BIN.trashinfo
        subroot@subroot:/media/xxxxx/.Trash-503/info$
    What's going on here and how can I delete this folder?
    EDIT: I tried checking and repairing the partition using gparted, only to get this error message:
        ERROR: Filesystem check failed!
        ERROR: 264 clusters are referenced multiple times.
        NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE!
        The usage of the /f parameter is very IMPORTANT! No modification was
        and will be made to NTFS by this software until it gets repaired.
    I don't have Windows installed, how can I run chkdsk /f from Ubuntu?

    Read the article

  • PHP PSR-0 + several namespaces in one file and autoload

    - by Nemoden
    I've been thinking for a while about defining several namespaces in one PHP file and so having several classes inside this file. Suppose I want to implement something like Doctrine\ORM\Query\Expr:
        Expr.php
        Expr
        |-- Andx.php
        |-- Base.php
        |-- Comparison.php
        |-- Composite.php
        |-- From.php
        |-- Func.php
        |-- GroupBy.php
        |-- Join.php
        |-- Literal.php
        |-- Math.php
        |-- OrderBy.php
        |-- Orx.php
        `-- Select.php
    It would be nice if I had all of this in one file - Expr.php:
        namespace Doctrine\ORM\Query;
        class Expr {
            // code
        }
        namespace Doctrine\ORM\Query\Expr;
        class Func {
            // code
        }
        // etc...
    What I'm thinking of is a directory naming convention and, unlike PSR-0, having several classes and namespaces in one file. It's best explained by the code:
        ls Doctrine/orm/query
        Expr.php
    that's it - only Expr.php. Since Expr.php is what I call a "meta-namespace" for Expr\Func, it makes sense to place all the classes inside Expr.php (as shown above). So, the vendor name still starts with an uppercased letter (Doctrine) and the other parts of the namespace start with a lowercased letter. We can write an autoloader so it would respect this notion:
        function load_class($class) {
            if (class_exists($class)) {
                return true;
            }
            // split the class name into path tokens on "_" and "\"
            // (the original explode(array(...), ...) call was not valid PHP)
            $tokenized_path = explode(DIRECTORY_SEPARATOR, str_replace(array("_", "\\"), DIRECTORY_SEPARATOR, $class));
            // array('Doctrine', 'orm', 'query', 'Expr', 'Func');
            //                                   ^^^^
            // first, we are looking for the first uppercased namespace part
            // and if it's not last (not the class name), we use it as a filename
            // and wipe away the rest to compose a path to the file we need to include
            if (FALSE !== ($meta_class_index = find_meta_class($tokenized_path))) {
                $new_tokenized_path = array_slice($tokenized_path, 0, $meta_class_index);
                $path_to_class = implode(DIRECTORY_SEPARATOR, $new_tokenized_path);
            } else {
                // no meta class found
                $path_to_class = implode(DIRECTORY_SEPARATOR, $tokenized_path);
            }
            if (file_exists($path_to_class.'.php')) {
                require_once $path_to_class.'.php';
            }
            return false;
        }
    Another reason to do so is to reduce the number of PHP files scattered among directories. Usually you check file existence before you require a file, to fail gracefully:
        file_exists($path_to_class.'.php');
    If you take a look at the actual Doctrine\ORM\Query\Expr code, you'll see they use all of the "inner classes", so you actually do:
        file_exists("/path/to/Doctrine/ORM/Query/Expr.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/AndX.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Base.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Comparison.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Composite.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/From.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Func.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/GroupBy.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Join.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Literal.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Math.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/OrderBy.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Orx.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Select.php");
    in your autoload, which causes quite a few I/O reads. Isn't it too much to check on each user's hit? I'm just putting this up for discussion. I want to hear from other PHP programmers what they think of it. And, of course, if you have a silver bullet addressing the problems I've described here, please share.
    I have also been wondering whether my vague question fits here; according to the FAQ it seems this question addresses a "software architecture" problem slash proposal. I'm sorry if my scribble may seem a bit clunky :) Thanks.
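    For readers comparing approaches, the lookup rule described above can be pinned down compactly. The following is an illustrative transcription in Python (not the poster's PHP; the function name and sample paths are hypothetical), just to show how a class name maps to a single "meta-namespace" file under this convention:
        import os

        def resolve_path(fully_qualified_class, base_dir=""):
            # Split e.g. "Doctrine\orm\query\Expr\Func" into its namespace tokens,
            # treating "_" like a namespace separator as in the PHP sketch above.
            tokens = fully_qualified_class.replace("_", "\\").split("\\")
            # The first token after the vendor that starts with an uppercase letter
            # and is not the class name itself is the "meta-namespace" file; every
            # class nested below it lives inside that one file.
            for i in range(1, len(tokens) - 1):
                if tokens[i][:1].isupper():
                    tokens = tokens[:i + 1]
                    break
            return os.path.join(base_dir, *tokens) + ".php"

        # Both the meta-class and a class nested inside it map to the same file,
        # so the autoloader only has to stat Doctrine/orm/query/Expr.php once.
        print(resolve_path("Doctrine\\orm\\query\\Expr\\Func"))  # Doctrine/orm/query/Expr.php
        print(resolve_path("Doctrine\\orm\\query\\Expr"))        # Doctrine/orm/query/Expr.php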

    Read the article

  • How to create a clip region from a path that includes the 'outline'?

    - by mackenir
    I am creating a rounded rectangle GraphicsPath (see red outline image below), and then using this as a clip region both when drawing graphics and as the Region of a Form. Unfortunately, although the path looks good, it doesn't work well as a region (see solid black image below). Is there a way that I can generate a clipping region from the path that includes all the 'outline' pixels of the path? Do I need to generate a bitmap and then process it to create a region?
    [Image: the rounded rectangle path]
    [Image: the path when used as a clip region]
    [Image: the discrepancy - red pixels are in the path outline but outside the region, blue pixels are in both]

    Read the article

  • Will the app file be synced from the dmgr side after we remove it from the installedApps path in WebSphere?

    - by wing2ofsky
    Do you know whether the app file will be synced from the dmgr side and regenerated after we remove it from the installedApps path? I've recently got one issue from a customer: they uploaded an image file into the WASNode installedApps app path manually. Afterwards, they removed that file manually again from the installedApps app path. But after restarting the application server process, that file was regenerated under the same installedApps path. So I suspect the file may have been resynced from the dmgr node, like an app file under the applications folder. However, first of all, I don't see that image file within the application EAR file in the DMGR applications folder. Moreover, I ran a test myself: if I delete a file from the installedApps app path, that file is never regenerated, even after the node sync completes. So does anybody know why? Thanks in advance.

    Read the article

  • How to check whether a path represented by a QString with German umlauts exists?

    - by MB
    Hey, I get a QString which represents a directory from a QLineEdit. Now I want to check whether a certain file exists in this directory. But if I try this with os.path.exists and os.path.join, I get in trouble when German umlauts occur in the directory path:
        # the directory coming from the user input in the QLineEdit;
        # I take this QString to the local 8-bit encoding and then make
        # a string from it
        target_dir = str(lineEdit.text().toLocal8Bit())
        # the file name that should be checked for
        file_name = 'some-name.txt'
        # this fails with a UnicodeDecodeError when an umlaut occurs in target_dir
        os.path.exists(os.path.join(target_dir, file_name))
    How would you check if the file exists, when you might encounter German umlauts?
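    A hedged sketch of one way around this, assuming PyQt4 on Python 2 (as the QString/os.path mix suggests): keep the path as a unicode object instead of converting it to locally encoded bytes, so os.path never has to decode anything. The helper name and sample path below are made up for illustration.
        import os

        def file_exists_in(target_dir, file_name=u'some-name.txt'):
            # target_dir should be a unicode object, e.g. unicode(lineEdit.text())
            # under PyQt4 / Python 2, rather than str(lineEdit.text().toLocal8Bit())
            return os.path.exists(os.path.join(target_dir, file_name))

        # the umlaut survives because nothing is decoded from a byte string
        print(file_exists_in(u'/home/user/B\u00fccher'))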

    Read the article

  • SQL Server Manageability Series: how to change the default path of .cache files of a data collector? #sql #mdw #dba

    - by ssqa.net
    How do you change the default path of the .cache files of a data collector after the Management Data Warehouse (MDW) has been set up? This was the question asked by one of the DBAs at a client site. I immediately asked whether any folder had been specified while setting up the MDW, and the obvious answer was no, as the settings were left at the default. This means all the .CACHE files are stored under the %C\TEMP directory, which may pose an out-of-disk-space problem on the server where the MDW is set up to collect. Going back...(read more)

    Read the article

  • WIM2VHD failing with "Cannot derive Volume GUID from mount point."

    - by Jacob
    I'm trying to use WIM2VHD according to the instructions on Scott Hanselman's blog post to create a Sysprepped VHD image to boot from. I've installed the WAIK, and I have my Windows 7 sources mounted as a virtual drive. When I try to run WIM2VHD like this:
        cscript WIM2VHD.wsf /wim:F:\sources\install.wim /sku:Ultimate /vhd:E:\WindowsSeven.vhd /size:30721
    I get the following log:
        Log for WIM2VHD 6.1.7600.0 on 11/2/2009 at 10:51:18.16
        Copyright (C) Microsoft Corporation. All rights reserved.
        MACHINE INFO: Build=7600 Platform=x86fre OS=Windows 7 Ultimate ServicePack= Version=6.1 BuildLab=win7_rtm BuildDate=090713-1255 Language=en-ZA
        INFO: Looking for IMAGEX.EXE...
        INFO: Looking for BCDBOOT.EXE...
        INFO: Looking for BCDEDIT.EXE...
        INFO: Looking for REG.EXE...
        INFO: Looking for DISKPART.EXE...
        INFO: Session key is E01E1ED7-C197-4814-BDE4-43B73E14FCC4
        INFO: Inspecting the WIM...
        INFO: Configuring and formatting the VHD...
        *******************************************************************************
        Error: 0: Cannot derive Volume GUID from mount point.
        *******************************************************************************
        INFO: Unmounting the VHD due to error...
        WARNING: In order to help resolve the issue, temporary files have not been deleted. They are in:
        C:\Users\Jacob\AppData\Local\Temp\WIM2VHD.WSF\E01E1ED7-C197-4814-BDE4-43B73E14FCC4
        Summary: Errors: 1, Warnings: 1, Successes: 0
        INFO: Done.
    Any ideas?

    Read the article

  • How do I write an init script for django-supervisor?

    - by amateur
    Pardon me, as this is my first time attempting to write an init script for CentOS 5. I am using django + supervisor to manage my celery workers and scheduler. Now, this is my naive, simple attempt at /etc/init.d/supervisor:
        #!/bin/sh
        #
        # /etc/rc.d/init.d/supervisord
        #
        # Supervisor is a client/server system that
        # allows its users to monitor and control a
        # number of processes on UNIX-like operating
        # systems.
        #
        # chkconfig: - 64 36
        # description: Supervisor Server
        # processname: supervisord

        # Source init functions
        /home/foo/virtualenv/property_env/bin/python /home/foo/bar/manage.py supervisor --daemonize
    Inside my supervisor.conf:
        [program:celerybeat]
        command=/home/property/virtualenv/property_env/bin/python manage.py celerybeat --loglevel=INFO --logfile=/home/property/property_buyer/logfiles/celerybeat.log

        [program:celeryd]
        command=/home/foo/virtualenv/property_env/bin/python manage.py celeryd --loglevel=DEBUG --logfile=/home/foo/bar/logfiles/celeryd.log --concurrency=1 -E

        [program:celerycam]
        command=/home/foo/virtualenv/property_env/bin/python manage.py celerycam
    I couldn't get it to work:
        2013-08-06 00:21:03,108 INFO exited: celerybeat (exit status 2; not expected)
        2013-08-06 00:21:06,114 INFO spawned: 'celeryd' with pid 11772
        2013-08-06 00:21:06,116 INFO spawned: 'celerycam' with pid 11773
        2013-08-06 00:21:06,119 INFO spawned: 'celerybeat' with pid 11774
        2013-08-06 00:21:06,146 INFO exited: celerycam (exit status 2; not expected)
        2013-08-06 00:21:06,147 INFO gave up: celerycam entered FATAL state, too many start retries too quickly
        2013-08-06 00:21:06,147 INFO exited: celeryd (exit status 2; not expected)
        2013-08-06 00:21:06,152 INFO gave up: celeryd entered FATAL state, too many start retries too quickly
        2013-08-06 00:21:06,152 INFO exited: celerybeat (exit status 2; not expected)
        2013-08-06 00:21:07,153 INFO gave up: celerybeat entered FATAL state, too many start retries too quickly
    I believe it is the init script, but please help me understand what is wrong.
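    As context only (a hedged guess, not a verified fix): "exit status 2" at spawn time often just means the command cannot start in supervisord's environment, and the program sections above call manage.py by a relative path without setting a working directory. A sketch of what setting one might look like, reusing the /home/foo/bar layout from the question and otherwise hypothetical:
        [program:celeryd]
        ; directory= gives the relative manage.py path a working directory;
        ; everything else is copied from the question unchanged.
        directory=/home/foo/bar
        command=/home/foo/virtualenv/property_env/bin/python manage.py celeryd --loglevel=DEBUG --logfile=/home/foo/bar/logfiles/celeryd.log --concurrency=1 -E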

    Read the article

  • Server stops responding, can't find issue?

    - by Corey W
    I've had a pretty basic server up and running CentOS with webserver/database, and have noticed that it has locked up a few times in the middle of the night. It seems to happen randomly. When it locks up I can ssh in, (although it seems to hang once connected), but can't access cpanel/whm and have to reboot the server to get everything back up. Checking the messages log I see the below like clockwork every 5minutes 1 second, and then it just stops logging anything until I reboot. I can't seem to find any log showing any issue? Is there somewhere I can check to try to figure out what is happening? Could this be caused by CPU being maxed? Nov 17 08:01:35 s1 pure-ftpd: (__cpanel__service__auth__ftpd__Q13SKrtaCJCHjBezTfU8Iqmsi@127.0.0.1) [INFO] Logout. Nov 17 08:06:36 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1 Nov 17 08:06:36 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__mxidFBSnQXmR0QzqSxlqrXLIH0CmJ0GPh9bZ5V3 is now l ogged in Nov 17 08:06:37 s1 pure-ftpd: (__cpanel__service__auth__ftpd__mxidBDaCgnqSxlqrXLIH0CmJ0GPh9bZ5V3@127.0.0.1) [INFO] Logout. Nov 17 08:11:37 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1 Nov 17 08:11:38 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__T4B7F71acf1dsdJSeJHdqKNcbOdpzNnN_GttgcM is now l ogged in Nov 17 08:11:38 s1 pure-ftpd: (__cpanel__service__auth__ftpd__T4B7F71acf1KNcbOdpzNnN_GttgcM@127.0.0.1) [INFO] Logout. Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] New connection from 127.0.0.1 Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] __cpanel__service__auth__ftpd__W5C1RzumtaNwe4cU8Lt1 is now logged in Nov 17 08:16:38 s1 pure-ftpd: ([email protected]) [INFO] Logout. Nov 17 09:10:58 s1 kernel: imklog 4.6.2, log source = /proc/kmsg started. Nov 17 09:10:58 s1 rsyslogd: [origin software="rsyslogd" swVersion="4.6.2" x-pid="1094" x-info="http://www.rsyslog.com"] (re)start Nov 17 09:10:58 s1 kernel: Initializing cgroup subsys cpuset

    Read the article

  • How to turn off Tomcat logging in Eclipse?

    - by kirdie
    I develop a Vaadin project in Eclipse that I start through Tomcat 6 which gets started directly by Eclipse. Tomcat prints an enormous amount of log messages though on each start which makes it hard to see the output of my own Application. I have already replaced all log levels in tomcat6/conf/logging.properties by WARNING (e.g. java.util.logging.ConsoleHandler.level = WARNING) but I still get many INFO messages. How can I turn this off or restrict the log messages to WARNING? An example of the messages Okt 26, 2012 12:16:36 PM org.apache.catalina.core.AprLifecycleListener init INFO: Loaded APR based Apache Tomcat Native library 1.1.24. Okt 26, 2012 12:16:36 PM org.apache.catalina.core.AprLifecycleListener init INFO: APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true]. Okt 26, 2012 12:16:36 PM org.apache.tomcat.util.digester.SetPropertiesRule begin WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.j2ee.server:saim' did not find a matching property. Okt 26, 2012 12:16:37 PM org.apache.coyote.http11.Http11AprProtocol init INFO: Initializing Coyote HTTP/1.1 on http-8080 Okt 26, 2012 12:16:37 PM org.apache.coyote.ajp.AjpAprProtocol init INFO: Initializing Coyote AJP/1.3 on ajp-8009 Okt 26, 2012 12:16:37 PM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 879 ms Okt 26, 2012 12:16:37 PM org.apache.catalina.core.StandardService start INFO: Starting service Catalina Okt 26, 2012 12:16:37 PM org.apache.catalina.core.StandardEngine start INFO: Starting Servlet Engine: Apache Tomcat/6.0.32 Okt 26, 2012 12:16:37 PM org.apache.coyote.http11.Http11AprProtocol start INFO: Starting Coyote HTTP/1.1 on http-8080 Okt 26, 2012 12:16:37 PM org.apache.coyote.ajp.AjpAprProtocol start INFO: Starting Coyote AJP/1.3 on ajp-8009 Okt 26, 2012 12:16:37 PM org.apache.catalina.startup.Catalina start INFO: Server startup in 568 ms
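    One hedged note for context (worth verifying against your setup): a Tomcat instance launched by Eclipse WTP does not automatically read tomcat6/conf/logging.properties, so editing that file alone may change nothing. The usual approach is to point the server launch configuration at it explicitly via VM arguments along these lines (paths are examples):
        -Djava.util.logging.config.file=/path/to/tomcat6/conf/logging.properties
        -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
    and then raise the root level in that file, e.g. ".level = WARNING", in addition to the java.util.logging.ConsoleHandler.level = WARNING you already set.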

    Read the article

  • After connecting wlan0 to bridge interface (and then removing it), can't connect to AP

    - by gmonk
    I'm on a laptop running Debian Jessie with kernel 3.13-1-amd64; lspci shows that my wireless NIC + driver is 04:00.0 Network controller: Intel Corporation Wireless 3160 (rev 83) Subsystem: Intel Corporation Dual Band Wireless-AC 3160 Kernel driver in use: iwlwifi This has been working without any problems, until I tried creating a bridge for lxc containers to use. I did the same thing as this person here: How-to set up a network bridge on a laptop for LXC use? -- and ended up having the same problem as this poster did, so I decided to "undo" my actions. This hasn't been successful. Actions taken so far: To configure the bridge: #> ip link add type veth #> iw dev wlan0 set 4addr on #> ifconfig veth0 up #> brctl addbr br0 #> brctl addif br0 wlan0 #> brctl addif br0 veth0 #> ifconfig br0 192.168.0.4/24 #> ifconfig wlan0 0.0.0.0 To "deconfigure": #> brctl delif br0 wlan0 #> brctl delif br0 veth0 #> iw dev wlan0 set 4addr off #> ifconfig veth0 down #> ifconfig wlan0 down #> ifconfig br0 down #> brctl delbr br0 Now, dmesg and /var/log/syslog show repeated attempts at connecting to the AP that was working before, which fail after authentication: May 27 09:16:01 myhostname kernel: [11350.757172] wlan0: authenticate with 00:18:f8:54:a3:d6 May 27 09:16:01 myhostname kernel: [11350.759036] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating May 27 09:16:01 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:01 myhostname kernel: [11350.762615] wlan0: authenticated May 27 09:16:01 myhostname kernel: [11350.762753] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP May 27 09:16:01 myhostname kernel: [11350.762755] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP May 27 09:16:01 myhostname kernel: [11350.765080] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating May 27 09:16:01 myhostname kernel: [11350.767474] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=12 aid=0) May 27 09:16:01 myhostname kernel: [11350.767476] wlan0: 00:18:f8:54:a3:d6 denied association (code=12) May 27 09:16:01 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-ASSOC-REJECT bssid=00:18:f8:54:a3:d6 status_code=12 May 27 09:16:01 myhostname kernel: [11350.788475] wlan0: deauthenticating from 00:18:f8:54:a3:d6 by local choice (reason=3) May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> disconnected May 27 09:16:01 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning May 27 09:16:02 myhostname dhclient: DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 14 May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:04 myhostname kernel: [11354.559579] wlan0: authenticate with 00:18:f8:54:a3:d6 May 27 09:16:04 myhostname kernel: [11354.561458] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:04 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> 
associating May 27 09:16:04 myhostname kernel: [11354.563445] wlan0: authenticated May 27 09:16:04 myhostname kernel: [11354.563631] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP May 27 09:16:04 myhostname kernel: [11354.563633] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP May 27 09:16:04 myhostname kernel: [11354.565727] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:04 myhostname wpa_supplicant[8946]: wlan0: Associated with 00:18:f8:54:a3:d6 May 27 09:16:04 myhostname kernel: [11354.568091] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=0 aid=9) May 27 09:16:04 myhostname kernel: [11354.569030] wlan0: associated May 27 09:16:04 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> associated May 27 09:16:05 myhostname kernel: [11354.978204] wlan0: deauthenticated from 00:18:f8:54:a3:d6 (Reason: 15) May 27 09:16:05 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:18:f8:54:a3:d6 reason=15 May 27 09:16:05 myhostname kernel: [11354.992729] cfg80211: Calling CRDA to update world regulatory domain May 27 09:16:05 myhostname kernel: [11354.995004] cfg80211: World regulatory domain updated: May 27 09:16:05 myhostname kernel: [11354.995005] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) May 27 09:16:05 myhostname kernel: [11354.995006] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995007] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995007] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995008] cfg80211: (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995009] cfg80211: (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm) May 27 09:16:05 myhostname kernel: [11354.995010] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm) May 27 09:16:05 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associated -> disconnected May 27 09:16:05 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:09 myhostname kernel: [11358.763968] wlan0: authenticate with 00:18:f8:54:a3:d6 May 27 09:16:09 myhostname kernel: [11358.765796] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:09 myhostname kernel: [11358.769957] wlan0: authenticated May 27 09:16:09 myhostname kernel: [11358.770102] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP May 27 09:16:09 myhostname kernel: [11358.770104] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP May 27 09:16:09 myhostname kernel: [11358.770846] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:09 myhostname kernel: [11358.773358] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=12 aid=0) May 27 09:16:09 myhostname kernel: [11358.773361] wlan0: 00:18:f8:54:a3:d6 
denied association (code=12) May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating May 27 09:16:09 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-ASSOC-REJECT bssid=00:18:f8:54:a3:d6 status_code=12 May 27 09:16:09 myhostname kernel: [11358.802187] wlan0: deauthenticating from 00:18:f8:54:a3:d6 by local choice (reason=3) May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> disconnected May 27 09:16:09 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: SME: Trying to authenticate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:12 myhostname kernel: [11362.573442] wlan0: authenticate with 00:18:f8:54:a3:d6 May 27 09:16:12 myhostname kernel: [11362.575270] wlan0: send auth to 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> authenticating May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: Trying to associate with 00:18:f8:54:a3:d6 (SSID='myaccesspoint' freq=2437 MHz) May 27 09:16:12 myhostname kernel: [11362.580334] wlan0: authenticated May 27 09:16:12 myhostname kernel: [11362.580503] iwlwifi 0000:04:00.0 wlan0: disabling HT as WMM/QoS is not supported by the AP May 27 09:16:12 myhostname kernel: [11362.580516] iwlwifi 0000:04:00.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP May 27 09:16:12 myhostname kernel: [11362.583508] wlan0: associate with 00:18:f8:54:a3:d6 (try 1/3) May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: authenticating -> associating May 27 09:16:12 myhostname wpa_supplicant[8946]: wlan0: Associated with 00:18:f8:54:a3:d6 May 27 09:16:12 myhostname kernel: [11362.585908] wlan0: RX AssocResp from 00:18:f8:54:a3:d6 (capab=0x411 status=0 aid=9) May 27 09:16:12 myhostname kernel: [11362.586781] wlan0: associated May 27 09:16:12 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associating -> associated May 27 09:16:13 myhostname kernel: [11362.947693] wlan0: deauthenticated from 00:18:f8:54:a3:d6 (Reason: 15) May 27 09:16:13 myhostname wpa_supplicant[8946]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:18:f8:54:a3:d6 reason=15 May 27 09:16:13 myhostname kernel: [11362.973461] cfg80211: Calling CRDA to update world regulatory domain May 27 09:16:13 myhostname kernel: [11362.975673] cfg80211: World regulatory domain updated: May 27 09:16:13 myhostname kernel: [11362.975675] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) May 27 09:16:13 myhostname kernel: [11362.975676] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975677] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975678] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975678] cfg80211: (5170000 KHz - 5250000 KHz @ 80000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975679] cfg80211: (5735000 KHz - 5835000 KHz @ 80000 KHz), (N/A, 2000 mBm) May 27 09:16:13 myhostname kernel: [11362.975679] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 0 mBm) May 27 09:16:13 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: associated -> 
disconnected May 27 09:16:13 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: disconnected -> scanning May 27 09:16:14 myhostname NetworkManager[13992]: <warn> Activation (wlan0/wireless): association took too long. May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): device state change: config -> failed (reason 'no-secrets') [50 120 7] May 27 09:16:14 myhostname NetworkManager[13992]: <info> Marking connection 'Auto myaccesspoint' invalid. May 27 09:16:14 myhostname NetworkManager[13992]: <warn> Activation (wlan0) failed for connection 'Auto myaccesspoint' May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): device state change: failed -> disconnected (reason 'none') [120 30 0] May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): deactivating device (reason 'none') [0] May 27 09:16:14 myhostname NetworkManager[13992]: <info> (wlan0): supplicant interface state: scanning -> disconnected The things that jump out at me are "deauthenticating ... by local choice( reason=3)" and the lines that contain "(reason=15)". I've tried various fixes: iwconfig wlan0 power off killing wpa_supplicant connecting with iwconfig + dhclient instead of gnome's network -manager explicitly configuring wlan0 in /etc/network/interfaces creating a /etc/wpa_supplicant.conf file ...but nothing seems to work. I'm not sure what I did wrong, or what step I've skipped in trying to get wlan0 back as a non-bridged device -- I removed it from the bridge and then deleted the bridge itself. Any ideas?

    Read the article

  • Use preforker(ruby gem) with supervisor

    - by user1548832
    I also asked same question on stackoverflow.com http://stackoverflow.com/questions/13871169/use-preforkerruby-gem-with-supervisor But, superuser.com might much help to me. Can anyone amswer this? I want to run a server program using preforker ruby gem with supervisor. But error has occured. I wrote a following test program using preforker. #!/usr/bin/env ruby require 'rubygems' require 'preforker' Preforker.new(:app_name => 'test-preforker', :timeout => 60, :workers => 1) do |master| while master.wants_me_alive? do puts "hello" sleep 10 end end.run And a following supervisor config. [program:test-preforker] command=/home/tkono/tmp/test-preforker.rb stdout_logfile_maxbytes=1MB stderr_logfile_maxbytes=1MB stdout_logfile=/var/log/%(program_name)s.log stderr_logfile=/var/log/%(program_name)s.log autorestart=true Then, reload supervisor. # supervisorctl reload Restarted supervisord Here is the log file of supervisor. 2012-12-13 17:50:47,161 CRIT Supervisor running as root (no user in config file) 2012-12-13 17:50:47,163 WARN Included extra file "/etc/supervisor.d/test-preforker.ini" during parsing 2012-12-13 17:50:47,209 INFO RPC interface 'supervisor' initialized 2012-12-13 17:50:47,213 CRIT Server 'unix_http_server' running without any HTTP authentication checking 2012-12-13 17:50:47,215 INFO supervisord started with pid 12437 2012-12-13 17:50:48,231 INFO spawned: 'test-preforker' with pid 12440 2012-12-13 17:50:48,233 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:49,248 INFO spawned: 'test-preforker' with pid 12441 2012-12-13 17:50:49,261 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:51,267 INFO spawned: 'test-preforker' with pid 12442 2012-12-13 17:50:51,284 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:54,305 INFO spawned: 'test-preforker' with pid 12443 2012-12-13 17:50:54,308 INFO exited: test-preforker (exit status 1; not expected) 2012-12-13 17:50:55,311 INFO gave up: test-preforker entered FATAL state, too many start retries too quickly Please tell me what is wrong? A program using preforker cannot run with supervisor? preforker https://github.com/dcadenas/preforker supervisor http://supervisord.org/index.html

    Read the article

  • Can not execute Sonar

    - by senzacionale
    my pom.xml <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>RCC</groupId> <artifactId>tmp</artifactId> <name>tmp</name> <version>1.0</version> <build> <sourceDirectory>C:\Projekti\KIS\Model\src </sourceDirectory> <outputDirectory>C:\Projekti\KIS\Model\classes</outputDirectory> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.5</source> <target>1.5</target> <excludes> <exclude>**/*.*</exclude> </excludes> </configuration> </plugin> </plugins> </build> <properties> <sonar.dynamicAnalysis>false</sonar.dynamicAnalysis> </properties> </project> running sonnar C:\Projekti\Metrics>mvn sonar:sonar -e + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'sonar'. [INFO] ------------------------------------------------------------------------ [INFO] Building tmp [INFO] task-segment: [sonar:sonar] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] [sonar:sonar {execution: default-cli}] [INFO] Sonar host: http://localhost:9000 [INFO] Sonar version: 2.1.2 [INFO] [sonar-core:internal {execution: default-internal}] [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Can not execute Sonar Embedded error: Can not analyze the project org.apache.derby.jdbc.ClientDriver [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: Can not execute Sonar at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: Can not execute Sonar at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:87) at 
org.codehaus.mojo.sonar.Bootstraper.start(Bootstraper.java:65) at org.codehaus.mojo.sonar.SonarMojo.execute(SonarMojo.java:117) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.plugin.MojoExecutionException: Can not analyze the project at org.sonar.maven2.BatchMojo.executeBatch(BatchMojo.java:152) at org.sonar.maven2.BatchMojo.execute(BatchMojo.java:131) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:82) ... 21 more Caused by: org.picocontainer.PicoLifecycleException: PicoLifecycleException: method 'public void org.sonar.api.database.AbstractDatabaseConnector.start()', instance 'org.sonar.api.database.DriverDatabaseConnector@c87621, java.lang.RuntimeException: wrapper at org.picocontainer.monitors.NullComponentMonitor.lifecycleInvocationFailed(NullComponentMonitor.java:77) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:132) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:115) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89) at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84) at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169) at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132) at org.picocontainer.behaviors.Stored.start(Stored.java:110) at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:996) at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:989) at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:746) at org.sonar.batch.AggregatorBatch.execute(AggregatorBatch.java:84) at org.sonar.maven2.BatchMojo.executeBatch(BatchMojo.java:149) ... 24 more Caused by: java.lang.RuntimeException: wrapper at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:130) ... 35 more Caused by: org.sonar.api.database.DatabaseException: Cannot open connection to database: SQL driver not found org.apache.derby.jdbc.ClientDriver at org.sonar.api.database.AbstractDatabaseConnector.testConnection(AbstractDatabaseConnector.java:182) at org.sonar.api.database.AbstractDatabaseConnector.start(AbstractDatabaseConnector.java:94) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110) ... 34 more Caused by: java.sql.SQLException: SQL driver not found org.apache.derby.jdbc.ClientDriver at org.sonar.api.database.DriverDatabaseConnector.getConnection(DriverDatabaseConnector.java:70) at org.sonar.api.database.AbstractDatabaseConnector.testConnection(AbstractDatabaseConnector.java:178) ... 
40 more Caused by: java.lang.ClassNotFoundException: org.apache.derby.jdbc.ClientDriver at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at org.codehaus.classworlds.RealmClassLoader.loadClassDirect(RealmClassLoader.java:195) at org.codehaus.classworlds.DefaultClassRealm.loadClass(DefaultClassRealm.java:255) at org.codehaus.classworlds.DefaultClassRealm.loadClass(DefaultClassRealm.java:274) at org.codehaus.classworlds.RealmClassLoader.loadClass(RealmClassLoader.java:214) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:164) at org.sonar.api.database.DriverDatabaseConnector.getConnection(DriverDatabaseConnector.java:68) ... 41 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Wed Jun 09 12:12:55 CEST 2010 [INFO] Final Memory: 12M/23M [INFO] ------------------------------------------------------------------------ any idea why not working Sonar with maven?

    Read the article

  • Android - Key Dispatching Timed Out

    - by Donal Rafferty
    In my Android application I am getting a very strange crash, when I press a button (Image) on my UI the entire application freezes and after a couple of seconds I getthe dreaded force close dialog appearing. Here is what gets printed in the log: WARN/WindowManager(88): Key dispatching timed out sending to package name/Activity WARN/WindowManager(88): Dispatch state: {{KeyEvent{action=1 code=5 repeat=0 meta=0 scancode=231 mFlags=8} to Window{432bafa0 com.android.launcher/com.android.launcher.Launcher paused=false} @ 1281611789339 lw=Window{432bafa0 com.android.launcher/com.android.launcher.Launcher paused=false} lb=android.os.BinderProxy@431ee8e8 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{4335fc58 package name/Activity paused=false}}} WARN/WindowManager(88): Current state: {{null to Window{4335fc58 package name/Activity paused=false} @ 1281611821193 lw=Window{4335fc58 package name/Activity paused=false} lb=android.os.BinderProxy@434c9bd0 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{4335fc58 package name/Activity paused=false}}} INFO/ActivityManager(88): ANR in process: package name (last in package name) INFO/ActivityManager(88): Annotation: keyDispatchingTimedOut INFO/ActivityManager(88): CPU usage: INFO/ActivityManager(88): Load: 5.18 / 5.1 / 4.75 INFO/ActivityManager(88): CPU usage from 7373ms to 1195ms ago: INFO/ActivityManager(88): package name: 6% = 1% user + 5% kernel / faults: 7 minor INFO/ActivityManager(88): system_server: 5% = 4% user + 1% kernel / faults: 27 minor INFO/ActivityManager(88): tiwlan_wifi_wq: 3% = 0% user + 3% kernel INFO/ActivityManager(88): mediaserver: 0% = 0% user + 0% kernel INFO/ActivityManager(88): logcat: 0% = 0% user + 0% kernel INFO/ActivityManager(88): TOTAL: 12% = 5% user + 6% kernel + 0% softirq INFO/ActivityManager(88): Removing old ANR trace file from /data/anr/traces.txt INFO/Process(88): Sending signal. PID: 1812 SIG: 3 INFO/dalvikvm(1812): threadid=7: reacting to signal 3 INFO/dalvikvm(1812): Wrote stack trace to '/data/anr/traces.txt' This is the code for the Button (Image): findViewById(R.id.endcallimage).setOnClickListener(new OnClickListener() { public void onClick(View v) { mNotificationManager.cancel(2); Log.d("Handler", "Endcallimage pressed"); if(callConnected) elapsedTimeBeforePause = SystemClock.elapsedRealtime() - stopWatch.getBase(); try { serviceBinder.endCall(lineId); } catch (RemoteException e) { e.printStackTrace(); } dispatchKeyEvent(new KeyEvent(KeyEvent.ACTION_DOWN,KeyEvent.FLAG_SOFT_KEYBOARD)); dispatchKeyEvent(new KeyEvent(KeyEvent.ACTION_UP, KeyEvent.KEYCODE_BACK)); } }); If I comment the following out the pressing of the button (image) doesn't cause the crash: try { serviceBinder.endCall(lineId); } catch (RemoteException e) { e.printStackTrace(); } The above code calls down through several levels of the app and into the native layer (NDK), could the call passing through several objects be leading to the force close? It seems unlikely as several other buttons do the same without issue. How about the native layer? Could some code I've built with the NDK be causing the issue? Any other ideas as to what the cause of the issue might be?

    Read the article

  • How do I specify the manifest for side-by-side assemblies in my header file?

    - by sep
    I am developing in Visual C++ 2008 using MSMQ. In Windows Vista, the application cannot locate mqrt.dll, which is found at C:\Windows\winsxs\x86_microsoft-windows-msmq-runtime-core_31bf3856ad364e35_6.0.6002.18005_none_574cf1cdb624ee17\mqrt.dll. The description of the manifest in WinSxS is:
    <assembly xmlns="urn:schemas-microsoft-com:asm.v3" manifestVersion="1.0" description="MSMQ core runtime component." displayName="MSMQ Core runtime component" company="Microsoft" copyright="Copyright (c) Microsoft Corporation. All Rights Reserved." creationTimeStamp="2005-03-11T01:47:18" lastUpdateTimeStamp="2005-03-11T01:48:59">
    <assemblyIdentity name="Microsoft-Windows-msmq-runtime-core" version="6.0.6002.18005" processorArchitecture="x86" language="neutral" buildType="release" publicKeyToken="31bf3856ad364e35" versionScope="nonSxS" />
    I added a #pragma comment to my header file:
    #pragma comment(linker, "\"/manifestdependency:name='Microsoft-Windows-msmq-runtime-core' version='6.0.6002.18005' processorArchitecture='x86' publicKeyToken='31bf3856ad364e35' language='neutral'\"")
    The manifest is embedded into the exe using mt.exe, but it does not work. The error message in sxstrace is:
    INFO: Resolving reference Microsoft-Windows-msmq-runtime-core,processorArchitecture="x86",publicKeyToken="31bf3856ad364e35",version="6.0.6002.18005".
    INFO: Resolving reference for ProcessorArchitecture x86.
    INFO: Resolving reference for culture Neutral.
    INFO: Applying Binding Policy.
    INFO: No publisher policy found.
    INFO: No binding policy redirect found.
    INFO: Begin assembly probing.
    INFO: Did not find the assembly in WinSxS.
    INFO: Attempt to probe manifest at C:\Windows\assembly\GAC_32\Microsoft-Windows-msmq-runtime-core\6.0.6002.18005__31bf3856ad364e35\Microsoft-Windows-msmq-runtime-core.DLL.
    INFO: Attempt to probe manifest at c:\qt\datamon\bin\Microsoft-Windows-msmq-runtime-core.DLL.
    INFO: Attempt to probe manifest at c:\qt\datamon\bin\Microsoft-Windows-msmq-runtime-core.MANIFEST.
    INFO: Attempt to probe manifest at c:\qt\datamon\bin\Microsoft-Windows-msmq-runtime-core\Microsoft-Windows-msmq-runtime-core.DLL.
    INFO: Attempt to probe manifest at c:\qt\datamon\bin\Microsoft-Windows-msmq-runtime-core\Microsoft-Windows-msmq-runtime-core.MANIFEST.
    INFO: Did not find manifest for culture Neutral.
    INFO: End assembly probing.
    ERROR: Cannot resolve reference Microsoft-Windows-msmq-runtime-core,processorArchitecture="x86",publicKeyToken="31bf3856ad364e35",version="6.0.6002.18005".
    ERROR: Activation Context generation failed.
    I tried the following pragma, but WinSxS does not even try to resolve MSMQ (probably because of the versionScope attribute):
    #pragma comment(linker, "\"/manifestdependency:name='Microsoft-Windows-msmq-runtime-core' version='6.0.6002.18005' processorArchitecture='x86' publicKeyToken='31bf3856ad364e35' language='neutral' buildType='release' versionScope='nonSxS'\"")
    What is the correct pragma to use?

    Read the article

  • ASP.NET: Large number of Session_Start with same session id

    - by Jaap
    I'm running an ASP.NET website on my development box (.NET 2.0 on Vista/IIS7). The Session_Start method in global.asax.cs logs every call to a file (log4net), and the Session_End method also logs every call. I'm using InProc session state and set the session timeout to 5 minutes (to avoid waiting for 20). I hit the website, wait for 5 minutes until I see the Session_End logging, and then press F5. The browser still has the session cookie and sends it to the server. Session_Start is called and a new session is created using the same session id (by the way: I need this to be the same session id, because it is used to store data in the database). Result: every time I hit F5 on a previously ended session, the Session_Start method is called. When I open a different browser, the Session_Start method is called just once; then, after 5 minutes, Session_End fires and each F5 causes the Session_Start method to execute again. Can anyone explain why this is happening? Update: after the session timeout, all subsequent requests have a session start and a session end. So in the end my question is: why are the sessions on these subsequent requests closed immediately?
    2010-02-09 14:49:08,754 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
    2010-02-09 14:49:08,754 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET http://localhost:80/js/settings.js
    2010-02-09 14:49:08,756 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
    2010-02-09 14:49:08,760 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
    2010-02-09 14:49:08,760 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=core
    2010-02-09 14:49:08,761 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
    2010-02-09 14:49:08,762 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
    2010-02-09 14:49:08,762 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /js/package.aspx?name=all
    2010-02-09 14:49:08,763 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
    2010-02-09 14:49:08,763 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
    2010-02-09 14:49:08,763 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=rest
    2010-02-09 14:49:08,764 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
    2010-02-09 14:49:08,764 INFO Global.asax[7486] [(null)] - Session started. SID=nzponumvf1hbaniverffp4mq host=127.0.0.1
    2010-02-09 14:49:08,765 INFO Global.asax[7486] [nzponumvf1hbaniverffp4mq] - Request start: GET /css/package.aspx?name=vacation
    2010-02-09 14:49:08,765 INFO Global.asax[7486] [(null)] - Session ended. SID=nzponumvf1hbaniverffp4mq
    web.config relevant section:
    <system.web>
        <compilation debug="true" />
        <sessionState timeout="2" regenerateExpiredSessionId="false" />
    </system.web>

    Read the article

  • Sonar default, meet "container state was: CONSTRUCTED"

    - by larry cai
    Environment: Hudson, Sonar and Maven 2 running locally on Ubuntu with default parameters. I got the log below from Hudson, and I can't figure out where the problem is.
    [INFO] Sonar host: http://localhost:9000
    [INFO] Sonar version: 2.0.1
    [INFO] [sonar-core:internal {execution: default-internal}]
    [INFO] Database dialect class org.sonar.api.database.dialect.Derby
    [INFO] ------------- Analyzing Game of Life business logic module
    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD ERROR
    [INFO] ------------------------------------------------------------------------
    [INFO] Can not execute Sonar
    Embedded error: Can not analyze the project
    Cannot stop. Current container state was: CONSTRUCTED
    [INFO] ------------------------------------------------------------------------
    [INFO] Trace
    org.apache.maven.lifecycle.LifecycleExecutionException: Can not execute Sonar
    I also notice the same problem when I run mvn sonar:sonar from the command line, without Hudson.

    Read the article

  • Spring transaction: Transaction not active

    - by Videanu Adrian
    i develop a app using struts2, spring 3.1, Jpa2 and Hibernate. From Spring i use transactions and IoC. so, i have an ajax code block that calls for a struts2 action every second (this is happening for every user that is logged into application (simultaneous users are around 20-30 at a time)). this action name is PopupAction public class PopupAction extends VActionBase implements ServletRequestAware { private static final long serialVersionUID = -293004532677112584L; private iIntermedService intermedService; private HttpServletRequest servletRequest; @Override public String execute() { Integer agentId = (Integer) session.get("USER_AGENT_ID"); Intermed iObj; try { iObj = intermedService.getIntermed(agentId,locationsString); } catch (Exception e) { logger.error("Cannot get Intermed!!! "+e.getMessage()); return ERROR; } return SUCCESS; } } and then i have the service class : @Transactional(readOnly=true) public class IntermedServiceImpl extends GenericIService<Intermed, Integer> implements iIntermedService { @Override public Intermed getIntermed (int agentId,String queueIds) throws Exception { Intermed intermedObj = null; //TODO - find a better implementation for this queueIds parameter!!!! try{ String sql = "SELECT i FROM bla bla bla.....)"; Query q = this.em.createQuery(sql); List<Intermed> iList = q.getResultList(); if (iList.size() == 1){ intermedObj = (Intermed) iList.get(0); //get latest object from DB em.refresh(intermedObj); } }catch(Exception e){ e.printStackTrace(); logger.error(e.getCause()+e.getMessage()); throw e; } return intermedObj; } } here is the spring configuration : <bean id="emfI" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="inboundDS" /> <property name="persistenceUnitName" value="I2PU"/> <!-- GlassFish load-time weaving setup --> <property name="loadTimeWeaver"> <bean class="org.springframework.instrument.classloading.glassfish.GlassFishLoadTimeWeaver"/> </property> </bean> <tx:annotation-driven transaction-manager="txManagerI" /> <tx:advice id="txManagerInboundAdvice" transaction-manager="txManagerI"> <tx:attributes> <tx:method name="*" rollback-for="java.lang.Exception"/> </tx:attributes> </tx:advice> I have names for transactionManager because i have 3 datasources and 3 transaction managers. the problem is that my glassfish logs are full of messages like these: -- removed in order to be able to add more recent logs -- So the cause is : Caused by: java.lang.IllegalStateException: Transaction not active. But i have no idea what can cause this. Any help ? thanks Updates So i have added to @Transactional annotation the transaction manager name that he has to use, but this still does not solved my problem. 
I have captured a log from the time that the transaction is created until i got that exception: 2012-02-08T15:08:55.954+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractBeanFactory.java:245) - Returning cached instance of singleton bean 'txManagerVA' 2012-02-08T15:08:55.962+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:365) - Creating new transaction with name [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT,readOnly; '',-java.lang.Exception 2012-02-08T15:08:55.967+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:368) - Opened new EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] for JPA transaction 2012-02-08T15:08:55.976+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:400) - Exposing JPA transaction as JDBC transaction [org.springframework.orm.jpa.vendor.HibernateJpaDialect$HibernateConnectionHandle@725b979b] 2012-02-08T15:08:55.977+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.978+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.979+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:272) - Initializing transaction synchronization 2012-02-08T15:08:55.980+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:362) - Getting transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed] 2012-02-08T15:08:55.983+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:423) - Starting resource local transaction on application-managed EntityManager [org.hibernate.ejb.EntityManagerImpl@46d002f4] 2012-02-08T15:08:55.984+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:193) - Bound value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] to thread [thread-pool-1-80(80)] 2012-02-08T15:08:55.986+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (ExtendedEntityManagerCreator.java:400) - Joined local transaction 2012-02-08T15:08:55.991+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionAspectSupport.java:391) - Completing transaction for [xxx.vs.common.services.inbound.IntermedServiceImpl.getIntermed] 2012-02-08T15:08:55.992+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:922) - Triggering beforeCommit synchronization 2012-02-08T15:08:55.994+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:935) - Triggering beforeCompletion 
synchronization 2012-02-08T15:08:56.001+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization@797add43] for key [org.hibernate.ejb.EntityManagerImpl@46d002f4] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.002+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:752) - Initiating transaction commit 2012-02-08T15:08:56.003+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:507) - Committing JPA transaction on EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] 2012-02-08T15:08:56.008+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:948) - Triggering afterCommit synchronization 2012-02-08T15:08:56.010+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (AbstractPlatformTransactionManager.java:964) - Triggering afterCompletion synchronization 2012-02-08T15:08:56.011+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:331) - Clearing transaction synchronization 2012-02-08T15:08:56.012+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.orm.jpa.EntityManagerHolder@112c6483] for key [org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean@47d4f12f] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (TransactionSynchronizationManager.java:243) - Removed value [org.springframework.jdbc.datasource.ConnectionHolder@4fb57177] for key [com.sun.gjc.spi.jdbc40.DataSource40@75fa4851] from thread [thread-pool-1-80(80)] 2012-02-08T15:08:56.021+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (JpaTransactionManager.java:593) - Closing JPA EntityManager [org.hibernate.ejb.EntityManagerImpl@edf83f9] after transaction 2012-02-08T15:08:56.022+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|DEBUG [thread-pool-1-80(80)] (EntityManagerFactoryUtils.java:343) - Closing JPA EntityManager 2012-02-08T15:08:56.023+0200|INFO||_ThreadID=184;_ThreadName=Thread-5;|ERROR [thread-pool-1-80(80)] (PopupAction.java:39) - Cannot get Intermed!!! 
Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active 2012-02-08T15:08:56.024+0200|SEVERE||_ThreadID=184;_ThreadName=Thread-5;|org.springframework.dao.InvalidDataAccessApiUsageException: Transaction not active; nested exception is java.lang.IllegalStateException: Transaction not active at org.springframework.orm.jpa.EntityManagerFactoryUtils.convertJpaAccessExceptionIfPossible(EntityManagerFactoryUtils.java:298) at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:106) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.convertException(ExtendedEntityManagerCreator.java:501) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:481) at org.springframework.transaction.support.TransactionSynchronizationUtils.invokeAfterCommit(TransactionSynchronizationUtils.java:133) at org.springframework.transaction.support.TransactionSynchronizationUtils.triggerAfterCommit(TransactionSynchronizationUtils.java:121) at org.springframework.transaction.support.AbstractPlatformTransactionManager.triggerAfterCommit(AbstractPlatformTransactionManager.java:950) at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:796) at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723) at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202) at $Proxy325.getIntermed(Unknown Source) at xxx.vs.common.actions.PopupAction.execute(PopupAction.java:37) at sun.reflect.GeneratedMethodAccessor1581.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:453) at com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:292) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:255) at org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:256) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:176) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:265) at org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at 
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:138) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:211) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:190) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:90) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:243) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:100) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:141) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:145) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:171) at com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:98) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:176) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:192) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:187) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at xxx.vs.common.utils.AuthenticationInterceptor.intercept(AuthenticationInterceptor.java:78) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at 
com.googlecode.sslplugin.interceptors.SSLInterceptor.intercept(SSLInterceptor.java:128) at com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:249) at org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:54) at org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:510) at org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77) at org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:217) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655) at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595) at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:98) at com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:91) at org.apache.catalina 2012-02-08T15:08:56.024+0200|SEVERE||_ThreadID=184;_ThreadName=Thread-5;|.core.StandardHostValve.invoke(StandardHostValve.java:162) at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:330) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:231) at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:174) at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:828) at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:725) at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1019) at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225) at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104) at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90) at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79) at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54) at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59) at com.sun.grizzly.ContextTask.run(ContextTask.java:71) at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532) at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513) at java.lang.Thread.run(Thread.java:679) Caused by: java.lang.IllegalStateException: Transaction not active at org.hibernate.ejb.TransactionImpl.commit(TransactionImpl.java:69) at org.springframework.orm.jpa.ExtendedEntityManagerCreator$ExtendedEntityManagerSynchronization.afterCommit(ExtendedEntityManagerCreator.java:478) ... 93 more so again..... any ideea ?
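What the DEBUG trace shows is Spring opening and committing a JPA transaction through JpaTransactionManager while the service also holds an application-managed (extended) EntityManager that starts its own resource-local transaction (the ExtendedEntityManagerSynchronization frames); the IllegalStateException fires when that synchronization tries to commit in afterCommit and finds no active transaction any more. One direction worth checking, sketched below as an assumption rather than a confirmed fix, is to let Spring inject a plain transaction-scoped EntityManager so the service only participates in the transaction opened by txManagerI instead of managing a second one. The persistence unit name I2PU and the type names come from the question; the rest, including the placeholder JPQL (the real query was elided above), is illustrative:

    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;
    import javax.persistence.Query;
    import org.springframework.transaction.annotation.Transactional;

    @Transactional(value = "txManagerI", readOnly = true)
    public class IntermedServiceImpl implements iIntermedService {

        // A shared, transaction-scoped proxy: it joins the transaction that
        // JpaTransactionManager has opened rather than running its own
        // resource-local transaction on an extended EntityManager.
        @PersistenceContext(unitName = "I2PU")
        private EntityManager em;

        @Override
        public Intermed getIntermed(int agentId, String queueIds) throws Exception {
            // Placeholder JPQL; the actual query text is elided in the post above.
            Query q = em.createQuery("SELECT i FROM Intermed i");
            List<?> results = q.getResultList();
            return results.size() == 1 ? (Intermed) results.get(0) : null;
        }
    }

If the extended EntityManager is intentional (for example to keep entities attached across requests), the alternative is to make sure it is created and used inside the same Spring-managed transaction, but the shared @PersistenceContext form is the simpler first thing to try.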

    Read the article

  • Encode URL while sending Ajax

    - by meotimdihia
    I use CakePHP and it generates this Ajax URL:
    /animemanga/animes/search/page:1?type%5B0%5D=3&amp;genre%5B0%5D=20&amp;genre%5B1%5D=4&amp;info%5B0%5D=episodes&amp;info%5B1%5D=released&amp;info%5B2%5D=rating&amp;info%5B3%5D=synopsis&amp;info%5B4%5D=completed&amp;info%5B5%5D=rating_count&amp;info%5B6%5D=name&amp;info%5B7%5D=id&amp;info%5B8%5D=name&amp;info%5B9%5D=id
    CakePHP has encoded & as &amp;, which causes an error when the URL is used with Ajax. I am using jQuery, in the Opera browser. How can this be solved?

    Read the article

  • How do I set PATH variables for all users on a server?

    - by Rob S.
    I just finished installing LaTeX for my company's Ubuntu server that we all SSH into to use. At the end of the install it says this: Add /usr/local/texlive/2010/texmf/doc/man to MANPATH, if not dynamically determined. Add /usr/local/texlive/2010/texmf/doc/info to INFOPATH. Most importantly, add /usr/local/texlive/2010/bin/x86_64-linux to your PATH for current and future sessions. So, my question is simply: How do I do this so that these variables are set for all users on the system? (And yes, I have sudo permissions). Thanks in advance to any and all responses I receive.

    Read the article

  • Wireless suddenly dropping with a Ralink RT2870

    - by cwwk
    I have a Linksys WUSB600N v1 Dual-Band Wireless-N Network Adapter Ralink RT2870 USB dongle that worked flawlessly in 11.10. Since upgrading, I can't keep a connection for more than five minutes. The wild world of Google was unable to provide a solution, and I would rather not downgrade although that remains a possibility. Results of syslog: slack@slack:~$ tail /var/log/syslog Apr 26 20:26:10 slack AptDaemon: INFO: Initializing daemon Apr 26 20:26:10 slack AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer Apr 26 20:26:10 slack dbus[972]: [system] Successfully activated service 'org.freedesktop.PackageKit' Apr 26 20:26:10 slack AptDaemon.PackageKit: INFO: Initializing PackageKit transaction Apr 26 20:26:10 slack AptDaemon.Worker: INFO: Simulating trans: /org/debian/apt/transaction/aaed4e38eb3c41ad86d2bab6ca03ee7c Apr 26 20:26:10 slack AptDaemon.Worker: INFO: Processing transaction /org/debian/apt/transaction/aaed4e38eb3c41ad86d2bab6ca03ee7c Apr 26 20:26:12 slack dbus[972]: [system] Activating service name='com.ubuntu.SystemService' (using servicehelper) Apr 26 20:26:12 slack dbus[972]: [system] Successfully activated service 'com.ubuntu.SystemService' Apr 26 20:30:26 slack AptDaemon.PackageKit: INFO: Get updates() Apr 26 20:30:27 slack AptDaemon.Worker: INFO: Finished transaction /org/debian/apt/transaction/aaed4e38eb3c41ad86d2bab6ca03ee7c Any suggestions?

    Read the article

  • Scala: Recursively building all paths in a graph?

    - by DarqMoth
    Trying to build all existing paths for an udirected graph defined as a map of edges using the following algorithm: Start: with a given vertice A Find an edge (X.A, X.B) or (X.B, X.A), add this edge to path Find all edges Ys fpr which either (Y.C, Y.B) or (Y.B, Y.C) is true For each Ys: A=B, goto Start Providing edges are defined as the following map, where keys are tuples consisting of two vertices: val edges = Map( ("n1", "n2") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n1") -> "n5n1", ("n5", "n4") -> "n5n4") As an output I need to get a list of ALL pathes where each path is a list of adjecent edges like this: val allPaths = List( List(("n1", "n2") -> "n1n2"), List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), List(("n5", "n1") -> "n5n1"), List(("n5", "n4") -> "n5n4"), List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4")) //... //... more pathes to go } Note: Edge XY = (x,y) - "xy" and YX = (y,x) - "yx" exist as one instance only, either as XY or YX So far I have managed to implement code that duplicates edges in the path, which is wrong and I can not find the error: object Graph2 { type Vertice = String type Edge = ((String, String), String) type Path = List[((String, String), String)] val edges = Map( //(("v1", "v2") , "v1v2"), (("v1", "v3") , "v1v3"), (("v3", "v4") , "v3v4") //(("v5", "v1") , "v5v1"), //(("v5", "v4") , "v5v4") ) def main(args: Array[String]): Unit = { val processedVerticies: Map[Vertice, Vertice] = Map() val processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)] = Map() val path: Path = List() println(buildPath(path, "v1", processedVerticies, processedEdges)) } /** * Builds path from connected by edges vertices starting from given vertice * Input: map of edges * Output: list of connected edges like: List(("n1", "n2") -> "n1n2"), List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), List(("n5", "n1") -> "n5n1"), List(("n5", "n4") -> "n5n4"), List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4")) */ def buildPath(path: Path, vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)]): List[Path] = { println("V: " + vertice + " VM: " + processedVerticies + " EM: " + processedEdges) if (!processedVerticies.contains(vertice)) { val edges = children(vertice) println("Edges: " + edges) val x = edges.map(edge => { if (!processedEdges.contains(edge._1)) { addToPath(vertice, processedVerticies.++(Map(vertice -> vertice)), processedEdges, path, edge) } else { println("ALready have edge: "+edge+" Return path:"+path) path } }) val y = x.toList y } else { List(path) } } def addToPath( vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)], path: Path, edge: Edge): Path = { val newPath: Path = path ::: List(edge) val key = edge._1 val nextVertice = neighbor(vertice, key) val x = buildPath (newPath, nextVertice, processedVerticies, processedEdges ++ (Map((vertice, nextVertice) -> (vertice, nextVertice))) ).flatten // need define buidPath type x } def children(vertice: Vertice) = { edges.filter(p => (p._1)._1 == vertice || (p._1)._2 == vertice) } def containsPair(x: (Vertice, Vertice), m: Map[(Vertice, Vertice), (Vertice, Vertice)]): Boolean = { m.contains((x._1, x._2)) || m.contains((x._2, x._1)) } def neighbor(vertice: String, key: (String, String)): String = key match { case (`vertice`, x) => x case (x, `vertice`) => x } } 
Running this results in: List(List(((v1,v3),v1v3), ((v1,v3),v1v3), ((v3,v4),v3v4))) Why is that?
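The duplicated edge in that output suggests the recursion keeps extending a path with an edge it has already placed on it, instead of backtracking cleanly. As an illustration of the enumeration the question describes (start at a vertex, take any unused incident edge, record the extended path, recurse from the other endpoint, then un-mark the edge), here is a small sketch in Java rather than Scala; it collects each path as a list of edge labels instead of (pair -> label) entries, and the class and method names are made up for the example:

    import java.util.*;

    public class PathEnumeration {
        // Undirected edges keyed by a two-element vertex pair, mirroring the map in the question.
        static final Map<List<String>, String> EDGES = new HashMap<List<String>, String>();

        public static void main(String[] args) {
            EDGES.put(Arrays.asList("v1", "v3"), "v1v3");
            EDGES.put(Arrays.asList("v3", "v4"), "v3v4");
            List<List<String>> paths = new ArrayList<List<String>>();
            walk("v1", new LinkedHashSet<String>(), paths);
            System.out.println(paths); // [[v1v3], [v1v3, v3v4]] - no edge is repeated within a path
        }

        // Depth-first walk: try every edge incident to 'vertex' that the current path
        // does not already use, record the longer path, recurse, then backtrack.
        static void walk(String vertex, LinkedHashSet<String> pathSoFar, List<List<String>> out) {
            for (Map.Entry<List<String>, String> e : EDGES.entrySet()) {
                List<String> ends = e.getKey();
                if (!ends.contains(vertex) || pathSoFar.contains(e.getValue())) {
                    continue;
                }
                String next = ends.get(0).equals(vertex) ? ends.get(1) : ends.get(0);
                pathSoFar.add(e.getValue());
                out.add(new ArrayList<String>(pathSoFar));
                walk(next, pathSoFar, out);
                pathSoFar.remove(e.getValue()); // backtrack so sibling branches start from the same state
            }
        }
    }

The key difference from the Scala version above is that the "already used" bookkeeping travels with the current path and is undone on the way back up, so an edge can never appear twice in one path and sibling branches do not see each other's edges.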

    Read the article

  • jQuery remove image src prefix

    - by mtwallet
    Hi. I have a bunch of images on a page that are contained within a div with a class of content-block, like this:
    <div class="content-block">
        <img src="../images/path/path/image.jpg" alt="blah" title="blah" />
        <img src="../images/path/path/image.jpg" alt="blah" title="blah" />
        <img src="../images/path/path/image.jpg" alt="blah" title="blah" />
        <img src="../images/path/path/image.jpg" alt="blah" title="blah" />
        <img src="../images/path/path/image.jpg" alt="blah" title="blah" />
    </div>
    I am loading this HTML into a div via Ajax. After it loads in, I need to remove the "../" prefix from each image src so the paths remain correct. Can this be done with jQuery? Many thanks in advance.

    Read the article

  • Output php mail calls to log file

    - by Tom McQuarrie
    This question relates to the question found here: Find the php script thats sending mails Trying to do the exact same thing but can't get the log to output what I need. Not too experienced with serverfault and ideally I'd post my followup on the original question, or PM adam to see if he ever found a solution, but looks as though server fault doesn't work that way. I can post an "answer" but that's definitely not what this is. I have a script located at /usr/local/bin/sendmail-php-logged, with the following: #!/bin/sh logger -p mail.info sendmail-php: site=${HTTP_HOST}, client=${REMOTE_ADDR}, script=${SCRIPT_NAME}, filename=${SCRIPT_FILENAME}, docroot=${DOCUMENT_ROOT}, pwd=${PWD}, uid=${UID}, user=$(whoami) /usr/sbin/sendmail -t -i $* This is logging to /var/log/maillog, but as Adam mentions in his question, none of the server variables work. Output I'm getting is: Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache Oct 4 12:17:03 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache Oct 4 12:17:05 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root Oct 4 12:17:11 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache Oct 4 12:17:14 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root Oct 4 12:17:29 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root Oct 4 12:17:41 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root User ID, current user, and pwd are all working, probably because they're globally accessible script resources, and not specific to PHP, like all the others are. I've tried using other server variables as per labradort's instructions, but no joy. 
Here's some sample tests: logger -p mail.info sendmail-php SCRIPT_NAME: ${SCRIPT_NAME} logger -p mail.info sendmail-php SCRIPT_FILENAME: ${SCRIPT_FILENAME} logger -p mail.info sendmail-php PATH_INFO: ${PATH_INFO} logger -p mail.info sendmail-php PHP_SELF: ${PHP_SELF} logger -p mail.info sendmail-php DOCUMENT_ROOT: ${DOCUMENT_ROOT} logger -p mail.info sendmail-php REMOTE_ADDR: ${REMOTE_ADDR} logger -p mail.info sendmail-php SCRIPT_NAME: $SCRIPT_NAME logger -p mail.info sendmail-php SCRIPT_FILENAME: $SCRIPT_FILENAME logger -p mail.info sendmail-php PATH_INFO: $PATH_INFO logger -p mail.info sendmail-php PHP_SELF: $PHP_SELF logger -p mail.info sendmail-php DOCUMENT_ROOT: $DOCUMENT_ROOT logger -p mail.info sendmail-php REMOTE_ADDR: $REMOTE_ADDR And the output: Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME: Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME: Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO: Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF: Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT: Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR: Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME: Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME: Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO: Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF: Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT: Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR: I'm running php 5.3.10. Unfortunately register_globals is on, for compatibility with legacy systems, but you wouldn't think that would cause the environment variables to stop working. If someone can give me some hints as to why this might not be working I'll be a very happy man :)

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93 94 95  | Next Page >