Search Results

Search found 18475 results on 739 pages for 'auth to local'.


  • Running two WSGI applications on the same server: GDAL OGRException with Apache2/mod_wsgi

    - by monkut
    I'm trying to run two WSGI applications, one Django and the other TileStache, using the same server. The TileStache server accesses the DB via Django. In the process of serving tiles it performs a transform on the incoming bbox, and in this process hits the following error. The transform works without error for the specific bbox polygon when run manually from the Python shell:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 325, in __call__
            mimetype, content = requestHandler(self.config, environ['PATH_INFO'], environ['QUERY_STRING'])
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 231, in requestHandler
            mimetype, content = getTile(layer, coord, extension)
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 84, in getTile
            tile = layer.render(coord, format)
          File "/usr/lib/python2.7/dist-packages/TileStache/Core.py", line 295, in render
            tile = provider.renderArea(width, height, srs, xmin, ymin, xmax, ymax, coord.zoom)
          File "/var/www/tileserver/providers.py", line 59, in renderArea
            bbox.transform(METERS_SRID)
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/geos/geometry.py", line 520, in transform
            g = gdal.OGRGeometry(self.wkb, srid)
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 131, in __init__
            self.__class__ = GEO_CLASSES[self.geom_type.num]
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 245, in geom_type
            return OGRGeomType(capi.get_geom_type(self.ptr))
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geomtype.py", line 43, in __init__
            raise OGRException('Invalid OGR Integer Type: %d' % type_input)
        OGRException: Invalid OGR Integer Type: 1987180391

    I think I've hit the non-thread-safe issue with GDAL mentioned on the Django site. Is there a way I could configure this so that it would work?

    Apache version: Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured

    apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin ironman@localhost
            DocumentRoot /var/www/bin
            LogLevel warn
            WSGIDaemonProcess lbs processes=2 maximum-requests=500 threads=1
            WSGIProcessGroup lbs
            WSGIScriptAlias / /var/www/bin/apache/django.wsgi
            Alias /static /var/www/lbs/static/
        </VirtualHost>

        <VirtualHost *:8080>
            ServerAdmin ironman@localhost
            DocumentRoot /var/www/bin
            LogLevel warn
            WSGIDaemonProcess tilestache processes=1 maximum-requests=500 threads=1
            WSGIProcessGroup tilestache
            WSGIScriptAlias / /var/www/bin/tileserver/tilestache.wsgi
        </VirtualHost>

    Django version: 1.4

    httpd.conf:

        Listen 8080
        NameVirtualHost *:8080

    UPDATE: I've added a test.wsgi script to determine whether the global interpreter setting is correct, as mentioned by Graham and described here: http://code.google.com/p/modwsgi/wiki/CheckingYourInstallation#Sub_Interpreter_Being_Used. It seems to show the expected result:

        [Tue Aug 14 10:32:01 2012] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured -- resuming normal operations
        [Tue Aug 14 10:32:01 2012] [info] mod_wsgi (pid=29891): Attach interpreter ''.

    I've worked around the issue for now by changing the SRS used in the DB so that the transform is unnecessary in the TileStache app. I don't understand why the transform() method works when called in the Django app but fails in the TileStache app.

    tilestache.wsgi:

        #!/usr/bin/python
        import os
        import time
        import sys

        import TileStache

        current_dir = os.path.abspath(os.path.dirname(__file__))
        project_dir = os.path.realpath(os.path.join(current_dir, "..", ".."))
        sys.path.append(project_dir)
        sys.path.append(current_dir)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'bin.settings'
        sys.stdout = sys.stderr

        # wait for the apache django lbs server to start up,
        # --> in order to retrieve the tilestache cfg
        time.sleep(2)

        tilestache_config_url = "http://127.0.0.1/tilestache/config/"
        application = TileStache.WSGITileServer(tilestache_config_url)

    UPDATE 2: It turned out I did need to use a projection other than the Google one (900913) in the DB, so my previous workaround failed. While I'd like to fix this issue, I decided to work around it this time by making a Django view that performs the needed transform. So now TileStache requests the data through the Django app and not internally.
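    For reference, a hedged sketch of the mod_wsgi change usually suggested for non-thread-safe C extensions such as GDAL: force the application into the main interpreter (the '' interpreter in the log above) with WSGIApplicationGroup %{GLOBAL} in each VirtualHost. The directive is standard mod_wsgi; whether it resolves this particular OGRException is an assumption, not a verified fix.

        # Sketch only -- same daemon definitions as above, plus the
        # application-group directive so the app runs in the main ('')
        # Python interpreter rather than a per-vhost sub-interpreter:
        <VirtualHost *:8080>
            WSGIDaemonProcess tilestache processes=1 maximum-requests=500 threads=1
            WSGIProcessGroup tilestache
            WSGIApplicationGroup %{GLOBAL}
            WSGIScriptAlias / /var/www/bin/tileserver/tilestache.wsgi
        </VirtualHost>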

    Read the article

  • Linux pptp client stops working after several hours

    - by Aron Rotteveel
    Here's the situation.

    Setup:

    - 1 Windows Server 2008 machine acting as a Domain Controller and RRAS server
    - 1 CentOS machine in a datacentre located elsewhere
    - PPTP client running on the CentOS machine, connected to the DC

    When I connect to the DC, everything works fine. I have set up a static IP for the dial-up connection in my RRAS server so that the CentOS machine is automatically assigned the IP 192.168.1.240. Inside the VPN it is now possible to access this machine on its local IP address. Perfect.

    However, after several hours it simply seems to stop working (i.e., I cannot ping to or from this machine on the local network). The strange thing is, however:

    - The DC shows the VPN client as still being connected.
    - The CentOS machine shows the network interface as being up.
    - There are no entries in my /var/log/messages that indicate a problem.

    Output from ifconfig:

        ppp0      Link encap:Point-to-Point Protocol
                  inet addr:192.168.1.240  P-t-P:192.168.1.160  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1396  Metric:1
                  RX packets:43 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:58 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:3
                  RX bytes:4511 (4.4 KiB)  TX bytes:15071 (14.7 KiB)

    Output from route -n:

        192.168.1.160   0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 ppp0

    I have the following in my ip-up.local:

        route add -net 192.168.1.0 netmask 255.255.255.0 dev ppp0

    The situation can be easily fixed by issuing a killall pppd and re-connecting. However, I obviously do not want to do this every X hours or so. I have tried running pppd with both the debug and the kdebug flags but cannot find the cause of this problem. Currently, my ppp0 network interface seems to be running, and the last log lines mentioning it are:

        Feb 19 14:10:40 graviton pppd[10934]: local  IP address 192.168.1.240
        Feb 19 14:10:40 graviton pppd[10934]: remote IP address 192.168.1.160
        Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up started (pid 10952)
        Feb 19 14:10:40 graviton pppd[10934]: Script /etc/ppp/ip-up finished (pid 10952), status = 0x0
        Feb 19 14:11:27 graviton pptp[10935]: anon log[decaps_gre:pptp_gre.c:414]: buffering packet 190 (expecting 189, lost or reordered)
        Feb 19 14:11:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received.
        Feb 19 14:11:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply'
        Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Request received.
        Feb 19 14:12:37 graviton pptp[10942]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 6 'Echo-Reply'
        Feb 19 14:12:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
        Feb 19 14:13:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
        Feb 19 14:14:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
        Feb 19 14:15:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
        Feb 19 14:16:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
        Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:677]: Echo Reply received.
        Feb 19 14:19:37 graviton pptp[10942]: anon log[logecho:pptp_ctrl.c:679]: no more Echo Reply/Request packets will be reported.

    I have enabled the persist option. The network interface is still running, but it is still impossible to send data through the VPN. Any help is appreciated.
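    Until the root cause turns up, a hedged sketch of a cron-driven watchdog that automates the "killall pppd and re-connect" fix whenever the tunnel stops passing traffic. The peers name vpn-dc and the cron cadence are placeholders, not part of the original setup; the ping target is the remote P-t-P address from the ifconfig output above.

        #!/bin/sh
        # /usr/local/sbin/pptp-watchdog.sh -- assumption: run from cron every 5 min.
        # Ping the far end of the tunnel through ppp0; if it is dead, bounce pppd.
        if ! ping -c 3 -W 5 -I ppp0 192.168.1.160 >/dev/null 2>&1; then
            logger -t pptp-watchdog "VPN dead, restarting pppd"
            killall pppd
            sleep 5
            pppd call vpn-dc   # 'vpn-dc' is a hypothetical /etc/ppp/peers entry
        fi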

    Read the article

  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    SharePoint 2007 MOSS with AD-imported users. All servers are 2008.

    UPDATE: More details from testing. This SharePoint is in an AD child domain (clients.mycompany.local), which is below the root of the AD tree (mycompany.local). The user is in the parent tree (as are half of the other functional users). I have elevated the user's rights to Domain. Looking at the logs, it seems that the SharePoint server is trying to authenticate the user by querying the DC for the clients domain (which is the way it normally works, and still works for all existing identically configured users). I think if I could force it to authenticate against the top domain's DC then it would be OK?

    I have around 50 users. Over the past 2 months, a handful of them have suddenly become unable to log into SharePoint. When they log in, they either get a blank screen or they are re-prompted. These users are using accounts that have been in use for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, Exchange, LDAP).

    In the most recent case, last night I was forced to delete a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in SharePoint Shared Services. I could not find a way to delete him from the SharePoint user list, but I re-ran the import after recreating his account (renaming it to "smithj" just to be sure). At first this did not work: the user could still access all other resources but SharePoint. Then, some 30 minutes later, it inexplicably started working. This morning the user changed passwords, which immediately broke the login on SharePoint again.

    Logs (by request from matt b):

        Source: Office SharePoint Server
        Date: 4/13/2010 2:00:00 PM
        Event ID: 7888
        Task Category: Office Server General
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: nb-portal-01.clients.netboundary.local
        Description: A runtime exception was detected. Details follow.
        Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
        Technical Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
           at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex)
           at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML)
           at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed)
           at Microsoft.SharePoint.SPField.Update()
           at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn)
           at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull()
           at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch()
           at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)

        Log Name: Application
        Source: Office SharePoint Server
        Date: 4/13/2010 2:00:00 PM
        Event ID: 5553
        Task Category: User Profiles
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: nb-portal-01.clients.netboundary.local
        Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).

    Read the article

  • iptables DNS resolution

    - by Favolas
    I have a virtual machine with Fedora 19 acting as a router. This machine has an interface (p8p1) with the IP 172.16.1.254 that is connected to another machine (IP 172.16.1.1) that's simulating the external network.

    I've installed Snort 2.9.2.2, applied the snortsam-2.9.2.2.diff.gz patch, and installed SnortSam 2.70 on the router machine. In snort.conf, besides altering some RULE_PATH entries, I believe I've only added the following line to the file:

        output alert_fwsam: 127.0.0.1:898/password

    After running these two commands:

        ifconfig p8p1 promisc
        /usr/local/snort/bin/snort -v -i p8p1

    if I ping from the external network to the router IP, I can see the info about the pings. One of the rules that I have is icmp-info.rules, which has this single line:

        alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP-INFO Echo Reply"; icode:0; itype:0; classtype:misc-activity; sid:408; rev:6; fwsam: src, 5 minutes;)

    snortsam.conf has this data:

        defaultkey password
        accept localhost
        keyinterval 30 minutes
        dontblock 192.168.1.1 # local network
        rollbackhosts 50
        rollbackthreshold 20 / 30 secs
        rollbacksleeptime 1 minute
        logfile /var/log/snort/snortsam.log
        loglevel 3
        daemon
        nothreads
        # important line to generate the blocks via iptables
        iptables p8p1 LOG
        bindip 127.0.0.1

    Now I run this command:

        /usr/local/snort/bin/snort -u snort -i p8p1 -c /etc/snort/snort.conf -l /var/log/snort -Dq

    The terminal gives this message:

        Spawning daemon child...
        My daemon child 2080 lives...
        Daemon parent exiting (0)

    And when I run snortsam in a terminal I get this:

        SnortSam, v 2.70.
        Copyright (c) 2001-2009 Frank Knobbe . All rights reserved.
        Plugin 'fwsam': v 2.5, by Frank Knobbe
        Plugin 'fwexec': v 2.7, by Frank Knobbe
        Plugin 'pix': v 2.9, by Frank Knobbe
        Plugin 'ciscoacl': v 2.12, by Ali Basel <[email protected]>
        Plugin 'cisconullroute': v 2.5, by Frank Knobbe
        Plugin 'cisconullroute2': v 2.2, by Wouter de Jong <[email protected]>
        Plugin 'netscreen': v 2.10, by Frank Knobbe
        Plugin 'ipchains': v 2.8, by Hector A. Paterno <[email protected]>
        Plugin 'iptables': v 2.9, by Fabrizio Tivano <[email protected]>, Luis Marichal <[email protected]>
        Plugin 'ebtables': v 2.4, by Bruno Scatolin <[email protected]>
        Plugin 'watchguard': v 2.7, by Thomas Maier <[email protected]>
        Plugin 'email': v 2.12, by Frank Knobbe
        Plugin 'email-blocks-only': v 2.12, by Frank Knobbe
        Plugin 'snmpinterfacedown': v 2.3, by Ali BASEL <[email protected]>
        Plugin 'forward': v 2.8, by Frank Knobbe
        Parsing config file /etc/snortsam.conf...
        Linking plugin 'iptables'...
        Checking for existing state file "/var/db/snortsam.state". Found. Reading state file.
        Starting to listen for Snort alerts.

    and snortsam.log has an entry like this:

        2013/10/25, 10:15:17, -, 1, snortsam, Starting to listen for Snort alerts.

    Now, from the external machine I do ping 172.16.1.254 and it starts showing the info, and an alert file is created in /var/log/snort/ that has the info about the pings, something like:

        [**] [1:408:6] ICMP-INFO Echo Reply [**]
        [Classification: Misc activity] [Priority: 3]
        10/25-10:35:16.061319 172.16.1.254 -> 172.16.1.1
        ICMP TTL:64 TOS:0x0 ID:38720 IpLen:20 DgmLen:84
        Type:0  Code:0  ID:1389  Seq:1  ECHO REPLY

    Also, if I instead run /usr/local/snort/bin/snort snort -v -i p8p1, I get this message:

        Running in packet dump mode

                --== Initializing Snort ==--
        Initializing Output Plugins!
        Snort BPF option: snort
        pcap DAQ configured to passive.
        The DAQ version does not support reload.
        Acquiring network traffic from "p8p1".
        ERROR: Can't set DAQ BPF filter to 'snort' (pcap_daq_set_filter: pcap_compile: syntax error)!
        Fatal Error, Quitting..

    So, these are my questions:

    - Shouldn't SnortSam block the ping?
    - Is that DAQ error causing the problem? If so, how can I solve it?
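    One hedged observation on the second question: in the failing command the word snort appears twice, and Snort treats everything after its options as a BPF capture filter, which matches the error Can't set DAQ BPF filter to 'snort' (pcap_compile: syntax error). A sketch of the invocation without the stray token:

        # The extra 'snort' in "snort snort -v -i p8p1" is parsed as a BPF
        # filter expression; dropping it should avoid the pcap_compile error.
        /usr/local/snort/bin/snort -v -i p8p1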

    Read the article

  • cron.daily not running at the time it should?

    - by Mariano Martinez Peck
    My /etc/cron.daily scripts seem to be executing far later than what I understand they should. I am on Ubuntu and anacron is installed. If I do sudo cat /var/log/syslog | grep cron I get something like:

        Aug 23 01:17:01 mymachine CRON[25171]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
        Aug 23 02:17:01 mymachine CRON[25588]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
        Aug 23 03:17:01 mymachine CRON[26026]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
        Aug 23 03:25:01 mymachine CRON[30320]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
        Aug 23 04:17:01 mymachine CRON[26363]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
        Aug 23 05:17:01 mymachine CRON[26770]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
        Aug 23 06:17:01 mymachine CRON[27168]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
        Aug 23 07:17:01 mymachine CRON[27547]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
        Aug 23 07:30:01 mymachine CRON[2249]: (root) CMD (start -q anacron || :)
        Aug 23 07:30:02 mymachine anacron[2252]: Anacron 2.3 started on 2014-08-23
        Aug 23 07:30:02 mymachine anacron[2252]: Will run job `cron.daily' in 5 min.
        Aug 23 07:30:02 mymachine anacron[2252]: Jobs will be executed sequentially
        Aug 23 07:35:02 mymachine anacron[2252]: Job `cron.daily' started

    As you can see, at 3:25 it tried to do something, but the cron.daily run really started at 7:35. My /etc/crontab is:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 3    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #

    From what I understand, daily scripts are indeed scheduled for 3:25. My /etc/anacrontab is:

        # /etc/anacrontab: configuration file for anacron
        # See anacron(8) and anacrontab(5) for details.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HOME=/root
        LOGNAME=root

        # These replace cron's entries
        1         5   cron.daily    run-parts --report /etc/cron.daily
        7         10  cron.weekly   run-parts --report /etc/cron.weekly
        @monthly  15  cron.monthly  run-parts --report /etc/cron.monthly

    So... does someone know why my cron started to do something at 3:25 but then really started the jobs at 7:35? Also, as you can see in the log, hourly jobs are executed at the correct time: on the hour plus 17 minutes, which is exactly what I have in /etc/crontab. Finally, from the logs, it seems my daily jobs are actually run by anacron rather than cron? So cron finds nothing to run (at 3:25) and then anacron runs the jobs at 7:35? If true, how can I fix this? Thanks in advance.
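    Reading the log and the two config files together suggests a plausible (hedged) explanation: at 03:25 cron runs test -x /usr/sbin/anacron || ..., and because anacron is installed the test succeeds and run-parts never fires. The jobs then belong to anacron, which the 07:30 "start -q anacron" syslog line shows being started by cron, and which waits the 5-minute delay from the anacrontab's second column before running cron.daily -- hence 07:35. A minimal sketch of two ways to move the jobs earlier, assuming the stock file layout:

        # /etc/anacrontab -- sketch: cut the start delay to zero so cron.daily
        # runs as soon as anacron itself starts (07:30 instead of 07:35).
        1       0       cron.daily      run-parts --report /etc/cron.daily

        # Alternative (assumption, with trade-offs): drop the anacron guard in
        # /etc/crontab so cron itself runs the jobs at 03:25 -- at the cost of
        # missed runs whenever the machine is off at that time.
        # 25 3  * * *   root    cd / && run-parts --report /etc/cron.daily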

    Read the article

  • How to automate org-refile for multiple todos

    - by lawlist
    I'm looking to automate org-refile so that it will find all of the matches and refile them to a specific location (but not archive them). I found a fully automated method of archiving multiple todos, and I am hopeful to find or create (with some help) something similar to this awesome function (but for a different heading/location other than archiving): https://github.com/tonyday567/jwiegley-dot-emacs/blob/master/dot-org.el

        (defun org-archive-done-tasks ()
          (interactive)
          (save-excursion
            (goto-char (point-min))
            (while (re-search-forward "\* \\(None\\|Someday\\) " nil t)
              (if (save-restriction
                    (save-excursion
                      (org-narrow-to-subtree)
                      (search-forward ":LOGBOOK:" nil t)))
                  (forward-line)
                (org-archive-subtree)
                (goto-char (line-beginning-position))))))

    I also found this (written by aculich), which is a step in the right direction, but still requires repeating the function manually: http://stackoverflow.com/questions/7509463/how-to-move-a-subtree-to-another-subtree-in-org-mode-emacs

        ;; I also wanted a way for org-refile to refile easily to a subtree, so I
        ;; wrote some code and generalized it so that it will set an arbitrary
        ;; immediate target anywhere (not just in the same file).
        ;;
        ;; Basic usage is to move somewhere in Tree B and type C-c C-x C-m to mark
        ;; the target for refiling, then move to the entry in Tree A that you want
        ;; to refile and type C-c C-w, which will immediately refile into the
        ;; target location you set in Tree B without prompting you, unless you
        ;; called org-refile-immediate-target with a prefix arg C-u C-c C-x C-m.
        ;;
        ;; Note that if you press C-c C-w in rapid succession to refile multiple
        ;; entries it will preserve the order of your entries even if
        ;; org-reverse-note-order is set to t, but you can turn it off to respect
        ;; the setting of org-reverse-note-order with a double prefix arg
        ;; C-u C-u C-c C-x C-m.

        (defvar org-refile-immediate nil
          "Refile immediately using `org-refile-immediate-target' instead of prompting.")
        (make-local-variable 'org-refile-immediate)

        (defvar org-refile-immediate-preserve-order t
          "If last command was also `org-refile' then preserve ordering.")
        (make-local-variable 'org-refile-immediate-preserve-order)

        (defvar org-refile-immediate-target nil
          "Value uses the same format as an item in `org-refile-targets'.")
        (make-local-variable 'org-refile-immediate-target)

        (defadvice org-refile (around org-immediate activate)
          (if (not org-refile-immediate)
              ad-do-it
            ;; if last command was `org-refile' then preserve ordering
            (let ((org-reverse-note-order
                   (if (and org-refile-immediate-preserve-order
                            (eq last-command 'org-refile))
                       nil
                     org-reverse-note-order)))
              (ad-set-arg 2 (assoc org-refile-immediate-target
                                   (org-refile-get-targets)))
              (prog1 ad-do-it
                (setq this-command 'org-refile)))))

        (defadvice org-refile-cache-clear (after org-refile-history-clear activate)
          (setq org-refile-targets (default-value 'org-refile-targets))
          (setq org-refile-immediate nil)
          (setq org-refile-immediate-target nil)
          (setq org-refile-history nil))

        ;;;###autoload
        (defun org-refile-immediate-target (&optional arg)
          "Set current entry as `org-refile' target.
        Non-nil turns off `org-refile-immediate', otherwise `org-refile'
        will immediately refile without prompting for target using most
        recent entry in `org-refile-targets' that matches
        `org-refile-immediate-target' as the default."
          (interactive "P")
          (if (equal arg '(16))
              (progn
                (setq org-refile-immediate-preserve-order
                      (not org-refile-immediate-preserve-order))
                (message "Order preserving is turned: %s"
                         (if org-refile-immediate-preserve-order "on" "off")))
            (setq org-refile-immediate (unless arg t))
            (make-local-variable 'org-refile-targets)
            (let* ((components (org-heading-components))
                   (level (first components))
                   (heading (nth 4 components))
                   (string (substring-no-properties heading)))
              (add-to-list 'org-refile-targets
                           (append (list (buffer-file-name))
                                   (cons :regexp
                                         (format "^%s %s$"
                                                 (make-string level ?*) string))))
              (setq org-refile-immediate-target heading))))

        (define-key org-mode-map "\C-c\C-x\C-m" 'org-refile-immediate-target)

    It sure would be helpful if aculich, or some other maven, could please create a variable similar to

        (setq org-archive-location "~/0.todo.org::* Archived Tasks")

    so users can specify the file and heading, which is already a part of the org-archive-subtree functionality. I'm doing a search and mark because I don't have the wherewithal to create something like org-archive-location for this setup.

    EDIT: One step closer -- almost home free...

        (defun lawlist-auto-refile ()
          (interactive)
          (goto-char (point-min))
          (re-search-forward "\* UNDATED")
          ;; cursor must be on a heading for this to work:
          (org-refile-immediate-target)
          (save-excursion
            (re-search-backward "\* UNDATED")
            ;; must be written in such a way that sub-entries of * UNDATED are
            ;; not searched, or else an infinite loop results.
            (while (re-search-backward "\* \\(None\\|Someday\\) " nil t)
              (org-refile))))

    Read the article

  • Unable to remove limit on memory usage for PHP script.

    - by Jess Telford
    The situation: I am having an issue with a PHP script getting the following error message:

        Fatal error: Out of memory (allocated 359923712) (tried to allocate 72 bytes) in /path/to/piwik/core/DataTable.php on line 969

    The script I'm running is /path/to/piwik/misc/cron/archive.sh. I am assuming the numbers are bytes, which means the total is approximately 360 MB. For all intents and purposes, I have increased the memory limits on the server well above 360 MB, yet this is the number (give or take a byte) it consistently errors out at.

    Please note: this question is not about fixing a memory leak in the script, nor about why the script itself is using so much memory. The script is part of the Piwik archiving process, so I cannot just fix any memory leaks, etc. For more info on this script and why I am increasing the memory limit, see "How to setup auto archiving".

    The question: Given that the script is attempting to use over 360 MB of memory, which I cannot change, why does it not seem possible for me to increase the amount of memory available to PHP on my server?

    What I've tried:

    Increasing PHP's memory_limit. Given the php.ini file:

        $ php -i | grep php.ini
        Configuration File (php.ini) Path => /usr/local/lib
        Loaded Configuration File => /usr/local/lib/php.ini

    I have edited that file so the memory_limit directive reads:

        memory_limit = -1

    Restart Apache, and check that the new value has stuck:

        $ php -i | grep memory_limit
        memory_limit => -1 => -1

    Run the script, and get the same error. I've also tried 1G, 768M, etc., all with the same result (i.e., no change).

    Update 22nd June: Based on Vangel's help, I have attempted to set post_max_size to 20M in combination with setting memory_limit. Again, this has no effect.

    Removing the memory limit on child processes of Apache: I found and edited the httpd.conf file to make sure there is no RLimitMEM directive. I then used WHM's Apache Configuration > Memory Usage Restrictions to generate a restriction, which it claimed was at 1000M (and I confirmed this by checking httpd.conf). Both of these resulted in no change: the script still errors out at 360 MB.

    Increasing the per-process memory limits of Linux. The current limits set on the system:

        $ ulimit -m
        524288
        $ ulimit -v
        524288

    I have attempted to set both of these to unlimited:

        $ ulimit -m unlimited
        $ ulimit -v unlimited
        $ ulimit -m
        unlimited
        $ ulimit -v
        unlimited

    Once again, this has resulted in absolutely no improvement in my problem.

    My setup:

        $ cat /etc/redhat-release
        CentOS release 5.5 (Final)
        $ uname -a
        Linux example.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
        $ php -i | grep "PHP Version"
        PHP Version => 5.2.9
        $ httpd -V
        Server version: Apache/2.0.63
        Server built:   Feb 2 2011 01:25:12
        Cpanel::Easy::Apache v3.2.0 rev5291
        Server's Module Magic Number: 20020903:13
        Server loaded:  APR 0.9.17, APR-UTIL 0.9.15
        Compiled using: APR 0.9.17, APR-UTIL 0.9.15
        Architecture:   64-bit
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D HTTPD_ROOT="/usr/local/apache"
         -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Output of $ php -i: http://pastebin.com/EiRut6Nm
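    One hedged check worth adding here, since archive.sh invokes PHP from cron rather than through Apache: confirm the limit as seen by the exact binary the script runs, in case a different ini file or a per-invocation override applies. These are standard PHP CLI options; the expectation that the values differ from the interactive shell's is only an assumption.

        # Print the limit exactly as the CLI binary sees it:
        php -r 'echo ini_get("memory_limit"), PHP_EOL;'
        # List every ini file the CLI actually loads (may differ from Apache's):
        php --ini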

    Read the article

  • Email sent from CentOS ends up in user's spam folder

    - by oObe
    I am facing this issue: I use the default Postfix MTA on CentOS, but the mail ends up in the user's spam folder. This does not seem to be a problem on Debian using exim4. Both hosts have a hostname and domain name configured, and both relay mail through an external SMTP host. Both configurations and the received email headers are attached. The difference seems to be that Debian adds an envelope-from tag and a From tag, besides some minor syntax differences. Any help resolving this is appreciated.

    The IP addresses and DNS names are masked as follows:

        1.2.3.4 = My IP address
        smtp.host.com = external SMTP host for my company
        [email protected] = account at SMTP host
        centos.abc.com = local CentOS server
        debian.abc.com = local Debian server

    Thanks.

    CentOS main.cf, with the following params configured:

        myhostname = centos.abc.com
        mydomain = abc.com
        myorigin = centos.abc.com
        relayhost = smtp.host.com

    CentOS -- receiving user's mail header:

        Return-Path: <[email protected]>
        Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP;
            Thu, 27 Sep 2012 13:36:49 +0800
        Received: by centos.abc.com (Postfix, from userid 0)
            id 1E0637B89; Fri, 28 Sep 2012 13:36:39 +0800 (SGT)
        Date: Fri, 28 Sep 2012 13:36:39 +0800
        To: [email protected]
        Subject: Test mail from centos
        User-Agent: Heirloom mailx 12.4 7/29/08
        MIME-Version: 1.0
        Content-Type: text/plain; charset=us-ascii
        Content-Transfer-Encoding: 7bit
        Message-Id: <[email protected]>
        From: [email protected] (root)
        X-SmarterMail-TotalSpamWeight: 0
        X-Antivirus: avast! (VPS 120926-1, 27/09/2012), Inbound message
        X-Antivirus-Status: Clean

        http://i.imgur.com/7WAYX.jpg

    Debian exim4 config:

        # This is a Debian specific file
        dc_eximconfig_configtype='smarthost'
        dc_other_hostnames='debian.abc.com'
        dc_local_interfaces='127.0.0.1 ; ::1'
        dc_readhost='debian.abc.com'
        dc_relay_domains='smtp.host.com'
        dc_minimaldns='false'
        dc_relay_nets='127.0.0.1'
        dc_smarthost='smtp.host.com'
        CFILEMODE='644'
        dc_use_split_config='false'
        dc_hide_mailname='true'
        dc_mailname_in_oh='true'
        dc_localdelivery='mail_spool'

    Debian -- receiving user's mail header:

        Return-Path: <[email protected]>
        Received: from 1.2.3.4 [1.2.3.4] by smtp.host.com with SMTP;
            Thu, 27 Sep 2012 15:02:53 +0800
        Received: from root by debian.abc.com with local (Exim 4.72)
            (envelope-from <[email protected]>)
            id 1TH86d-00010v-G9
            for [email protected]; Thu, 27 Sep 2012 15:01:55 +0800
        Date: Thu, 27 Sep 2012 15:01:55 +0800
        Message-Id: <[email protected]>
        To: [email protected]
        Subject: Test from debian
        From: root <[email protected]>
        X-SmarterMail-TotalSpamWeight: 0
        X-Antivirus: avast! (VPS 120926-1, 27/09/2012), Inbound message
        X-Antivirus-Status: Clean

        http://imgur.com/nMsMA.jpg
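    A hedged sketch of one Postfix-side adjustment that often helps in this kind of situation: rewriting the non-routable local sender (the bare root@... address visible in the CentOS From header above) to the real account at the smarthost with smtp_generic_maps. smtp_generic_maps is a standard Postfix parameter; that this alone moves the mail out of the spam folder is an assumption, and the mapping below uses the masked placeholder addresses from the legend.

        # /etc/postfix/main.cf
        smtp_generic_maps = hash:/etc/postfix/generic

        # /etc/postfix/generic -- map the local address to the smarthost account:
        #   root@centos.abc.com    [email protected]

        # Rebuild the lookup table and reload Postfix:
        postmap /etc/postfix/generic
        postfix reload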

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone's opinion differed from mine about when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an exception on EndSession. The premise of the issue was that the poster didn't feel an exception should be raised, regardless of authentication status. At first, this sounded like a valid point. However, I went back to review my code and decided not to make any changes. Here's my rationale:

    1. The exception doesn't occur if the user is authenticated when EndAccountSession is called.
    2. The exception does occur if the user is not authenticated when EndAccountSession is called.
    3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose -- to end the session. If a session never existed, then it would not be possible to perform the requested action. Therefore, an exception is appropriate.

    To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project:

        static void EndSession(ITwitterAuthorizer auth)
        {
            using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
            {
                try
                {
                    // Log
                    twitterCtx.Log = Console.Out;

                    var status = twitterCtx.EndAccountSession();

                    Console.WriteLine("Request: {0}, Error: {1}"
                        , status.Request
                        , status.Error);
                }
                catch (TwitterQueryException tqe)
                {
                    var webEx = tqe.InnerException as WebException;
                    if (webEx != null)
                    {
                        var webResp = webEx.Response as HttpWebResponse;
                        if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized)
                            Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n");
                    }

                    var status = tqe.Response;
                    Console.WriteLine("Request: {0}, Error: {1}"
                        , status.Request
                        , status.Error);
                }
            }
        }

    As expected, LINQ to Twitter wraps the exception in a TwitterQueryException as the InnerException. The TwitterQueryException serves a very useful purpose through its Response property. Notice in the example above that the response has Request and Error properties. These properties correspond to the information that Twitter returns as part of its response payload. This is often useful while debugging, to help you understand why Twitter was unable to perform the requested action. Other times it's cryptic, but that's another story. At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with.

    To sum things up, there are two points to make: when and why an exception should be raised, and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for. I also felt that it is inappropriate for a general library to do anything with exceptions, because that could potentially hide a problem from the caller. A related point is that it should be the exclusive decision of the application that uses the library what to do with an exception.

    Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw. This is a tough call because I don't want to hide any stack trace information. However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me in the direction of designing an interface that is as helpful as possible to library consumers. As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized. In all, trade-offs are seldom perfect for all cases, but combining the fact that the method was unable to perform its intended function, that this is a library, and that the extra information can be more helpful, it seemed to be the better design.

    @JoeMayo

    Read the article

  • How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows installation's Documents and Settings folder which only contains my original user and, within it, 2 more directories: Favorites and Local Settings.

    When I try to delete Local Settings I get an error, and when I try to delete Favorites I get a different error (the original post showed the two error dialogs as screenshots). I ran this in a cmd shell:

        attrib *.* -r -a -s -h /s

    ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files are in use, but both always say: No Files Locked.

    Update #1: I was able to rename the directory, which now gives me a warning before (trying to) delete; if I press Yes (or Yes to All) then I get another error (again shown as screenshots in the original post).

    Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these:

        Deleting an index entry from index $0 of file 25.

    ...followed by:

        Deleting index entry cookies in index $I30 of file 37576.

    ...but I still get the first error dialog above when trying to delete. I ran chkdsk again, this time chkdsk /f /r. It produced no messages, and I get the same result when deleting.

    Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here:

        C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\

    Inside each of those directories were files with names such as:

        2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx

    I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file and directory names were extremely long:

    - Original directory: 194 characters
    - Filenames: 100+ characters

    Together the length exceeds the 255-character limit, which is bad and would explain the error message I posted in Update #1.

    Partial solution: Rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution because these (empty) directories are still not deletable:

        C:\1\2\Favorites\Wien\What To Do..
        C:\1\2\Favorites\Photography\FIRE

    Same error as above. (The original post also showed the Explorer properties for both folders.)

    Update #4 (another partial solution): Using harrymc's answer combined with thoroughly reading through this amazing MS KB article, which contains nearly everyone's ideas and then some, inconspicuously titled "You cannot delete a file or a folder on an NTFS file system volume", I was able to delete the second folder, C:\1\2\Favorites\Photography\FIRE -- the problem being that there was an invisible trailing space at the end. I got lucky when I did an auto-complete whilst playing around with the del "\\?\<path>" command which he suggested.

    NOTE: A normal del did NOT work, nor did deleting from Explorer.

    Now all that is left is the first directory, C:\1\2\Favorites\Wien\What To Do.. (yes, I tried endlessly with multiple combinations of the above solution ;) Keep 'em coming! =)
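    Given that the remaining folder's name ends in "..", and the KB article's \\?\ prefix bypasses the Win32 path normalization that strips trailing dots and spaces, a hedged sketch of the same trick applied to the directory itself (untested here; the path is the one given above):

        :: rd with the \\?\ prefix preserves the trailing dots in "What To Do..":
        rd /s /q "\\?\C:\1\2\Favorites\Wien\What To Do.."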

    Read the article

  • Downgrade OpenSSL on CentOS 6.5

    - by byyyk
    An application I use requires OpenSSL 0.9.8, which was already installed (0.9.8e, to be specific) on my CentOS box alongside 1.0.1e, which unfortunately is used by default. I tried to change the libssl.so.10 symbolic link to point to the older version, like so:

        [mckulpa@nuance-vm ~]$ ldd /usr/bin/openssl
        /usr/bin/openssl: /usr/lib64/libssl.so.10: no version information available (required by /usr/bin/openssl)
            linux-vdso.so.1 =>  (0x00007fff2edff000)
            libssl.so.10 => /usr/lib64/libssl.so.10 (0x00007f664457c000)
            libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x0000003927600000)
            libkrb5.so.3 => /lib64/libkrb5.so.3 (0x0000003926200000)
            libcom_err.so.2 => /lib64/libcom_err.so.2 (0x0000003925a00000)
            libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x0000003926e00000)
            libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x0000003927200000)
            libdl.so.2 => /lib64/libdl.so.2 (0x000000391a600000)
            libz.so.1 => /lib64/libz.so.1 (0x000000391aa00000)
            libc.so.6 => /lib64/libc.so.6 (0x0000003919e00000)
            libcrypto.so.6 => /usr/lib64/libcrypto.so.6 (0x00007f664421d000)
            libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x0000003925e00000)
            libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x0000003926a00000)
            libresolv.so.2 => /lib64/libresolv.so.2 (0x000000391be00000)
            libpthread.so.0 => /lib64/libpthread.so.0 (0x000000391a200000)
            /lib64/ld-linux-x86-64.so.2 (0x0000003919600000)
            libselinux.so.1 => /lib64/libselinux.so.1 (0x000000391b600000)
        [mckulpa@nuance-vm ~]$ export LD_LIBRARY_PATH=~/libs:$LD_LIBRARY_PATH
        [mckulpa@nuance-vm ~]$ echo $LD_LIBRARY_PATH
        /home/mckulpa/libs:/usr/local/Nuance/Recognizer_Service/amd64/lib:/usr/local/Nuance/OAM/x86/lib:/usr/local/Nuance/Common/x86/lib:/usr/local/Nuance/Common/amd64/lib
        [mckulpa@nuance-vm ~]$ ldd /usr/bin/openssl
        /usr/bin/openssl: /home/mckulpa/libs/libssl.so.10: no version information available (required by /usr/bin/openssl)
            linux-vdso.so.1 =>  (0x00007fff91dbc000)
            libssl.so.10 => /home/mckulpa/libs/libssl.so.10 (0x00007ffe1af50000)
            libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x0000003927600000)
            libkrb5.so.3 => /lib64/libkrb5.so.3 (0x0000003926200000)
            libcom_err.so.2 => /lib64/libcom_err.so.2 (0x0000003925a00000)
            libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x0000003926e00000)
            libcrypto.so.10 => /usr/lib64/libcrypto.so.10 (0x0000003927200000)
            libdl.so.2 => /lib64/libdl.so.2 (0x000000391a600000)
            libz.so.1 => /lib64/libz.so.1 (0x000000391aa00000)
            libc.so.6 => /lib64/libc.so.6 (0x0000003919e00000)
            libcrypto.so.6 => /usr/lib64/libcrypto.so.6 (0x00007ffe1abd9000)
            libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x0000003925e00000)
            libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x0000003926a00000)
            libresolv.so.2 => /lib64/libresolv.so.2 (0x000000391be00000)
            libpthread.so.0 => /lib64/libpthread.so.0 (0x000000391a200000)
            /lib64/ld-linux-x86-64.so.2 (0x0000003919600000)
            libselinux.so.1 => /lib64/libselinux.so.1 (0x000000391b600000)
        [mckulpa@nuance-vm ~]$ ls -l libs
        total 316
        -rwxr-xr-x. 1 mckulpa mckulpa 321224 05-28 14:59 libssl.so.0.9.8e
        lrwxrwxrwx. 1 mckulpa mckulpa     16 05-28 15:18 libssl.so.10 -> libssl.so.0.9.8e

    but all I get is a warning, and the 1.0.1e version is still reported:

        [mckulpa@nuance-vm ~]$ openssl version
        openssl: /home/mckulpa/libs/libssl.so.10: no version information available (required by openssl)
        OpenSSL 1.0.1e-fips 11 Feb 2013

    Any ideas how to do this properly?
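    For what it's worth, the "no version information available" warning hints at why the symlink approach can't work: everything linked against libssl.so.10 expects the 1.0.x versioned symbols, which the 0.9.8e library doesn't provide. A hedged alternative, assuming the application only needs the 0.9.8 runtime libraries rather than a downgraded system OpenSSL, is the CentOS 6 compatibility package:

        # Assumption: openssl098e installs libssl.so.0.9.8e / libcrypto.so.0.9.8e
        # alongside the system 1.0.1e libraries instead of replacing them.
        yum install openssl098e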

    Read the article

  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites, and until recently I've used the Google DNS servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time: almost all of the major service providers are using anycast. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try to catch an intermittent problem, and a change in anycast priority while testing latency will change the latency values for that server accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches.

    So, can anyone recommend servers that:

    1. are not using anycast
    2. are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side)
    3. will respond to ICMP requests
    4. have an available service that runs on TCP/UDP (HTTP or DNS preferably)
    5. won't consider an automated request every 10 minutes to be abuse
    6. are accessible from anywhere in the world
    7. are not local to the ISP (consider this an investigation of a hostile party)

    Thanks in advance.

    Edit: added #6 and #7 above.

    More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these) and refuse to escalate the problem, even though two of these businesses have ISP-provided modems, and all of us have completely different routers/services running.

    I am already quite familiar with the need to test an ISP-controlled IP, but they are actively dropping all packets targeted at gateway IP addresses and only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, to systems one hop away from the affected node, and to systems completely outside the network. Unfortunately, all of the systems I currently have are administered either directly by myself or by people who are biased enough to assist me. I need several systems included in the traces/logs/graphs that are 100% not in the control of either myself or the ISP, so that the graphs have a stable, unbiased control group. These requirements come straight from legal; I'm just trying to make sure that everything that could be argued to invalidate the data is already covered.

    In summary: I need to be able to show TCP/UDP/ICMP as three separate data points, and I need to be able to show the connections inside the local node, from the local node to another nearby node, from those two nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever.

    Again, Google/OpenDNS/Yahoo/MSN/Facebook etc. all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions for an IP or server that is available for this type of testing. I was hoping someone knew of a system run by an organization such as ISC or ICANN, or perhaps even a .gov server (FCC or NSA, maybe?) set up for this type of testing. Thanks again.

    Read the article

  • What's wrong with my wireless?

    - by dazzle
    I am having issues with my wireless connection. My connection constantly disconnects, attempts to reconnect, reconnects momentarily, then disconnects again, on time scales that range from seconds to minutes. In the meantime, needless to say, I'm having significant packet loss. I'm running Ubuntu 14.04 64-bit, updated and upgraded as of today.

    Here is my card and driver:

        delta@sager:~$ lspci -vq | grep -i wireless -B 1 -A 5
        04:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73)
            Subsystem: Intel Corporation Dual Band Wireless-AC 7260
            Flags: bus master, fast devsel, latency 0, IRQ 47
            Memory at f7d00000 (64-bit, non-prefetchable) [size=8K]
            Capabilities:
            Kernel driver in use: iwlwifi

    Here is my kernel:

        delta@sager:~$ uname -r
        3.13.0-34-generic

    None of the other machines on my home network are having these issues. Windows Vista is networking without issue, for goodness sake ;-)

    Here is a small clipping from the output of dmesg. As you can see, I am getting a cfg80211 message of some sort over and over again. (FYI, I've replaced my MAC address with a series of dashes, so anywhere there is a ---------------, that was where the MAC address was.)

        [ 1881.739161] wlan1: authenticate with ---------------
        [ 1881.741561] wlan1: send auth to --------------- (try 1/3)
        [ 1881.743440] wlan1: authenticated
        [ 1881.746027] wlan1: associate with --------------- (try 1/3)
        [ 1881.749244] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
        [ 1881.754727] wlan1: associated
        [ 1881.754827] cfg80211: Calling CRDA for country: US
        [ 1881.761552] cfg80211: Regulatory domain changed to country: US
        [ 1881.761559] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1881.761564] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
        [ 1881.761568] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
        [ 1881.761571] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761574] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761577] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761580] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
        [ 1881.761584] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
        [ 1882.391038] cfg80211: Calling CRDA to update world regulatory domain
        [ 1882.396254] cfg80211: World regulatory domain updated:
        [ 1882.396260] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1882.396265] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396268] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396271] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396274] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396277] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.148252] wlan1: authenticate with ---------------
        [ 1886.150005] wlan1: send auth to --------------- (try 1/3)
        [ 1886.151807] wlan1: authenticated
        [ 1886.154847] wlan1: associate with --------------- (try 1/3)
        [ 1886.158147] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
        [ 1886.163464] wlan1: associated
        [ 1886.163520] wlan1: Limiting TX power to 30 (30 - 0) dBm as advertised by ---------------
        [ 1886.163588] cfg80211: Calling CRDA for country: US
        [ 1886.170500] cfg80211: Regulatory domain changed to country: US
        [ 1886.170508] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1886.170513] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
        [ 1886.170517] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
        [ 1886.170520] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170523] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170526] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170529] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
        [ 1886.170533] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
        [ 1887.200197] cfg80211: Calling CRDA to update world regulatory domain
        [ 1887.203655] cfg80211: World regulatory domain updated:
        [ 1887.203659] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1887.203662] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203664] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203666] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203668] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203670] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    I've poked around on AskUbuntu and have not found any adequate solutions; I've also found similar threads that were left unanswered. Any advice/experience/threads I might be able to pull on would be greatly appreciated. In your opinion, is this a kernel issue, hardware issue, etc.? Thanks in advance.

    EDIT: chili, here's the output of iwconfig:

        delta@sager:~$ iwconfig
        wlan1     IEEE 802.11abg  ESSID:"LANbeforetime"
                  Mode:Managed  Frequency:2.412 GHz  Access Point: -----------
                  Bit Rate=48 Mb/s   Tx-Power=16 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=44/70  Signal level=-66 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:80   Missed beacon:0

        eth0      no wireless extensions.

        lo        no wireless extensions.

    Read the article

  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
    I am experimenting with my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some Docker containers and configured a 6in4 tunnel server and IPv6 forwarding on the Linode:

        # uname -a
        Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
        # ip addr
        .. snipped ..
        48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default
            link/sit 106.185.41.115 peer 1.2.3.4
            inet6 fd00::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::6ab9:2973/64 scope link
               valid_lft forever preferred_lft forever
        13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
            link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
            inet 172.17.42.1/16 scope global docker0
               valid_lft forever preferred_lft forever
            inet6 fc00::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::5484:7aff:fefe:9799/64 scope link
               valid_lft forever preferred_lft forever
        // Docker containers are bridged to docker0

    On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the Docker containers:

        # uname -a
        Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        # ip addr
        .. snipped ..
        16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default
            link/sit 0.0.0.0 peer 106.185.41.115
            inet6 fd00::2/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::a97:302/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::ac19:1/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::c0a8:1f0/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::c0a8:1fa/64 scope link
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
            link/ether *** brd ff:ff:ff:ff:ff:ff
            .. snipped ..
            inet6 fd00:0:1::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::2e0:6fff:fe0e:365e/64 scope link
               valid_lft forever preferred_lft forever

        # ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1   # Add route to docker container
        # ip -6 route
        .. snipped unrelated routes
        fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472
        fd00::/64 dev sit-argo proto kernel metric 256
        fd00:0:1::/64 dev eth0 proto kernel metric 256
        fe80::/64 dev sit-argo proto kernel metric 256

    (Note that the tunnel MTU on my local box differs from the server's; this is intentional, for testing.)

    After adding the host route to the Docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problem until the route expires. After that, I can't get a reply any more. If I run ip -6 route within a few seconds of expiration, the expiration time of the host route is a negative number:

        fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec

    And the output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which of course fails to route it correctly, since the address is not globally routable). After some time, the host route disappears without a trace.

    This problem doesn't happen if I do either of the following things:

    - Set the MTU of the tunnel on my local box to be the same as the server's (1472). The route has no expiration time in either ip -6 route or ip route get in this case.
    - Instead of adding a host route, add a route with a network mask (even /127 works). In this case ip -6 route shows the route without an expiration time; ip route get shows an expiration time, but it is correctly refreshed after expiration.

    Although this problem can easily be worked around, I am curious to know why it happens. Is there an error in my configuration, or is this a kernel bug?
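    For reference, a hedged sketch of the second workaround as a single command (addresses as above; the /127 covers the container address and its even-numbered neighbor):

        # Assumption: a /127 prefix route is refreshed correctly on PMTU expiry,
        # unlike the /128 host route shown above.
        ip -6 route replace fc00::1875:8606:d8c1:8a9c/127 via fd00::1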

    Read the article

  • Can splitting an Access database cause printer and reporting issues?

    - by leeand00
    We have a setup in which our users log into an Access database using MS Access 2003 over an RDP connection. The users log into their own machines first, using a roaming profile. They then click an RDP connection file on the desktop and log into the remote server via RDP, where they use MS Access as the shell; they don't have access to any of the explorer.exe features such as the Start menu. The database they log into is more of an application: it provides functionality for entering data, querying data, and running reports via form-based menus.

    It all worked pretty well until we split the database, as it was nearing 2 GB in size. We moved the payroll data out into a separate partition -- a database with the same name in a different folder, both of them on the server. Only two tables were moved into this new database partition, and they were re-linked as external tables in the new partition.

    While everything appears to be working fine data-wise after the split, there's a new issue when our users log in via RDP and attempt to run reports: often the report will not display, and instead the user sees an error about the click event of the form. At first I didn't even know it was printer-related, as we didn't really change anything related to the printers as far as I knew. Confused about the error, I talked to the guy who previously worked here and who was in charge of splitting the database, and he told me to have the users set their default printer (on their local machines, not on the server) to Microsoft XPS Document Writer, which isn't a physical printer at all. This allowed the users to display their reports, but if they want to print reports they have to go to the File menu and select Print; clicking the print icon on the toolbar takes them to a Save As... dialog, as would be expected when the Microsoft XPS Document Writer is the default printer.

    It's easy to tell whether a user is having the problem, because a quick mouseover of the printer icon will yield a tooltip of (none) when they cannot access their reports, and a tooltip of Microsoft XPS Document Writer when they can view them. If the user's default printer on their local machine is set to anything other than Microsoft XPS Document Writer, then (none) is always displayed when they RDP to the database. The RDP settings are set up to transfer the local printer to the server.

    Telling the users to do this to print has been a band-aid on the whole situation until we find a better solution and an explanation as to why splitting a database would prevent users from printing or even viewing Access database reports. Which is why I'm here asking this question.

    Also of note: all the printers on the network now show up on the server, so when the users do click File->Print to print their reports on a physical printer, they have to look through a huge list of printers to find theirs in the dropdown. So the little band-aid fix we have is not ideal. Previously, only the printers on the user's local machine were displayed here, not all the printers on the network.

    My co-worker thinks this has something to do with permissions; I personally think it has to do with roaming profiles and Group Policies, which is what I've been reading up on. I really don't know how to fix this or how it is related to splitting the database.

    Read the article

  • how to remove obsolete device and network entries? Device manager "uninstall" option has no effect

    - by Gizmo
    I am trying to remove a few "obsolete" things which annoy me (because I like to have everything clean, working, not interfering with each other, fresh, etc.). I tried looking for solutions without any luck, so here I am to ask. The first part is about removing obsolete networks; let me explain by showing the ipconfig output:

        C:\windows\system32>ipconfig

        Windows IP Configuration

        Wireless LAN adapter Local Area Connection* 11:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :

        Wireless LAN adapter Local Area Connection* 9:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :

        Ethernet adapter LAN:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :

        Wireless LAN adapter Wi-Fi:

           Connection-specific DNS Suffix  . : home
           Link-local IPv6 Address . . . . . : fe80::c129:8d57:bbd1:3564%10
           IPv4 Address. . . . . . . . . . . : 192.168.2.1
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.2.254

        Tunnel adapter isatap.home:

           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . : home

        C:\windows\system32>

    Specifically, the first two adapter entries annoy me because the adapters are not visible in the Network Connections menu (even with hidden folder/file visibility set to "show"). And here is the second problem, together with the first one: no matter what I click or do, the Uninstall option has no effect on the multiplexor driver (the bridging stuff, right?). I really want to remove the Wireless LAN adapter Local Area Connection entries and the adapter multiplexor stuff, but it seems impossible. Why is this? How can I remove them anyway?

    Read the article

  • Facebook with RestFB

    - by Trick
    I just began to use this and I already stumbled on some (from my side) strange errors. I am using the RestFB jar. My problem is that I cannot get my session key (this is the start of everything :)).

        FacebookClient facebookClient = new DefaultFacebookClient(API_KEY, SECRET_KEY);
        try {
            String token = facebookClient.execute("auth.createToken", String.class,
                    Parameter.with("null", "null"));
            System.out.println(token);
            String session = facebookClient.execute("auth.getSession", String.class,
                    Parameter.with("auth_token", token));
            System.out.println(session);
        } catch (FacebookException e) {
            e.printStackTrace();
        }

    I get the token correctly. Parameter.with("null", "null") is there because execute() demands at least one parameter, while the token-creation method doesn't expect any. When trying to get the session key, I get the following error:

        com.restfb.FacebookResponseStatusException: Received Facebook error response (code 100): Invalid parameter
            at com.restfb.DefaultFacebookClient.throwFacebookResponseStatusExceptionIfNecessary(DefaultFacebookClient.java:357)
            at com.restfb.DefaultFacebookClient.makeRequest(DefaultFacebookClient.java:320)
            at com.restfb.DefaultFacebookClient.execute(DefaultFacebookClient.java:188)
            at com.restfb.DefaultFacebookClient.execute(DefaultFacebookClient.java:178)

    The documentation for getting a session doesn't say any more parameters are required! Has anybody already tried this jar, or do you have any other solution for Java?

    Read the article

  • How to get current Joomla user with external PHP script

    - by Joey Adams
    I have a couple of PHP scripts used for AJAX queries, but I want them to be able to operate under the umbrella of Joomla's authentication system. Is the following safe? Are there any unnecessary lines?

    joomla-auth.php (located in the same directory as Joomla's index.php):

        <?php
        define( '_JEXEC', 1 );
        define( 'JPATH_BASE', dirname(__FILE__) );
        define( 'DS', DIRECTORY_SEPARATOR );
        require_once ( JPATH_BASE .DS.'includes'.DS.'defines.php' );
        require_once ( JPATH_BASE .DS.'includes'.DS.'framework.php' );

        /* Create the Application */
        $mainframe =& JFactory::getApplication('site');

        /* Make sure we are logged in at all. */
        if (JFactory::getUser()->id == 0)
            die("Access denied: login required.");
        ?>

    test.php:

        <?php
        include 'joomla-auth.php';
        echo 'Logged in as "' . JFactory::getUser()->username . '"';
        /* We then proceed to access things only the user of that name has access to. */
        ?>

    Read the article

  • Django authentication in django nonrel on GAE

    - by tooba
    I'm using the django-nonrel project on a Google App Engine project running locally in development. I've created my own models, and these save and retrieve fine in the datastore. I'm hoping to use django.contrib.auth to provide the user functionality. I can use the shell to create users, and these get assigned an ID. When I create one of my own models that references User, I have to pass in a user ID, as it quite rightly fails otherwise. However, checking via the GAE admin interface, I can't see the User model in the datastore for the users I've created via the shell. Nor can I retrieve the user details from one of my models that references them: calls to mymodel.user.username return nothing. Nor can I log into admin using the username and password I've set up. I can see saved versions of the models I've made in the GAE admin app. I get the impression that users are being created somewhere other than the datastore. Is there something else I need to do to use the standard contrib.auth users with django-nonrel and GAE?
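    One common explanation, though it is an assumption about this setup rather than something the post confirms, is that manage.py shell and the dev server end up writing to different local datastore files, so users created in one are invisible to the other. A minimal sketch to rule that out by round-tripping a user inside the server process itself (the helper name is hypothetical; call it from a throwaway view):

        # a minimal sketch, assuming django.contrib.auth is in INSTALLED_APPS
        # and the django-nonrel datastore backend is active
        from django.contrib.auth.models import User

        def check_user_roundtrip():
            # hypothetical helper: run it in the same process as the dev server
            # so it reads and writes the same datastore the admin viewer shows
            user, created = User.objects.get_or_create(
                username="probe", defaults={"email": "probe@example.com"})
            if created:
                user.set_password("secret")
                user.save()
            # True here, but nothing visible in the GAE admin viewer, strongly
            # suggests the shell and server use separate datastore files
            return User.objects.filter(username="probe").exists()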

    Read the article

  • model not showing up in django admin.

    - by Zayatzz
    Hi. I have created several Django apps and bits of infrastructure for my own fun, and so far everything has been working fine. Now I just created a new project (Django 1.2.1) and have run into trouble from the first moment. I created a new app, game, and a new model, Game. I created admin.py and put the related stuff into it, ran syncdb, and went to check in admin. The model did not show up there. I proceeded to check and double-check, and read through previous similar threads:

    http://stackoverflow.com/questions/1839927/registered-models-do-not-show-up-in-admin
    http://stackoverflow.com/questions/1694259/django-app-not-showing-up-in-admin-interface

    But as far as I can tell, they don't help me either. Perhaps someone else can point this out for me.

    models.py in the game app:

        # -*- coding: utf-8 -*-
        from django.db import models

        class Game(models.Model):
            type = models.IntegerField(blank=False, null=False, default=1)
            teamone = models.CharField(max_length=100, blank=False, null=False)
            teamtwo = models.CharField(max_length=100, blank=False, null=False)
            gametime = models.DateTimeField(blank=False, null=False)

    admin.py in the game app:

        # -*- coding: utf-8 -*-
        from jalka.game.models import Game
        from django.contrib import admin

        class GameAdmin(admin.ModelAdmin):
            list_display = ['type', 'teamone', 'teamtwo', 'gametime']

        admin.site.register(Game, GameAdmin)

    project settings.py:

        MIDDLEWARE_CLASSES = (
            'django.middleware.common.CommonMiddleware',
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.middleware.csrf.CsrfViewMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.contrib.messages.middleware.MessageMiddleware',
        )

        ROOT_URLCONF = 'jalka.urls'

        TEMPLATE_DIRS = (
            "/home/projects/jalka/templates/"
        )

        INSTALLED_APPS = (
            'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'django.contrib.sites',
            'django.contrib.messages',
            'django.contrib.admin',
            'game',
        )

    urls.py:

        from django.conf.urls.defaults import *

        # Uncomment the next two lines to enable the admin:
        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('',
            # Example:
            # (r'^jalka/', include('jalka.foo.urls')),
            (r'^admin/', include(admin.site.urls)),
        )

    Alan.
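    One likely culprit, offered as an assumption since nothing else in the posted config looks wrong: INSTALLED_APPS lists the app as 'game', while admin.py imports from 'jalka.game.models', so Python can load models.py twice under two different module names and the admin registers a class the app registry doesn't recognise. A minimal sketch of admin.py with the import path matching INSTALLED_APPS:

        # -*- coding: utf-8 -*-
        # a minimal sketch; the only change from the posted version is importing
        # via 'game.models' so it matches the 'game' entry in INSTALLED_APPS
        from game.models import Game
        from django.contrib import admin

        class GameAdmin(admin.ModelAdmin):
            list_display = ['type', 'teamone', 'teamtwo', 'gametime']

        admin.site.register(Game, GameAdmin)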

    Read the article

  • Django attribute error: 'module' object has no attribute 'is_usable'

    - by Robert A Henru
    Hi, I got the following error when calling the URL in Django. It was working before; I guess it's related to some accidental change I made, but I have no idea what. Thanks in advance for the help, Robert

        Environment:
        Request Method: GET
        Request URL: http://localhost:8000/time/
        Django Version: 1.2
        Python Version: 2.6.1
        Installed Applications:
        ['django.contrib.auth',
         'django.contrib.contenttypes',
         'django.contrib.sessions',
         'django.contrib.sites',
         'django.contrib.messages',
         'django.contrib.admin',
         'djlearn.books']
        Installed Middleware:
        ('django.middleware.common.CommonMiddleware',
         'django.contrib.sessions.middleware.SessionMiddleware',
         'django.middleware.csrf.CsrfViewMiddleware',
         'django.contrib.auth.middleware.AuthenticationMiddleware',
         'django.contrib.messages.middleware.MessageMiddleware')

        Traceback:
        File "/Library/Python/2.6/site-packages/django/core/handlers/base.py" in get_response
          100. response = callback(request, *callback_args, **callback_kwargs)
        File "/Users/rhenru/Workspace/django/djlearn/src/djlearn/../djlearn/views.py" in current_datetime
          16. return render_to_response('current_datetime.html',{'current_date':now,})
        File "/Library/Python/2.6/site-packages/django/shortcuts/__init__.py" in render_to_response
          20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs)
        File "/Library/Python/2.6/site-packages/django/template/loader.py" in render_to_string
          181. t = get_template(template_name)
        File "/Library/Python/2.6/site-packages/django/template/loader.py" in get_template
          157. template, origin = find_template(template_name)
        File "/Library/Python/2.6/site-packages/django/template/loader.py" in find_template
          128. loader = find_template_loader(loader_name)
        File "/Library/Python/2.6/site-packages/django/template/loader.py" in find_template_loader
          111. if not func.is_usable:

        Exception Type: AttributeError at /time/
        Exception Value: 'module' object has no attribute 'is_usable'
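    The traceback dies in find_template_loader while checking func.is_usable, which in Django 1.2 usually means a TEMPLATE_LOADERS entry resolves to a bare module rather than a loader, for example a stale pre-1.2 setting. The poster's settings are not shown, so this is a guess, but the 1.2-style class-based entries look like this:

        # a minimal sketch of the Django 1.2 settings; the pre-1.2 function-style
        # names ('django.template.loaders.filesystem.load_template_source') were
        # replaced by these class-based loaders, and an entry that resolves to a
        # bare module is exactly what raises this is_usable AttributeError
        TEMPLATE_LOADERS = (
            'django.template.loaders.filesystem.Loader',
            'django.template.loaders.app_directories.Loader',
        )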

    Read the article

  • What causes the Openid error: Received "invalidate_handle" from server

    - by BryanWheelock
    I'm new to OpenID, and I am getting an "invalidate_handle" error that I have no idea how to fix. I'm using django_authopenid.

        [Thu Apr 29 14:13:28 2010] [error] Generated checkid_setup request to https://www.google.com/accounts/o8/ud with association AOxxxxxxxxOX5-V9oDc3-btHhFxzAcccccccccc2RTHgh
        [Thu Apr 29 14:13:29 2010] [error] Error attempting to use stored discovery information: <openid.consumer.consumer.TypeURIMismatch: Required type http://specs.openid.net/auth/2.0/signon not found in ['http://specs.openid.net/auth/2.0/server', 'http://openid.net/srv/ax/1.0', 'http://specs.openid.net/extensions/ui/1.0/mode/popup', 'http://specs.openid.net/extensions/ui/1.0/icon', 'http://specs.openid.net/extensions/pape/1.0'] for endpoint <openid.consumer.discover.OpenIDServiceEndpoint server_url='https://www.google.com/accounts/o8/ud' claimed_id=None local_id=None canonicalID=None used_yadis=True>>
        [Thu Apr 29 14:13:29 2010] [error] Attempting discovery to verify endpoint
        [Thu Apr 29 14:13:29 2010] [error] Performing discovery on https://www.google.com/accounts/o8/id?id=AOxxxxxxxxOX5-V9oDc3-btHhFxzAcccccccccc2RTHgh
        [Thu Apr 29 14:13:29 2010] [error] Received id_res response from https://www.google.com/accounts/o8/ud using association AOxxxxxxxxOX5-V9oDc3-btHhFxzAcccccccccc2RTHgh
        [Thu Apr 29 14:13:29 2010] [error] Using OpenID check_authentication
        [Thu Apr 29 14:13:29 2010] [error] op_endpoint
        [Thu Apr 29 14:13:29 2010] [error] claimed_id
        [Thu Apr 29 14:13:29 2010] [error] identity
        [Thu Apr 29 14:13:29 2010] [error] return_to
        [Thu Apr 29 14:13:29 2010] [error] response_nonce
        [Thu Apr 29 14:13:29 2010] [error] assoc_handle
        [Thu Apr 29 14:13:29 2010] [error] Received "invalidate_handle" from server https://www.google.com/accounts/o8/ud
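    For context, "invalidate_handle" is the provider telling the consumer to drop a stale association, and the log shows the check_authentication fallback already kicking in. If the handles keep going stale, clearing the consumer's association store (or starting from an empty one) forces a fresh association. A minimal python-openid sketch; django_authopenid wires this up internally, so the names here are illustrative only:

        # a minimal sketch, assuming python-openid: a fresh, empty store has no
        # stale association handles for the provider to invalidate, so the
        # consumer negotiates a new association on the next request
        from openid.consumer import consumer
        from openid.store.memstore import MemoryStore

        session = {}              # normally the user's web session dict
        store = MemoryStore()     # illustrative; django_authopenid uses its own store
        c = consumer.Consumer(session, store)
        auth_request = c.begin('https://www.google.com/accounts/o8/id')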

    Read the article

  • Facebook with RestFB (Java)

    - by Trick
    I just began to use this and I already stumbled on some (from my side) strange errors. I am using the RestFB jar. My problem is that I cannot get my session key (this is the start of everything :)).

        FacebookClient facebookClient = new DefaultFacebookClient(API_KEY, SECRET_KEY);
        try {
            String token = facebookClient.execute("auth.createToken", String.class,
                    Parameter.with("null", "null"));
            System.out.println(token);
            String session = facebookClient.execute("auth.getSession", String.class,
                    Parameter.with("auth_token", token));
            System.out.println(session);
        } catch (FacebookException e) {
            e.printStackTrace();
        }

    I get the token correctly. Parameter.with("null", "null") is there because execute() demands at least one parameter, while the token-creation method doesn't expect any. When trying to get the session key, I get the following error:

        com.restfb.FacebookResponseStatusException: Received Facebook error response (code 100): Invalid parameter
            at com.restfb.DefaultFacebookClient.throwFacebookResponseStatusExceptionIfNecessary(DefaultFacebookClient.java:357)
            at com.restfb.DefaultFacebookClient.makeRequest(DefaultFacebookClient.java:320)
            at com.restfb.DefaultFacebookClient.execute(DefaultFacebookClient.java:188)
            at com.restfb.DefaultFacebookClient.execute(DefaultFacebookClient.java:178)

    The documentation for getting a session doesn't say any more parameters are required! Has anybody already tried this jar, or do you have any other solution for Java? EDIT: Does the application need to be "published" in search to be able to do this?

    Read the article

  • how to synchronize application email to server email using java mail in android

    - by Akash
    I want changes in my email application to synchronize with the server's mailbox. For example: when I read an unread message in the email application, the server should automatically mark that mail as read as well. My email application uses the JavaMail jar, activation.jar and one additional jar, and the following code is used to connect the email application to the mail server:

        Properties props = System.getProperties();
        props.setProperty("mail.store.protocol", "imaps");
        props.put("mail.smtp.starttls.enable", "true");

        Authenticator auth = new Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("USEREMAILID", "PASSWORD");
            }
        };

        sessioned = Session.getDefaultInstance(props, auth);
        store = sessioned.getStore("imaps");
        // store.connect("imap.next.mail.yahoo.com","[email protected]","123456789");
        // note: an IMAPS store should connect to the IMAP host (imap.gmail.com), not the SMTP one
        store.connect("smtp.gmail.com", "USEREMAILID", "PASSWORD");
        inbox = store.getFolder("inbox");
        // READ_ONLY means flag changes are never written back to the server;
        // open with Folder.READ_WRITE to let \Seen updates propagate
        inbox.open(Folder.READ_ONLY);
        FlagTerm ft = new FlagTerm(new Flags(Flags.Flag.SEEN), false);
        UNReadmessages = inbox.search(ft);
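    The mechanism is the same in any IMAP client: read state lives in the server-side \Seen flag, so the folder must be opened read-write and the flag set on the server. Purely to show the protocol-level flow, here is the same round-trip sketched with Python's imaplib (host and credentials are placeholders):

        # a minimal sketch with imaplib: the \Seen flag is stored on the IMAP
        # server, so setting it there updates the read state for every client
        import imaplib

        conn = imaplib.IMAP4_SSL('imap.gmail.com')   # IMAP host, not the SMTP one
        conn.login('USEREMAILID', 'PASSWORD')        # placeholder credentials
        conn.select('INBOX')                         # read-write by default
        typ, data = conn.search(None, 'UNSEEN')      # same query as FlagTerm(SEEN, false)
        for num in data[0].split():
            conn.store(num, '+FLAGS', r'\Seen')      # mark read on the server
        conn.logout()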

    Read the article

  • Multiple Jira instances on a single Tomcat 6 server?

    - by davidhayes
    I have a feeling this is a stupid question, but I can't find the answer anywhere... I need to deploy two Jira instances on a single Tomcat server, and I can't figure out how to pass in the jira.home property. The documentation says I need to:

    Add a web context property called 'jira.home' — this property is set in different files depending on your application server. For example, for Tomcat (and therefore for JIRA Standalone), you will need to configure the server.xml file. For other application servers you may need to configure the web.xml file, or set 'Context parameter' options on the deployment UI of the application server, etc. Note that if you have specified a JIRA home in jira-application.properties (i.e. the recommended method), it will override your web context property.

    I was hoping something like this would work:

        <Context jira.home="d:/jira/data" path=""
                 docBase="D:\Jira\atlassian-jira-enterprise-4.1\dist-tomcat\tomcat-6\atlassian-jira-4.1.war"
                 debug="0">
            <Resource name="jdbc/JiraDS" auth="Container" type="javax.sql.DataSource"
                      username="sa" password="*****"
                      driverClassName="net.sourceforge.jtds.jdbc.Driver"
                      url="jdbc:jtds:sqlserver://*****:1433/jira41_519;user=****;password=****" />
            <Resource name="UserTransaction" auth="Container"
                      type="javax.transaction.UserTransaction"
                      factory="org.objectweb.jotm.UserTransactionFactory"
                      jotm.timeout="60"/>
            <Manager pathname=""/>
        </Context>

    Any ideas?

    Read the article

< Previous Page | 192 193 194 195 196 197 198 199 200 201 202 203  | Next Page >