Search Results

Search found 12281 results on 492 pages for 'ip blocking'.

Page 312 of 492

  • Ubuntu 12.04 open port 80 inside WLAN

    - by Eduard
    I have an nginx server running on Ubuntu 12.04 that serves HTTP on port 80 and HTTPS on port 443. Everything works fine if I access it from the same computer via localhost, 127.0.0.1, or the local IP 192.168.0.11. If I try to access the server from another computer in the same VLAN, it does not work for HTTP; it does work for HTTPS. I changed my nginx configuration to also listen on port 8000 for HTTP; I can then access HTTP from the other computer in the same VLAN via "http://192.168.0.11:8000". I also have a web server running on port 80 on a Windows machine and can access it from another device in the same VLAN, so the router is not blocking incoming HTTP traffic. The nginx process runs as root. I used tcpdump and I can see packets arriving at the Ubuntu machine (192.168.0.16.49735 to 192.168.0.11.80) and some response being sent back (192.168.0.11.80 to 192.168.0.16.49735), though I do not know what the response is. No request reaches the nginx web server (I have checked the access log). My iptables rules are empty. I have tried for a long time to find a solution to this without success; it has now become a matter of happiness or bitterness :).
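    A minimal nginx sketch of the first thing worth checking in a situation like this, assuming a standard sites-enabled layout (the file path, server_name, and document root below are placeholders, not the poster's actual config): a listen directive tied to 127.0.0.1 is reachable only from the machine itself, while a plain listen 80 binds all interfaces and is reachable from the LAN.

        # /etc/nginx/sites-enabled/example  (hypothetical path)
        server {
            # listen 127.0.0.1:80;    # only reachable from the machine itself
            listen 80;                # binds 0.0.0.0:80, reachable from the LAN
            server_name 192.168.0.11;

            location / {
                root /usr/share/nginx/html;   # placeholder document root
            }
        }

    Running sudo netstat -tlnp | grep ':80 ' on the Ubuntu box shows which address nginx actually bound to port 80, which quickly confirms or rules this out.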

    Read the article

  • Processing: popen() call to start subprocess?

    - by Mark Harrison
    Here's an example of opening and reading data from a TCP socket. Is there a popen() call or equivalent that can start a child process and read its output?

        import processing.net.*;

        Client c;
        String input;

        void setup() {
          c = new Client(this, "127.0.0.1", 12345);  // replace with your server's IP and port
        }

        void draw() {
          if (c.available() > 0) {
            input = c.readString();
          }
        }

    (Example adapted from http://www.processing.org/learning/libraries/sharedcanvasclient.html)
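    There is no popen() in Processing itself, but a sketch runs on the JVM, so the standard java.lang.ProcessBuilder can start a child process and expose its output as a stream. A minimal sketch under that assumption (the ls -l command is only an example):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        void launchAndRead() {
          try {
            ProcessBuilder pb = new ProcessBuilder("ls", "-l");  // example command
            pb.redirectErrorStream(true);                        // fold stderr into stdout
            Process p = pb.start();

            BufferedReader reader =
                new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
              println(line);                                     // Processing's console output
            }
            p.waitFor();                                         // wait for the child to exit
          } catch (Exception e) {
            e.printStackTrace();
          }
        }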

    Read the article

  • powershell function output to variable

    - by tommy
    I have a function in PowerShell 2.0 named getip which gets the IP address(es) of a remote system:

        function getip {
            $strComputer = "computername"
            $colItems = GWMI -cl "Win32_NetworkAdapterConfiguration" -name "root\CimV2" -comp $strComputer -filter "IpEnabled = TRUE"
            ForEach ($objItem in $colItems) { Write-Host $objItem.IpAddress }
        }

    The problem I'm having is with getting the output of this function into a variable. The following don't work:

        $ipaddress = (getip)
        $ipaddress = getip
        set-variable -name ipaddress -value (getip)

    Any help with this problem would be greatly appreciated.
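    A minimal sketch of the usual fix, keeping the rest of the function as is: Write-Host writes directly to the console and bypasses the pipeline, so an assignment captures nothing; emitting the value itself puts it on the pipeline where it can be assigned.

        function getip {
            $strComputer = "computername"
            $colItems = Get-WmiObject -Class "Win32_NetworkAdapterConfiguration" `
                                      -Namespace "root\CimV2" `
                                      -ComputerName $strComputer `
                                      -Filter "IpEnabled = TRUE"
            foreach ($objItem in $colItems) {
                $objItem.IpAddress          # emit to the pipeline instead of Write-Host
            }
        }

        $ipaddress = getip                  # now receives the address(es)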

    Read the article

  • Getting a visitors Facebook page

    - by Mr Carl
    Hey guys, this is more of a question out of curiosity, but is it possible to get somebody's Facebook page after they have visited your site? I was thinking a chain of lookups could be used, starting with an IP address, to eventually get a name and from that the person's Facebook page. I have also heard you can read somebody's web history; is this true?

    Read the article

  • Apache Mod SVN Access Forbidden

    - by Cerin
    How do you resolve the error svn: access to '/repos/!svn/vcc/default' forbidden? I recently upgraded a Fedora 13 server to 16, and now I'm trying to debug an access error with a Subversion server running under Apache with mod_dav_svn. Running:

        svn ls http://myserver/repos/myproject/trunk

    lists the correct files. But when I go to commit, I get the error:

        svn: access to '/repos/!svn/vcc/default' forbidden

    My Apache virtual host for svn is:

        <VirtualHost *:80>
            ServerName svn.mydomain.com
            ServerAlias svn
            DocumentRoot "/var/www/html"
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory "/var/www/html">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <Location /repos>
                Order allow,deny
                Allow from all
                DAV svn
                SVNPath /var/svn/repos
                SVNAutoversioning On

                # Authenticate with Kerberos
                AuthType Kerberos
                AuthName "Subversion Repository"
                KrbAuthRealms mydomain.com
                Krb5KeyTab /etc/httpd/conf/krb5.HTTP.keytab

                # Get people from LDAP
                AuthLDAPUrl ldap://ldap.mydomain.com/ou=people,dc=mydomain,dc=corp?uid

                # For any operations other than these, require an authenticated user.
                <LimitExcept GET PROPFIND OPTIONS REPORT>
                    Require valid-user
                </LimitExcept>
            </Location>
        </VirtualHost>

    What's causing this error? EDIT: In my /var/log/httpd/error_log I'm seeing a lot of these:

        [Fri Jun 22 13:22:51 2012] [error] [client 10.157.10.144] ModSecurity: Warning. Operator LT matched 20 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/base_rules/modsecurity_crs_60_correlation.conf"] [line "31"] [msg "Inbound Anomaly Score (Total Inbound Score: 15, SQLi=, XSS=): Method is not allowed by policy"] [hostname "svn.mydomain.com"] [uri "/repos/!svn/act/0510a2b7-9bbe-4f8c-b928-406f6ac38ff2"] [unique_id "T@Sp638DCAEBBCyGfioAAABK"]

    I'm not entirely sure how to read this, but I'm interpreting "Method is not allowed by policy" as meaning that some security-related Apache module might be blocking access. How do I change this?
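    The log line does point at mod_security: Subversion commits use WebDAV methods (MERGE, MKACTIVITY, CHECKOUT, PROPPATCH and so on) that the core rule set's HTTP policy rules treat as disallowed. A hedged sketch of one way to relax mod_security just for the repository location (the directives are standard mod_security2; the rule ID shown is only an example, so take the real ID from the ModSecurity audit or error log):

        <Location /repos>
            <IfModule security2_module>
                # simplest: turn the rule engine off for the repository only
                SecRuleEngine Off

                # or, more narrowly, drop just the rule that rejects the method
                # (example ID -- use the id reported in the ModSecurity log)
                # SecRuleRemoveById 960032
            </IfModule>
        </Location>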

    Read the article

  • closing MQ connection

    - by OakvilleWork
    Good afternoon, I wrote a project to get park queue info from IBM MQ, but it has been producing an error when attempting to close the connection. It is written in Java. Under Application in Event Viewer on the MQ machine it displays two errors. They are: “Channel program ended abnormally. Channel program ‘system.def.surconn’ ended abnormally. Look at previous error messages for channel program ‘system.def.surconn’ in the error files to determine the cause of the failure.” The other message states: “Error on receive from host rnanaj (10.10.12.34). An error occurred receiving data from rnanaj (10.10.12.34) over tcp/ip. This may be due to a communications failure. The return code from tcp/ip recv() call was 10054 (X’2746’). Record these values.” This must be something in how I connect or close the connection. Below is my code to connect and close; any ideas?

    Connect:

        _logger.info("Start");
        File outputFile = new File(System.getProperty("PROJECT_HOME"),
                "run/" + this.getClass().getSimpleName() + "." + System.getProperty("qmgr") + ".txt");
        FileUtils.mkdirs(outputFile.getParentFile());

        Connection jmsConn = null;
        Session jmsSession = null;
        QueueBrowser queueBrowser = null;
        BufferedWriter commandsBw = null;
        try {
            // get queue connection
            MQConnectionFactory MQConn = new MQConnectionFactory();
            MQConn.setHostName(System.getProperty("host"));
            MQConn.setPort(Integer.valueOf(System.getProperty("port")));
            MQConn.setQueueManager(System.getProperty("qmgr"));
            MQConn.setChannel("SYSTEM.DEF.SVRCONN");
            MQConn.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP);
            jmsConn = (Connection) MQConn.createConnection();
            jmsSession = jmsConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue jmsQueue = jmsSession.createQueue("PARK");

            // browse thru messages
            queueBrowser = jmsSession.createBrowser(jmsQueue);
            Enumeration msgEnum = queueBrowser.getEnumeration();
            commandsBw = new BufferedWriter(new FileWriter(outputFile));
            String line = "DateTime\tMsgID\tOrigMsgID\tCorrelationID\tComputerName\tSubsystem\tDispatcherName\tProcessor\tJobID\tErrorMsg";
            commandsBw.write(line);
            commandsBw.newLine();
            while (msgEnum.hasMoreElements()) {
                Message message = (Message) msgEnum.nextElement();
                line = dateFormatter.format(new Date(message.getJMSTimestamp())) + "\t"
                        + message.getJMSMessageID() + "\t"
                        + message.getStringProperty("pkd_orig_jms_msg_id") + "\t"
                        + message.getJMSCorrelationID() + "\t"
                        + message.getStringProperty("pkd_computer_name") + "\t"
                        + message.getStringProperty("pkd_subsystem") + "\t"
                        + message.getStringProperty("pkd_dispatcher_name") + "\t"
                        + message.getStringProperty("pkd_processor") + "\t"
                        + message.getStringProperty("pkd_job_id") + "\t"
                        + message.getStringProperty("pkd_sysex_msg");
                _logger.info(line);
                commandsBw.write(line);
                commandsBw.newLine();
            }
        }

    Close:

        finally {
            IO.close(commandsBw);
            if (queueBrowser != null) { try { queueBrowser.close(); } catch (Exception ignore) {} }
            if (jmsSession != null)   { try { jmsSession.close(); }   catch (Exception ignore) {} }
            if (jmsConn != null)      { try { jmsConn.stop(); }       catch (Exception ignore) {} }
        }
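    One detail in the cleanup block that lines up with the "channel ended abnormally" / recv() 10054 events: the connection is stop()ped but never close()d, so the MQ channel is only torn down when the JVM exits instead of being shut down cleanly. A minimal sketch of the finally block under that assumption (everything else unchanged):

        finally {
            IO.close(commandsBw);
            if (queueBrowser != null) { try { queueBrowser.close(); } catch (Exception ignore) {} }
            if (jmsSession != null)   { try { jmsSession.close(); }   catch (Exception ignore) {} }
            if (jmsConn != null) {
                try { jmsConn.stop(); }  catch (Exception ignore) {}
                try { jmsConn.close(); } catch (Exception ignore) {}  // close() ends the client channel cleanly
            }
        }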

    Read the article

  • How can I 'git clone' from another machine

    - by hap497
    Hi, on one machine (IP 192.168.1.2), I create a git repository by:

        $ cd /home/hap/working
        $ git init
        $ (add some files)
        $ git add .
        $ git commit -m 'Initial commit'

    I have another machine on the same Wi-Fi network. How can I clone the repository from the other machine? Thank you
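    A minimal sketch of the simplest route, assuming an SSH server is running on 192.168.1.2 and the account there is named hap (both assumptions):

        # run on the second machine
        git clone ssh://[email protected]/home/hap/working
        # or the equivalent scp-style form
        git clone [email protected]:/home/hap/working

    Without SSH, running git daemon --base-path=/home/hap --export-all on the first machine and then git clone git://192.168.1.2/working on the second also works, but the git protocol has no authentication.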

    Read the article

  • .net: does anyone know a free library to download nntp messages

    - by stighy
    Hi folks, I'm using IP*Works to download newsgroup messages and insert them into a database. IP*Works is comfortable to use because it automatically splits the NNTP data returned into different objects (or variables), so I already have "author", "date", "topic", and "message" split up for my use. However, IP*Works isn't free and it costs a lot for my use, so I'm asking for a component or code snippet I could use to download and manage NNTP messages. Thanks in advance and regards!

    Read the article

  • Cannot Call WordPress Plugin Files Under wp-content

    - by Volomike
    I have a client who has many blog customers. Each of these WordPress blogs calls a plugin that provides a product link. The way that link is composed looks like this:

        {website}/wp-content/plugins/prodx/product?id=432320

    This works fine on all blogs except two. On those two, when you try to call the URL, you get a 404. So, I disabled all plugins except prodx and reverted the theme to default (Kubrick), thinking perhaps a plugin intercept with the add_action() API was doing this, such as intercepting URLs and redirecting them. However, this did not help. So, I upgraded WordPress to the latest version. Again, didn't fix it. So, I checked permissions, comparing with a blog that worked just fine. Again, didn't fix it. So I replaced the .htaccess, using one from a working blog. Again, didn't fix it. So I replaced all the files using some from a working blog that was identical to this one, and then restored the wp-config.php file so that it talked to the right blog database. Again, didn't fix it. Again I checked permissions meticulously, comparing to a perfectly working blog. Again, didn't fix it. So, I created a test.php that looks like so:

        <?php
        print_r($_GET);
        echo "hello world";

    I then copied it into another plugin folder and used my browser to get to it -- again, 404. So I copied it into the root of wp-content/plugins and tried to call it there -- again, 404. So I copied it into wp-content -- again, 404. Last, I copied it into the root of the WordPress blog website, and this time, it worked! Doesn't make sense. I started to think that perhaps something was going on with /etc/httpd/conf/httpd.conf for this customer, but the only thing I saw different in it for this customer was the IP address, which differed from the customer's blog that worked. Each customer gets their own IP in this environment my client has built. My client's sysop is baffled too. What do you think is going on? Is there something wrong in the WP database for this customer? Is there something wrong in httpd.conf?
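    Since a plain test.php 404s anywhere under wp-content but works in the document root, a reasonable next step is to look for a per-customer Apache or security-module rule scoped to wp-content rather than a WordPress problem. A hedged diagnostic sketch (the paths are typical Fedora/CentOS defaults and the hostname is a placeholder, so adjust both to this server):

        # look for directives that single out wp-content or the plugins path
        grep -R -n -i -E "wp-content|plugins|SecRule|RewriteRule" /etc/httpd/conf /etc/httpd/conf.d

        # watch the error log for that vhost while requesting the failing URL
        tail -f /var/log/httpd/error_log &
        curl -i http://customer-blog.example.com/wp-content/test.php   # placeholder hostname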

    Read the article

  • Redirect to a web server depending on location with nginx

    - by fceruti
    I'm working on a web site that has to be reachable from many countries under the same domain. I'd like to know how I can receive a request with nginx (or any other static file server) and send it to different web servers depending on the IP's location. I mean, what is the point of having multiple DB machines in country A and country B if the server that serves you the page is chosen by round robin? Maybe there's another solution to my problem, and I would be very happy if someone could explain it to me.
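    A minimal nginx sketch of the usual approach, assuming nginx is built with the ngx_http_geoip_module and a MaxMind country database is installed (both assumptions; the IPs and country codes below are placeholders):

        http {
            geoip_country /usr/share/GeoIP/GeoIP.dat;

            # choose a backend based on the visitor's country code
            map $geoip_country_code $backend {
                default  http://192.0.2.10;   # main cluster
                AR       http://192.0.2.20;   # cluster closer to country A
                BR       http://192.0.2.20;
            }

            server {
                listen 80;
                location / {
                    proxy_pass $backend;
                }
            }
        }

    Instead of proxying, the map value could just as well be a hostname used in a return 302 redirect, if each country is meant to land on its own subdomain.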

    Read the article

  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines. We are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate.

    What I've done: I have configured IIS to require SSL and require (I also tried accept) certificates under SSL Settings. I have created the https binding and set it to the proper server certificate. I've installed all the root and intermediate chain certificates for the soft certificates properly in the Current User and Local Machine stores.

    The problem: When I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the certificate offered is one created by my company that would be invalid for use in the application. I am not given the soft certificates that I have installed using MMC and IE. We are able to use the soft certs from our development machines against the Windows 2008 servers that host the application.

    What I did: I have attempted to copy the root CA to every folder location in the Current User and Local Machine account stores that the company certificate's root is in.

    My questions: Could I be mishandling the certs anywhere else? Could there be a local/group policy that could be blocking the other certs from use? What (if anything) should be done differently on Windows 7 compared with 2008 in regards to IIS? Thanks for your help.
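    Not an answer to the group-policy question, but a quick check that often narrows this down: the browser only offers client certificates that sit in the current user's Personal store with their private keys and whose chains build successfully. A PowerShell sketch for listing what is actually available (standard certificate-provider cmdlets, nothing specific to this application):

        # client certificates the browser could offer, with issuer and key info
        Get-ChildItem Cert:\CurrentUser\My |
            Format-List Subject, Issuer, NotAfter, HasPrivateKey

        # confirm the soft certificates' root CAs are present in the trusted roots
        Get-ChildItem Cert:\CurrentUser\Root, Cert:\LocalMachine\Root |
            Format-Table Subject, Thumbprint -AutoSize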

    Read the article

  • Selecting data from a table based on date

    - by mehdi
    I have a database table like this:

        +-------+----------------+------------+
        | id    | ip             | date       |
        +-------+----------------+------------+
        | 505   | 192.168.100.1  | 2010-04-03 |
        | 252   | 192.168.100.5  | 2010-03-03 |
        | 426   | 192.168.100.6  | 2010-03-03 |
        | 201   | 192.168.100.7  | 2010-04-03 |
        | 211   | 192.168.100.10 | 2010-04-03 |
        +-------+----------------+------------+

    How can I retrieve data from this table where the month = 03? How do I write SQL to do that? Something like: select * from table where month = 03.
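    A minimal sketch, assuming a MySQL-style dialect and a placeholder table name of visits:

        -- matches March of any year
        SELECT * FROM visits WHERE MONTH(`date`) = 3;

        -- standard-SQL alternative
        SELECT * FROM visits WHERE EXTRACT(MONTH FROM `date`) = 3;

        -- if only March 2010 is meant, a range predicate also lets an index on date be used
        SELECT * FROM visits WHERE `date` >= '2010-03-01' AND `date` < '2010-04-01';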

    Read the article

  • Exchange 2010 Transport rules stepping on each other

    - by TopHat
    I have a group of users that I have to restrict email access for, and so far using Exchange transport rules has worked very well. The problem I am having is that Rule 0 is supposed to Bcc the email to a review mailbox but otherwise not change anything, and Rule 9 is supposed to block the email and throw a custom NDR to tell the user why they were blocked. Here are my results in practice, however:

        If Rule 0 is enabled and Rule 9 is enabled, then only Rule 9 functions
        If Rule 0 is disabled and Rule 9 is enabled, then Rule 9 functions
        If Rule 0 is enabled and Rule 9 is disabled, then Rule 0 functions

    This is after the Transport service has been restarted (multiple times, actually). I have other rule pairs that work correctly; none of those are overlapping rule sets, however:

        - copy email going to an address outside the domain and then block
        - copy email coming in from outside and then block

    Here is the rule for copying internal emails (Rule 0):

        Apply rule to messages from a member of
        Blind carbon copy (Bcc) the message to
        except when the message is sent to a member of or [email protected]

    Here is the rule to block the same email (Rule 9):

        Apply rule to messages from a member of
        send 'Email to non-supervisors or managers has been prohibited. Please contact your supervisor for more information.' to sender with 5.7.420
        except when the message is sent to , [email protected],

    The distribution group used for membership in these rules is used for the other blocking and copying rules and works as expected. Is there something I missed in this setup? All of the copy rules are at the front of the transport rule group and the actual blocking rules are at the end of the queue, if that makes a difference. Any thoughts as to why the email doesn't get copied when it gets blocked?
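    A small Exchange Management Shell sketch for confirming how the rules are actually ordered and enabled, which is usually the first thing to rule out when one rule appears to swallow another (the cmdlets are standard Exchange 2010; the rule name shown is a placeholder):

        # show evaluation order and state of all transport rules
        Get-TransportRule | Sort-Object Priority | Format-Table Priority, State, Name -AutoSize

        # move the copy rule ahead of the block rule if needed (name is a placeholder)
        Set-TransportRule -Identity "Copy restricted senders to review mailbox" -Priority 0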

    Read the article

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 and Windows 2003, and have had a few events where largish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait type of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, so the source database files and destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic approach for this situation?
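    A small T-SQL sketch for the next time it stalls, to see whether the backup is genuinely blocked by another session or simply waiting on I/O (all objects are standard SQL Server 2008 DMVs):

        SELECT r.session_id,
               r.command,
               r.status,
               r.wait_type,
               r.wait_time,
               r.percent_complete,
               r.blocking_session_id
        FROM   sys.dm_exec_requests AS r
        WHERE  r.command LIKE 'BACKUP%';

    A non-zero blocking_session_id points at another process holding the file or resource; a wait_type of BACKUPIO with no blocker points back at the shared RAID-1 volume.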

    Read the article

  • How to add Active Directory with SharePoint in the virtual server?

    - by pointlesspolitics
    I have a main server running Windows Server 2008 with Active Directory installed. Additionally, I have created a Hyper-V virtual server with MOSS 2007 installed, using a dynamic IP address. I can access the SharePoint site as an intranet. How can I give all the Active Directory users access, and bring in their profiles, in MOSS without adding them manually? If I am missing any information you need, please let me know.

    Read the article

  • Having issues with setting up Inbound Email in SP2010

    - by Bill Daugherty
    I am having issues with setting up inbound email with SharePoint 2010. I have enabled the settings in Central Administration for incoming email, set up an MX record, added the IP to my Exchange server, and then created a new document library in SharePoint, but I am still not seeing the "Incoming e-mail settings" option under Communications on the document library's settings screen. Can someone let me know what I may be doing wrong or missing?
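    A hedged first check: incoming e-mail in SharePoint 2010 depends on the Windows SMTP service being installed on the SharePoint box and on the Incoming E-Mail service instance being online there, so it is worth confirming both before digging further. A PowerShell sketch for Windows Server 2008 R2 / SharePoint 2010 (standard cmdlets, nothing site-specific):

        # is the IIS 6-style SMTP server feature installed on this box?
        Import-Module ServerManager
        Get-WindowsFeature SMTP-Server

        # is the SharePoint incoming e-mail service instance online here?
        Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
        Get-SPServiceInstance | Where-Object { $_.TypeName -like "*Incoming E-Mail*" } |
            Format-Table TypeName, Server, Status -AutoSize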

    Read the article

  • JBoss https on port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances where one runs on 8080, the other on 8081. We need to have HTTPS enabled for the 8081 server, firstly we tried enabling https on the 8080 port instance by generating the keystore and editing the server.xml and it successfully worked. However when we tried the same thing for 8081 it did not, note that we removed https for the 8080 server first before enabling it for 8081. This is what was used for both server.xml for 8080 and 8081. The only difference was that the port was changed from 8080 to 8081 when trying to enable https for 8081 port instance. What am I doing wrong and what needs to be changed? NOTE : When I meant enabled for 8080 I meant when you visit https:// URL:8484 you will actually be visiting the 8080 port instance. However when ssl is enabled for 8081 and I visit https:// URL:8484 I get that the web page is unavailable. COMMENTLESS VERSION <Server> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <Listener className="org.apache.catalina.core.JasperListener" /> <Service name="jboss.web"> <!-- https --> <Connector port="8080" address="${jboss.bind.address}" maxThreads="350" maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" compression="on" ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" address="${jboss.bind.address}" keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore" keystorePass="aaaaaa" truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore" truststorePass="aaaaaa" /> <!-- https1 --> <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3" emptySessionPath="true" enableLookups="false" redirectPort="8443" /> <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1"> <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false" configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" > <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" /> <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve" cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager" transactionManagerObjectName="jboss:service=TransactionManager" /> </Host> </Engine> </Service> </Server> WITH COMMENTS VERSION <Server> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Use a custom version of StandardService that allows the connectors to be started independent of the normal lifecycle start to allow web apps to be deployed before starting the connectors. --> <Service name="jboss.web"> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. 
Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" address="${jboss.bind.address}" maxThreads="350" maxHttpHeaderSize="8192" emptySessionPath="true" protocol="HTTP/1.1" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" compression="on" ompressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" keystoreFile="${jboss.server.home.dir}/conf/zara.keystore" keystorePass="zara2010" clientAuth="false" sslProtocol="TLS" compression="on" /> --> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" address="${jboss.bind.address}" keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore" keystorePass="aaaaaa" truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore" truststorePass="aaaaaa" /> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3" emptySessionPath="true" enableLookups="false" redirectPort="8443" /> <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1"> <!-- The JAAS based authentication and authorization realm implementation that is compatible with the jboss 3.2.x realm implementation. - certificatePrincipal : the class name of the org.jboss.security.auth.certs.CertificatePrincipal impl used for mapping X509[] cert chains to a Princpal. - allRolesMode : how to handle an auth-constraint with a role-name=*, one of strict, authOnly, strictAuthOnly + strict = Use the strict servlet spec interpretation which requires that the user have one of the web-app/security-role/role-name + authOnly = Allow any authenticated user + strictAuthOnly = Allow any authenticated user only if there are no web-app/security-roles --> <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> <!-- A subclass of JBossSecurityMgrRealm that uses the authentication behavior of JBossSecurityMgrRealm, but overrides the authorization checks to use JACC permissions with the current java.security.Policy to determine authorized access. - allRolesMode : how to handle an auth-constraint with a role-name=*, one of strict, authOnly, strictAuthOnly + strict = Use the strict servlet spec interpretation which requires that the user have one of the web-app/security-role/role-name + authOnly = Allow any authenticated user + strictAuthOnly = Allow any authenticated user only if there are no web-app/security-roles <Realm className="org.jboss.web.tomcat.security.JaccAuthorizationRealm" certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping" allRolesMode="authOnly" /> --> <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false" configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" > <!-- Uncomment to enable request dumper. 
This Valve "logs interesting contents from the specified Request (before processing) and the corresponding Response (after processing). It is especially useful in debugging problems related to headers and cookies." --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve" /> --> <!-- Access logger --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" prefix="localhost_access_log." suffix=".log" pattern="common" directory="${jboss.server.log.dir}" resolveHosts="false" /> --> <!-- Uncomment to enable single sign-on across web apps deployed to this host. Does not provide SSO across a cluster. If this valve is used, do not use the JBoss ClusteredSingleSignOn valve shown below. A new configuration attribute is available beginning with release 4.0.4: cookieDomain configures the domain to which the SSO cookie will be scoped (i.e. the set of hosts to which the cookie will be presented). By default the cookie is scoped to "/", meaning the host that presented it. Set cookieDomain to a wider domain (e.g. "xyz.com") to allow an SSO to span more than one hostname. --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Uncomment to enable single sign-on across web apps deployed to this host AND to all other hosts in the cluster. If this valve is used, do not use the standard Tomcat SingleSignOn valve shown above. Valve uses a JBossCache instance to support SSO credential caching and replication across the cluster. The JBossCache instance must be configured separately. By default, the valve shares a JBossCache with the service that supports HttpSession replication. See the "jboss-web-cluster-service.xml" file in the server/all/deploy directory for cache configuration details. Besides the attributes supported by the standard Tomcat SingleSignOn valve (see the Tomcat docs), this version also supports the following attributes: cookieDomain see above treeCacheName JMX ObjectName of the JBossCache MBean used to support credential caching and replication across the cluster. If not set, the default value is "jboss.cache:service=TomcatClusteringCache", the standard ObjectName of the JBossCache MBean used to support session replication. --> <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" /> <!-- Check for unclosed connections and transaction terminated checks in servlets/jsps. Important: The dependency on the CachedConnectionManager in META-INF/jboss-service.xml must be uncommented, too --> <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve" cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager" transactionManagerObjectName="jboss:service=TransactionManager" /> </Host> </Engine> </Service> </Server>

    Read the article
