Search Results

Search found 527 results on 22 pages for 'a ha'.


  • /etc/security/limits.conf for setting program limits in Linux

    - by Flavius Akerele
    I have the following inside /etc/security/limits.conf (I have specified root separately because * will not include it):

        user2 - core      unlimited
        *     - core      0
        root  - core      0
        *     - rss       512000
        root  - rss       512000
        *     - nproc     100
        root  - nproc     100
        *     - maxlogins 1
        root  - maxlogins 1

    I run a program as user2 (./programname), but /proc/3498/limits says cores are disabled:

        Limit                     Soft Limit           Hard Limit           Units
        Max cpu time              unlimited            unlimited            seconds
        Max file size             unlimited            unlimited            bytes
        Max data size             unlimited            unlimited            bytes
        Max stack size            8388608              unlimited            bytes
        Max core file size        0                    0                    bytes
        Max resident set          524288000            524288000            bytes
        Max processes             100                  100                  processes
        Max open files            1024                 1024                 files
        Max locked memory         65536                65536                bytes
        Max address space         unlimited            unlimited            bytes
        Max file locks            unlimited            unlimited            locks
        Max pending signals       14001                14001                signals
        Max msgqueue size         819200               819200               bytes
        Max nice priority         0                    0
        Max realtime priority     0                    0
        Max realtime timeout      unlimited            unlimited            us

    Both ulimit -Sa and ulimit -Ha output that cores are disabled:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 14001
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) 512000
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) unlimited
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 100
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Why are cores disabled?

    Read the article

  • Multimaster Keepalived Configuration (Virtual IP with Load Balancing)

    - by Rad Akefirad
    Here are the requirements: 1) high availability and 2) load balancing.

    First configuration:
    1. Two Linux servers have been configured with one static IP each: 10.17.243.11 and 10.17.243.12.
    2. Keepalived has been installed and configured with one VRRP instance to provide one virtual IP (10.17.243.10 as the VIP, with 10.17.243.11 as master and 10.17.243.12 as backup).
    3. Everything works fine. The VIP is assigned to the master server (10.17.243.11) as long as it is up and running. As soon as it goes down, the VIP is assigned to the backup server (10.17.243.12).
    4. The problem here is that all communication goes to the master server.

    Second configuration:
    1. I found an active-active configuration for Keepalived, which is possible by defining more than one VRRP instance, so that both servers have two IPs (real 10.17.243.11 and virtual 10.17.243.10 for server #1; real 10.17.243.12 and virtual 10.17.243.20 for server #2).
    2. Everything works fine. We have two VIPs which are accessible (HA), but all communication coming to each IP still goes to one single machine (either server #1 or #2, depending on the IP). I found some tricks on the DNS side to overcome this limitation, but they're not acceptable in our case.

    Question: is there any way to have one virtual IP which is assigned to both servers, so that both servers handle some part of the workload (like what we do in web server load balancing), using either Keepalived or some other tool? Thanks in advance.

    Read the article

  • Proxmox drbd configuration split brain [on hold]

    - by AudioDan
    I am planning a proxmox HA configuration with two Dell R710 machines (dual 6 core processors in each) with enterprise level drive raid arrays. I would be using DRBD with a quorum disk on a third machine. I would dedicate two 1GB nics on each server to the DRBD communications. We would have approximately 12 to 14 Virtual Machines running on this pair of servers. The proxmox manual recommends creating two DRBD resources - one for the Virtual Machines that normally run on ServerA and one for the Virtual Machines that normally run on ServerB. This is because of the Primary/Primary state in which this configuration runs. If both servers have VMs talking to the same DRBD resource and a split brain situation occurs, there is potential for data corruption that must be resolved. While I understand it would take more effort to create new virtual machines, can anybody foresee any potential problems with running a separate DRBD resource for each VM instead? Does anyone have experience running a setup that way and has it worked well? It seems to me that would allow more flexibility in moving machines back and forth.

    Read the article

  • Set up a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build (let's call it) a low-cost Ra*san that would host the images for our social site (many millions of them). We keep 5 sizes of every photo, at roughly 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me about 5 TB of disk space for photo storage. For HA I can add a second box and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I'm thinking of going with consumer MLC SSDs; I've read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea?
    - I'm not sure whether to choose RAID 6 or RAID 10 (more IOPS, but higher SSD cost).
    - Is ext4 OK for the filesystem?
    - Would you use 1 or 2 RAID controllers, with an expander backplane?
    If anyone has built something similar I would be happy to get real-world numbers.

    UPDATE: I have bought 12 (plus some spare) OCZ Talos 480 GB SAS SSD drives. They will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If anyone is wondering about benchmarks, let me know what you would like to see.

    Read the article

  • Heartbeat/DRBD failover didn't work as expected. How do I make the failover more robust?

    - by Quinn Murphy
    I had a scenario where a DRBD/Heartbeat setup had a failed node but did not fail over. What happened was that the primary node had locked up but didn't go down completely (it was inaccessible via ssh or the NFS mount, but it could be pinged). The desired behavior would have been to detect this and fail over to the secondary node, but it appears that since the primary didn't go fully down (there is a dedicated network connection from server to server), Heartbeat's detection mechanism didn't pick up on it and therefore didn't fail over. Has anyone seen this? Is there something I need to configure to get more robust cluster failover? DRBD seems to otherwise work fine (I had to resync when I rebooted the old primary), but without good failover its use is limited.

    heartbeat 3.0.4, drbd84, RHEL 6.1. We are not using Pacemaker. nfs03 is the primary server in this setup, and nfs01 is the secondary.

    ha.cf:

        # Hearbeat Logging
        logfacility daemon
        udpport 694
        ucast eth0 192.168.10.47
        ucast eth0 192.168.10.42
        # Cluster members
        node nfs01.openair.com
        node nfs03.openair.com
        # Hearbeat communication timing.
        # Sets the triggers and pulse time for swapping over.
        keepalive 1
        warntime 10
        deadtime 30
        initdead 120
        #fail back automatically
        auto_failback on

    and here is the haresources file:

        nfs03.openair.com IPaddr::192.168.10.50/255.255.255.0/eth0 drbddisk::data Filesystem::/dev/drbd0::/data::ext4 nfs nfslock

    Read the article

  • Turn computer into DAS (Direct Attached Storage)

    - by Damon
    Can we build direct attached storage by taking a computer/server, adding an HBA, and installing some appropriate software? We would use Debian as the host OS for both the DAS and the server. If so, what software do we use? And do we simply need an HBA for the DAS and the server, or do we need more hardware? The goal is to use an older server that does not have enough room for drives but does have ECC memory, server processors, redundant power supplies, dual NICs, etc., then find any boxes, server or not (the key being enough room for 8-12 drives, fans, etc.), and turn them into a DAS; build two of these DASes and have them connected to the server. Eventually we want to have two servers using DRBD and associated services like Heartbeat and Pacemaker to create an HA setup for our server(s), but that will take a long time to configure since I have no experience with anything related to DRBD (yet) and have a learning curve to get past, not to mention the additional cost of more hardware (two servers vs. one).

    Read the article

  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution. A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two; that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and myself sadly, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important, what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I still need to bring my own highly trained monkey to the party, which I am several years from having.

    Read the article

  • DRBD as a block device for XEN VM (Centos 5.3)

    - by SaberTooth
    Hi all, I have set up a DRBD resource between 2 server nodes - everything works correctly when doing sync tests between the two. (I want to create an HA cluster using DRBD, Xen and Heartbeat.) However, when I try to create a Xen VM with CentOS as the guest operating system, I get through to the partitioning screen on the install, but when I select a partitioning type the next screen gives me the following error: "An error has occurred - no valid devices were found on which to create new file systems. Please check your hardware for the cause of this problem." This is the first time attempting to create a setup like this and searching Google does not help much... My config files for DRBD and Xen:

    DRBD (just the section that is pertinent):

        on xennode0 {
            device /dev/drbd0;
            disk /dev/sda5;
            address X.X.X.X:7788;
            flexible-meta-disk internal;
        }
        on xennode1 {
            device /dev/drbd0;
            disk /dev/sda5;
            address X.X.X.X:7788;
            meta-disk internal;
        }

    XEN:

        kernel = "/boot/xeninstall/vmlinuz"
        ramdisk = "/boot/xeninstall/initrd.img"
        extra = "text"
        name = "VM"
        maxmem = 3000
        memory = 3000
        vcpus = 4
        on_poweroff = "destroy"
        on_reboot = "restart"
        on_crash = "restart"
        vfb = [ ]
        disk = [ "phy:/dev/drbd0,sda1,w", "tap:aio:/srv/xen/xenswap.img,sda2,w" ]
        vif = [ "mac=00:16:3e:11:67:ae,bridge=xenbr0" ]
        root = "/dev/sda1 ro"

    Thanks in advance!

    Read the article

  • query keepalived

    - by tdimmig
    *Note: I have trouble deciding what should go on Server Fault and what should go on Super User; if some kindly admin decides this is in the wrong place, please move it for me - many thanks. I am implementing a basic HA system with keepalived. I only want to be notified of a failover in the case of hardware failure. I do, however, have the servers switch roles periodically. I have a track_script running on the backup that will vary its return value between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0, the priority is raised above that of the master; upon returning 1, the priority is lowered again. This way they trade places on the configured interval. The question: what can I do to tell the difference between a switch caused by my script and a switch caused because one of the servers died? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support and I may be able to use it to get some information that could tell me one way or another, but to be honest I've never used SNMP before and I don't know how to get the information I want with it (my Google foo failed me).

    Read the article

  • ImageMagick installation confusion

    - by Codemonkey
    Well, this is turning out to be a pain. I'm on CentOS 6.5. I saw on the PECL ImageMagick changelog that they added a load of constants for new filters, such as LANCZOS2SHARP. Some testing I did earlier suggested that Photoshop 6.5 was able to downsize photos with better clarity than my current ImageMagick's best effort of LANCZOS, so I thought I'd try to upgrade to test out the newer filters. So, first port of call: get the source from PECL for 3.2.0RC1. Installed with no problems. But, ah-ha: although it says it only requires IM 6.2.4, the filters I'm after don't work unless you have 6.6.6+. So I went to http://www.imagemagick.org/script/install-source.php to install the newest version. This is the bit that's puzzled me. It all appears to have installed fine. The tests work fine, passing 40 out of 40. If I run identify -version on the command line it outputs 6.8.9, but if I echo Imagick::getVersion() in PHP it shows 6.5.4, even after restarting php-fpm. rpm -qa | grep ImageMagick shows that I still have 6.5.4 installed; locate ImageMagick also only seems to show the 6.5.4 one. I feel that the missing link here is ImageMagick-devel - do I need to install that too? How do I go about doing that? Or do I just need to reinstall the pecl-imagemagick 3.2.0RC1 now that I have the latest IM installed?

    Read the article

  • trying to use mod_proxy with httpd and tomcat

    - by techsjs2012
    I been trying to use mod_proxy with httpd and tomcat... I have on VirtualBox running Scientific Linux which has httpd and tomcat 6 on it.. I made two nodes of tomcat6. I followed this guide like 10 times and still cant get the 2nd node of tomcat working.. http://www.richardnichols.net/2010/08/5-minute-guide-clustering-apache-tomcat/ Here is the lines from my http.conf file <Proxy balancer://testcluster stickysession=JSESSIONID> BalancerMember ajp://127.0.0.1:8009 min=10 max=100 route=node1 loadfactor=1 BalancerMember ajp://127.0.0.1:8109 min=10 max=100 route=node2 loadfactor=1 </Proxy> ProxyPass /examples balancer://testcluster/examples <Location /balancer-manager> SetHandler balancer-manager AuthType Basic AuthName "Balancer Manager" AuthUserFile "/etc/httpd/conf/.htpasswd" Require valid-user </Location> Now here is my server.xml from node1 <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8005" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. 
Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node1"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> now here is the server.xml file from node2 <?xml version='1.0' encoding='utf-8'?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. 
See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Note: A "Server" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/server.html --> <Server port="8105" shutdown="SHUTDOWN"> <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- Prevent memory leaks due to use of particular java/javax APIs--> <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. 
Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node2"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> </Host> </Engine> </Service> </Server> I dont know what it is. but I been trying for days

    Read the article

  • Is Berkeley DB XML a viable database backend?

    - by w00t
    Apparently, BDB-XML has been around since at least 2003 but I only recently stumbled upon it on Oracle's website: Berkeley DB XML. Here's the blurb: Oracle Berkeley DB XML is an open source, embeddable XML database with XQuery-based access to documents stored in containers and indexed based on their content. Oracle Berkeley DB XML is built on top of Oracle Berkeley DB and inherits its rich features and attributes. Like Oracle Berkeley DB, it runs in process with the application with no need for human administration. Oracle Berkeley DB XML adds a document parser, XML indexer and XQuery engine on top of Oracle Berkeley DB to enable the fastest, most efficient retrieval of data. To me it seems that the underlying ideas are technically sound and probably more mature than the newer document-based DBs like CouchDB or MongoDB. It has support for C, C++, Ruby and Perl, as far as I can determine. It even has HA-capabilities like automatic replication using a master/slave model with automatic election. However, I can't seem to find any projects that use it. Is there something fundamentally wrong with it? Is the license too onerous? Is it too complicated? Why is it not being used?

    Read the article

  • What are the primitive Forth operators?

    - by Barry Brown
    I'm interested in implementing a Forth system, just so I can get some experience building a simple VM and runtime. When starting in Forth, one typically learns about the stack and its operators (DROP, DUP, SWAP, etc.) first, so it's natural to think of these as being among the primitive operators. But they're not. Each of them can be broken down into operators that directly manipulate memory and the stack pointers. Later one learns about store (!) and fetch (@) which can be used to implement DUP, SWAP, and so forth (ha!). So what are the primitive operators? Which ones must be implemented directly in the runtime environment, from which all others can be built? I'm not interested in high performance; I want something that I (and others) can learn from. Operator optimization can come later. (Yes, I'm aware that I can start with a Turing machine and go from there. That's a bit extreme.) Edit: What I'm aiming for is akin to bootstrapping an operating system or a new compiler. What do I need to implement, at minimum, so that I can construct the rest of the system out of those primitive building blocks? I won't implement this on bare hardware; as an educational exercise, I'd write my own minimal VM.
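    To make the idea concrete, here is a small sketch of my own (in Java, purely illustrative; real Forths disagree on the exact primitive set) of how DUP, DROP and SWAP can be built from nothing more than fetch (@), store (!) and stack-pointer access:

        // Hypothetical minimal core: a unified cell memory plus a downward-growing data stack.
        class MiniForthCore {
            private final int[] mem = new int[65536]; // unified memory
            private int sp = 65536;                   // data-stack pointer

            // Candidate primitives: stack-pointer access plus fetch/store.
            int  spFetch()              { return sp; }        // SP@
            void spStore(int v)         { sp = v; }           // SP!
            int  fetch(int addr)        { return mem[addr]; } // @
            void store(int addr, int v) { mem[addr] = v; }    // !
            void push(int v)            { mem[--sp] = v; }    // literal push

            // DUP, DROP and SWAP expressed in terms of the primitives above.
            void dup()  { push(fetch(spFetch())); }
            void drop() { spStore(spFetch() + 1); }
            void swap() {
                int a = fetch(spFetch());
                int b = fetch(spFetch() + 1);
                store(spFetch(),     b);
                store(spFetch() + 1, a);
            }

            public static void main(String[] args) {
                MiniForthCore vm = new MiniForthCore();
                vm.push(1); vm.push(2);
                vm.swap(); vm.dup();
                System.out.println(vm.fetch(vm.spFetch()));     // top of stack: 1
                System.out.println(vm.fetch(vm.spFetch() + 1)); // next: 1
                System.out.println(vm.fetch(vm.spFetch() + 2)); // next: 2
            }
        }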

    Read the article

  • (C#) Get index of current foreach iteration

    - by Graphain
    Hi, is there some rare language construct I haven't encountered (like the few I've learned recently, some on Stack Overflow) in C# to get a value representing the current iteration of a foreach loop? For instance, I currently do something like this depending on the circumstances:

        int i = 0;
        foreach (Object o in collection)
        {
            ...
            i++;
        }

    Answers:
    @bryansh: I am setting the class of an element in a view page based on the position in the list. I guess I could add a method that gets the CSSClass for the objects I am iterating through, but that almost feels like a violation of the interface of that class.
    @Brad Wilson: I really like that - I've often thought about something like that when using the ternary operator but never really given it enough thought. As a bit of food for thought, it would be nice if you could do something similar to somehow add (generically to all IEnumerable objects) a handle on the enumerator to increment the value that an extension method returns, i.e. inject a method into the IEnumerable interface that returns an iteration index. Of course this would be blatant hacks and witchcraft... Cool though...
    @crucible: Awesome, I totally forgot to check the LINQ methods. Hmm, appears to be a terrible library implementation though. I don't see why people are downvoting you. You'd expect the method to either use some sort of HashTable of indices or even another SQL call, not an O(N) iteration... (@Jonathan Holland: yes, you are right, expecting SQL was wrong)
    @Joseph Daigle: The difficulty is that I assume the foreach casting/retrieval is optimised more than my own code would be.
    @Jonathan Holland: Ah, cheers for explaining how it works, and ha at firing someone for using it.

    Read the article

  • How Do I make a simple .htaccess internal redirect Catch All script while forwarding POST data?

    - by RB
    I just want to catch all requests and forward them internally to my catch-all page with all POST data intact. Catch-all page: http://www.mydomain.com/addons/redirect/catch-all.php. I've tried so many combinations, but my server doesn't want to redirect internally if I specify more than catch-all.php:

        # Internally redirect all pages to "Catch" page
        Options +FollowSymLinks
        RewriteEngine on
        RewriteRule (.*) /addons/redirect/catch-all.php [L]

    Also, do I need [L] or is it useless for internal redirects? Then, what PHP code would I use to grab the POST data, use it, and finally PHP-redirect to the originally requested page? Would it be done just as normal by using $_POST['variable_name']; or something different? Then, how would I go about calling the originally requested page, so I can tell PHP to header-location direct them to that page? Thanks!

    UPDATE: Ha sick, nevermind. The condition DOES work. Here's my code:

        # Internally redirect all pages to "Catch" page
        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/robots.txt$
        RewriteCond %{REQUEST_URI} !\.(gif|jpe?g|png|css|js|pdf|doc|xml)$
        RewriteCond %{REQUEST_URI} !^/addons/redirect/catch-all\.php$
        RewriteRule (.*)$ /addons/redirect/catch-all.php?q=$1 [L]

    Thanks guys for the inspiration! Now time to get that PHP to work...

    Read the article

  • StringTokenizer split at "<br/>"

    - by AnAmuser
    Maybe I am stupid, but I don't understand the behaviour of StringTokenizer here:

        import static org.apache.commons.lang.StringEscapeUtils.escapeHtml;

        String object = (String) value;
        String escaped = escapeHtml(object);
        StringTokenizer tokenizer = new StringTokenizer(escaped, escapeHtml("<br/>"));

    If, for example, value is

        Hej<br/>$user.get(0).name Har vundet<br/><table border='1'><tr><th>Name</th><th>Played</th><th>Brewed</th></tr>#foreach( $u in $user )<tr><td>$u.name</td> <td>$u.played</td> <td>$u.brewed</td></tr>#end</table><br/>

    then the result is

        Hej $use . e (0).name Ha vunde a e o de ='1' h Name h h P ayed h h B ewed h #fo each( $u in $use ) d $u.name d d $u.p ayed d d $u. ewed d #end a e

    It makes no sense to me. How can I make it behave as I expect it to?
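    A likely explanation (offered here as a sketch, not taken from the article): StringTokenizer treats its second argument as a set of delimiter characters, not as one delimiter string, so escapeHtml("<br/>") = "&lt;br/&gt;" makes every one of the characters &, l, t, ;, b, r, /, g a separator - which is exactly why letters such as b, r, l, t and g vanish from the output above. Splitting on the whole literal marker instead might look like this:

        import java.util.Arrays;
        import static org.apache.commons.lang.StringEscapeUtils.escapeHtml;

        public class SplitOnBr {
            public static void main(String[] args) {
                String value = "Hej<br/>$user.get(0).name Har vundet<br/>";
                String escaped = escapeHtml(value);

                // String.split takes a regex; quoting makes the escaped "<br/>" a literal pattern.
                String[] parts = escaped.split(java.util.regex.Pattern.quote(escapeHtml("<br/>")));
                System.out.println(Arrays.toString(parts));
            }
        }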

    Read the article

  • XSP2 crashes serving static images

    - by Dawkins
    Hi, requesting a simple HTML page with a jpg image makes XSP2 crash. If I remove the image from the HTML then the page is served OK all the time. The version is XSP2 2.0, Mono 2.6.1; version 2.4.2.2 on the same machine works fine. I have tested it on two different machines, both Windows Vista Business SP1. Has anyone experienced the same? Any clue as to what the problem can be? Below is the stack trace displayed by the console. (The line in Spanish says "an existing connection was forcibly closed by the remote host".) EDIT: since another user is having the same problem I have submitted a bug to Novell and created a little zip to reproduce the problem: https://bugzilla.novell.com/show_bug.cgi?id=582162

        Peer unexpectedly closed the connection on write. Closing our end.
        System.IO.IOException: Write failure ---> System.Net.Sockets.SocketException: Se ha forzado la interrupción de una conexión existente por el host remoto.
          at System.Net.Sockets.Socket.Send (System.Byte[] buf, Int32 offset, Int32 size, SocketFlags flags) [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0
        Peer unexpectedly closed the connection on write. Closing our end.
        System.ObjectDisposedException: The object was used after being disposed.
          at System.Net.Sockets.NetworkStream.CheckDisposed () [0x00000] in <filename unknown>:0
          at System.Net.Sockets.NetworkStream.Write (System.Byte[] buffer, Int32 offset, Int32 size) [0x00000] in <filename unknown>:0
          at Mono.WebServer.XSPWorker.Write (System.Byte[] buffer, Int32 position, Int32 size) [0x00000] in <filename unknown>:0

    Thank you.

    Read the article

  • Accessing items separated by -componentsSeparatedByString

    - by Graeme
    Hi, I have an array gathered by componentsSeparatedByString: that looks like the following when I print it with po in GDB:

        "\n\t\t <b>Suburb, </b> BAIRNSDALE",
        "\n\t\t <b>Address, </b> 15K NW BAIRNSDALE",
        "\n\t\t <b>Reference, </b> MELWOOD/SCHOOL ROAD",
        "\n\t\t <b>Last Changed, </b> 09/04/10 05, 29, 00 PM",
        "\n\t\t <b>Type, </b> HOME",
        "\n\t\t <b>Status, </b> BUILT",
        "\n\t\t <b>Property Size, </b> 2.00 HA.",
        "\n\t\t <b>Residents, </b> 2",
        "\n\t\t <b>First Added Date/Time, </b> 09/04/10 03, 15, 00 PM",
        "\n\t\t\t"

    The only problem is, I now can't figure out where to go from here. I need to be able to access each of these items (i.e. type, status, property size) separately, rather than just calling the entire array (i.e. currentProperty.status). How do I do this? Also, what's with all the \n\t\t\t things - how do I get rid of them? Thanks.

    Read the article

  • FindControl in DataList Edit Mode

    - by Doug
    As a new .NET/C# web beginner, I always get tripped up when I try to use FindControl. Blam - flat on my face. Here is my current FindControl problem: I have an .aspx page and form, then an AJAX UpdatePanel; inside it there is my DataList (DataList1), which has an EditItemTemplate containing the following:

        <EditItemTemplate>
            <asp:Label ID="thumbnailUploadLabel" runat="server" text="Upload a new thumbnail image:"/><br />
            <asp:FileUpload ID="thumbnailImageUpload" runat="server" />
            <asp:Button ID="thunbnailImageUploadButton" runat="server" Text="Upload Now" OnClick="thumbnailUpload"/><br />
        </EditItemTemplate>

    In my C# code-behind I have the OnClick code for the FileUpload object:

        protected void thumbnailUpload(object s, EventArgs e)
        {
            if (thumbnailImageUpload.HasFile)
            {
                // get name of the file & upload
                string imageName = thumbnailImageUpload.FileName;
                thumbnailImageUpload.SaveAs(MapPath("../../images/merch_sm/" + imageName));
                // let 'em know that it worked (or didn't)
                thumbnailUploadLabel.Text = "Image " + imageName + "has been uploaded.";
            }
            else
            {
                thumbnailUploadLabel.Text = "Please choose a thumbnail image to upload.";
            }
        }

    So of course I'm getting "Object reference not set to an instance of an object" for the FileUpload and the Label. What is the correct syntax to find these controls before dealing with them in the OnClick event? The only way I've used FindControl is something like:

        Label thumbnailUploadLabel = DataList1.FindControl("thumbnailUploadLabel") as Label;

    But of course this is throwing the "Object reference not set to an instance of an object" error. Any help is very much appreciated. (I've also seen the 'recursive' code out there that is supposed to make using FindControl easier. Ha! I'm so green at C# that I don't even know how to incorporate those into my project.) Thanks to all for taking a look at this.

    Read the article

  • Removing specific XML tags

    - by iTayb
    I'd like to make an application that removes duplicates from my wpl (Windows Playlist) files. The wpl structure is something like this:

        <?wpl version="1.0"?>
        <smil>
            <head>
                <meta name="Generator" content="Microsoft Windows Media Player -- 11.0.5721.5145"/>
                <meta name="AverageRating" content="55"/>
                <meta name="TotalDuration" content="229844"/>
                <meta name="ItemCount" content="978"/>
                <author/>
                <title>english</title>
            </head>
            <body>
                <seq>
                    <media src="D:\Anime con 2006\Shits\30 Seconds to Mars - 30 Seconds to Mars\30 Seconds to Mars - Capricorn.mp3" tid="{BCC6E6B9-D0F3-449C-91A9-C6EEBD92FFAE}" cid="{D38701EF-1764-4331-A332-50B5CA690104}"/>
                    <media src="E:\emule.incoming\Ke$ha - Coming Unglued.mp3" tid="{E2DB18E5-0449-4FE3-BA09-9DDE18B523B1}"/>
                    <media src="E:\emule.incoming\Lady Gaga - Bad Romance.mp3" tid="{274BD12B-5E79-4165-9314-00DB045D4CD8}"/>
                    <media src="E:\emule.incoming\David Guetta -Sexy Bitch Feat. Akon.mp3" tid="{46DA1363-3DFB-4030-A7A9-88E13DF30677}"/>
                </seq>
            </body>
        </smil>

    This looks like a standard XML file. How can I load the file and get the src value of each media tag? How can I remove a specific media element, in the case of duplicates? Thank you very much.
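    The question doesn't name a language; as a purely illustrative sketch (assuming a JVM and a file called playlist.wpl, both assumptions of mine), the standard DOM API can read the src attributes and drop duplicate media elements like this:

        import java.io.File;
        import java.util.HashSet;
        import java.util.Set;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class WplDeduper {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(new File("playlist.wpl"));
                NodeList media = doc.getElementsByTagName("media");
                Set<String> seen = new HashSet<String>();
                // Walk backwards so removals don't shift the live NodeList under us.
                for (int i = media.getLength() - 1; i >= 0; i--) {
                    Element m = (Element) media.item(i);
                    String src = m.getAttribute("src");
                    if (!seen.add(src)) {            // duplicate src -> drop the element
                        m.getParentNode().removeChild(m);
                    }
                }
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.transform(new DOMSource(doc), new StreamResult(new File("playlist.dedup.wpl")));
            }
        }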

    Read the article

  • NoClassDefFoundError: HttpClient 4 (APACHE)

    - by Pujan Srivastava
    Hello all, I am using Apache HttpClient (HttpComponents). I have added both httpcore-4.0.1.jar and httpclient-4.0.1.jar to the classpath in NetBeans. I am getting the error: java.lang.NoClassDefFoundError: org/apache/http/impl/client/DefaultHttpClient. My code is as follows. Please help.

        import org.apache.http.client.HttpClient;
        import org.apache.http.client.ResponseHandler;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.BasicResponseHandler;
        import org.apache.http.impl.client.DefaultHttpClient;

        public class HttpClientManager {

            public HttpClient httpclient;

            public HttpClientManager() {
                this.init();
            }

            public void init() {
                try {
                    httpclient = new DefaultHttpClient();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public void getCourseList() {
                String url = "http://exnet.in.th/api.php?username=demoinst&ha=2b62560&type=instructor";
                HttpGet httpget = new HttpGet(url);
                ResponseHandler<String> responseHandler = new BasicResponseHandler();
                try {
                    String responseBody = httpclient.execute(httpget, responseHandler);
                    System.out.println(responseBody);
                } catch (Exception e) {
                }
            }
        }

    Read the article

  • JBoss 6 Cluster Singleton Clustered

    - by DanC
    I am trying to set up JBoss 6 in a clustered environment and use it to host clustered stateful singleton EJBs. So far we successfully installed a singleton EJB within the cluster, where different entry points to our application (through a website deployed on each node) point to a single environment on which the EJB is hosted (thus maintaining the state of static variables). We achieved this using the following configuration:

    Bean interface:

        @Remote
        public interface IUniverse {
            ...
        }

    Bean implementation:

        @Clustered
        @Stateful
        public class Universe implements IUniverse {
            private static Vector<String> messages = new Vector<String>();
            ...
        }

    jboss-beans.xml configuration:

        <deployment xmlns="urn:jboss:bean-deployer:2.0">
            <!-- This bean is an example of a clustered singleton -->
            <bean name="Universe" class="Universe">
            </bean>
            <bean name="UniverseController" class="org.jboss.ha.singleton.HASingletonController">
                <property name="HAPartition"><inject bean="HAPartition"/></property>
                <property name="target"><inject bean="Universe"/></property>
                <property name="targetStartMethod">startSingleton</property>
                <property name="targetStopMethod">stopSingleton</property>
            </bean>
        </deployment>

    The main problem with this implementation is that, after the master node (the one that holds the state of the singleton EJB) shuts down gracefully, the singleton's state is lost and reset to default. Please note that everything was constructed following the JBoss 5 clustering documents, as no JBoss 6 documents were found on this subject. Any information on how to solve this problem, or on where to find JBoss 6 documentation on clustering, is appreciated.

    Read the article

  • Java website protection solutions (especially XSS)

    - by Mark
    I'm developing a web application and facing some security problems. In my app users can send messages and see others' (a bulletin-board-like app). I'm validating all the form fields that users can send to my app. There are some very easy fields, like "nick name", which can be 6-10 alphabetical characters, or message sending time, which is sent to the users as a string and then (when users ask for messages that are "younger" or "older" than a date) parsed with SimpleDateFormat (I'm developing in Java, but my question is not related only to Java). The big problem is the message field. I can't restrict it to only alphabetical characters (upper or lower case), because I have to deal with some often-used characters like ", ', /, {, } etc. (users would not be satisfied if the system didn't allow them to use these). According to http://ha.ckers.org/xss.html, there are a lot of ways people can "hack" my site. But I'm wondering, is there any way I can prevent that? Not all of it, because there is no 100% protection, but I'd like a solution that can protect my site. I'm using servlets on the server side, and jQuery on the client side. My app is "full" AJAX, so users open one JSP and then all the data is downloaded and rendered by jQuery using JSON. (Yeah, I know it's not "users-without-javascript" friendly, but it's 2010, right? :-) ) I know front-end validation is not enough. I'd like to use three-layer validation:
    1. Front end: JavaScript validates the data, then sends it to the server.
    2. Server side: the same validation; if there is anything that shouldn't be there (because of the client-side JavaScript), I BAN the user.
    3. If there is anything that I wasn't able to catch earlier, the rendering process handles and renders it appropriately.
    Is there any "out of the box" solution, especially for Java? Or another solution that I can use?
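    One common baseline (a sketch of my own, not something from the article) is to store the message as plain data and HTML-encode it at render time, e.g. with Commons Lang on the servlet side before it goes into the JSON response:

        import static org.apache.commons.lang.StringEscapeUtils.escapeHtml;

        public final class MessageEncoder {

            // Encode user-supplied text for an HTML context so markup is displayed, not executed.
            public static String toSafeHtml(String userMessage) {
                if (userMessage == null) {
                    return "";
                }
                return escapeHtml(userMessage);   // "<script>" becomes "&lt;script&gt;"
            }

            public static void main(String[] args) {
                System.out.println(toSafeHtml("I <3 this board <script>alert('xss')</script>"));
            }
        }

    The same principle applies on the jQuery side: insert user content into the page with text() rather than html(), so the browser never interprets it as markup.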

    Read the article

  • Database schema for Product Properties

    - by Chemosh
    Like so many people, I'm looking for a products / product properties database schema. I'm using Ruby on Rails and (Thinking) Sphinx for faceted searches.

    Requirements:
    - Adding new product types and their options should not require a change to the database schema.
    - Faceted searches using Sphinx must be supported.

    Solutions I've come across (see Bill Karwin's answer):
    Option 1: Single Table Inheritance. Not an option really; the table would contain way too many columns.
    Option 2: Class Table Inheritance. Ruby on Rails caches the database schema on start-up, which means a restart whenever a new type of product is introduced. If you have a sizeable product catalog this could mean hundreds of tables.
    Option 3: Serialized LOB. Kills being able to do faceted searches without heavy application logic.
    Option 4: Entity-Attribute-Value. For testing purposes, EAV worked fine. However, it could quickly become a mess and a maintenance hell as you add more and more options (e.g. when an option increases the price or delivery time).

    What option should I go with? What other solutions are out there? Is there a silver bullet (ha) I overlooked?

    Read the article
