Search Results

Search found 5583 results on 224 pages for 'global nomad'.


  • Remote site AD design (2003)

    - by Boy Mars
    A remote site has about 25 of our 50-ish employees. They presently have their own AD domain (2003), but I want to look at getting them onto the same global domain for ease of access/administration. The remote site has a VPN link, but line speeds are very poor. I am already aware of tools like ADMT and have done a few migrations in the past (NT/2003 domains), but this is the first time I have the luxury of designing how the domain is organised. So I'm looking for tips on good AD design: would a remote site be better served as a sub-domain? Would this reduce traffic? I am currently only looking at 2003, since only existing machines will be used.

    Read the article

  • Cisco ASA - Enable communication between same security level

    - by Conor
    I have recently inherited a network with a Cisco ASA (running version 8.2). I am trying to configure it to allow communication between two interfaces configured with the same security level (DMZ-DMZ) "same-security-traffic permit inter-interface" has been set, but hosts are unable to communicate between the interfaces. I am assuming that some NAT settings are causing my issue. Below is my running config: ASA Version 8.2(3) ! hostname asa enable password XXXXXXXX encrypted passwd XXXXXXXX encrypted names ! interface Ethernet0/0 switchport access vlan 400 ! interface Ethernet0/1 switchport access vlan 400 ! interface Ethernet0/2 switchport access vlan 420 ! interface Ethernet0/3 switchport access vlan 420 ! interface Ethernet0/4 switchport access vlan 450 ! interface Ethernet0/5 switchport access vlan 450 ! interface Ethernet0/6 switchport access vlan 500 ! interface Ethernet0/7 switchport access vlan 500 ! interface Vlan400 nameif outside security-level 0 ip address XX.XX.XX.10 255.255.255.248 ! interface Vlan420 nameif public security-level 20 ip address 192.168.20.1 255.255.255.0 ! interface Vlan450 nameif dmz security-level 50 ip address 192.168.10.1 255.255.255.0 ! interface Vlan500 nameif inside security-level 100 ip address 192.168.0.1 255.255.255.0 ! ftp mode passive clock timezone JST 9 same-security-traffic permit inter-interface same-security-traffic permit intra-interface object-group network DM_INLINE_NETWORK_1 network-object host XX.XX.XX.11 network-object host XX.XX.XX.13 object-group service ssh_2220 tcp port-object eq 2220 object-group service ssh_2251 tcp port-object eq 2251 object-group service ssh_2229 tcp port-object eq 2229 object-group service ssh_2210 tcp port-object eq 2210 object-group service DM_INLINE_TCP_1 tcp group-object ssh_2210 group-object ssh_2220 object-group service zabbix tcp port-object range 10050 10051 object-group service DM_INLINE_TCP_2 tcp port-object eq www group-object zabbix object-group protocol TCPUDP protocol-object udp protocol-object tcp object-group service http_8029 tcp port-object eq 8029 object-group network DM_INLINE_NETWORK_2 network-object host 192.168.20.10 network-object host 192.168.20.30 network-object host 192.168.20.60 object-group service imaps_993 tcp description Secure IMAP port-object eq 993 object-group service public_wifi_group description Service allowed on the Public Wifi Group. Allows Web and Email. 
service-object tcp-udp eq domain service-object tcp-udp eq www service-object tcp eq https service-object tcp-udp eq 993 service-object tcp eq imap4 service-object tcp eq 587 service-object tcp eq pop3 service-object tcp eq smtp access-list outside_access_in remark http traffic from outside access-list outside_access_in extended permit tcp any object-group DM_INLINE_NETWORK_1 eq www access-list outside_access_in remark ssh from outside to web1 access-list outside_access_in extended permit tcp any host XX.XX.XX.11 object-group ssh_2251 access-list outside_access_in remark ssh from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group ssh_2229 access-list outside_access_in remark http from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group http_8029 access-list outside_access_in remark ssh from outside to internal hosts access-list outside_access_in extended permit tcp any host XX.XX.XX.13 object-group DM_INLINE_TCP_1 access-list outside_access_in remark dns service to internal host access-list outside_access_in extended permit object-group TCPUDP any host XX.XX.XX.13 eq domain access-list dmz_access_in extended permit ip 192.168.10.0 255.255.255.0 any access-list dmz_access_in extended permit tcp any host 192.168.10.29 object-group DM_INLINE_TCP_2 access-list public_access_in remark Web access to DMZ websites access-list public_access_in extended permit object-group TCPUDP any object-group DM_INLINE_NETWORK_2 eq www access-list public_access_in remark General web access. (HTTP, DNS & ICMP and Email) access-list public_access_in extended permit object-group public_wifi_group any any pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu public 1500 mtu dmz 1500 mtu inside 1500 no failover icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 60 global (outside) 1 interface global (dmz) 2 interface nat (public) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 2229 192.168.0.29 2229 netmask 255.255.255.255 static (inside,outside) tcp interface 8029 192.168.0.29 www netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.13 192.168.10.10 netmask 255.255.255.255 dns static (dmz,outside) XX.XX.XX.11 192.168.10.30 netmask 255.255.255.255 dns static (dmz,inside) 192.168.0.29 192.168.10.29 netmask 255.255.255.255 static (dmz,public) 192.168.20.30 192.168.10.30 netmask 255.255.255.255 dns static (dmz,public) 192.168.20.10 192.168.10.10 netmask 255.255.255.255 dns static (inside,dmz) 192.168.10.0 192.168.0.0 netmask 255.255.255.0 dns access-group outside_access_in in interface outside access-group public_access_in in interface public access-group dmz_access_in in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.9 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 192.168.0.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 
telnet timeout 5 ssh 192.168.0.0 255.255.255.0 inside ssh timeout 20 console timeout 0 dhcpd dns 61.122.112.97 61.122.112.1 dhcpd auto_config outside ! dhcpd address 192.168.20.200-192.168.20.254 public dhcpd enable public ! dhcpd address 192.168.0.200-192.168.0.254 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics host threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 130.54.208.201 source public webvpn ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect ip-options inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp !
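
    A hedged sketch (not a confirmed fix) of what ASA 8.2 typically needs for traffic between two interfaces at the same security level when dynamic NAT is configured: besides same-security-traffic permit inter-interface, the flow must match a NAT rule between the two interfaces, and NAT exemption is the usual shortcut. Interface names and subnets below are placeholders, not taken literally from the config above:
      ! assume dmz1 and dmz2 are the two interfaces sharing security-level 50
      same-security-traffic permit inter-interface
      access-list dmz1_nonat extended permit ip 192.168.10.0 255.255.255.0 192.168.20.0 255.255.255.0
      nat (dmz1) 0 access-list dmz1_nonat
      access-list dmz2_nonat extended permit ip 192.168.20.0 255.255.255.0 192.168.10.0 255.255.255.0
      nat (dmz2) 0 access-list dmz2_nonat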

    Read the article

  • How to make a folder (D:\xyz) accessible to only me in Windows-XP?

    - by claws
    Hello, I'm using Windows XP on my lab computer. There is a global folder (d:\xyz). This is my folder and I want it to be accessible only to me. Ideally it should be invisible; even if it is visible, others shouldn't be able to open it. For now my account has administrative privileges; after a few days, I don't know whether the admin will let me keep them. I heard that our XP machines will soon be upgraded to either Vista or Windows 7. Will the method of making the folder inaccessible change on those Windows versions? How do I accomplish this?
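
    For what it's worth, on an NTFS volume this is normally done with NTFS permissions rather than hiding; a rough sketch with the built-in cacls tool on XP (MYLAB\claws is a placeholder account, and an administrator can always take ownership back, so this is privacy from other users, not from the admin):
      rem replace the ACL so only one account (plus SYSTEM) has access; /T recurses into subfolders
      cacls D:\xyz /T /G MYLAB\claws:F SYSTEM:F
      rem optionally hide the folder from casual browsing
      attrib +h D:\xyz
    The same idea carries over to Vista and Windows 7 (where icacls replaces cacls), since it is an NTFS feature rather than an XP one.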

    Read the article

  • 3rd monitor with Dell Precision 490

    - by Animesh Kumar
    I am trying to go for a 3-monitor setup on my workstation, but I need some information before I can invest in a new graphics card and two monitors. I'd be glad to get some help here. I have a Dell Precision 490 workstation with an Nvidia Quadro FX 4500 graphics card (PCI-E x16) in it. It has 2 DVI-D ports that can support 2 monitors at up to 2560x1600 resolution. For the third monitor, I was looking at the Nvidia NVS 295 card; however, I am not sure whether it is physically possible to hook another card to the motherboard. Here is the spec of the Precision 490: http://www.dell.com/downloads/global/products/precn/en/spec_precn_490_en.pdf Can I attach an NVS 295 to it? If not, what options do I have?

    Read the article

  • Wordpress add_meta_box() weirdness

    - by Scott B
    The code below is working nearly flawlessly, however my value for page title on one of my pages keeps coming up empty after a few page refreshes... It sticks for awhile, then it appears to reset to empty. I'm thinking I must have a conflict in the code below, but I can't quite figure it. I'm allowing the user to set a custom page title for posts as well as pages via a custom "post/page title input field). Can anyone see an obvious issue here that might be resetting the page title to blank? // =================== // = POST OPTION BOX = // =================== add_action('admin_menu', 'my_post_options_box'); function my_post_options_box() { if ( function_exists('add_meta_box') ) { //add_meta_box( $id, $title, $callback, $page, $context, $priority ); add_meta_box('post_header', 'Custom Post Header Code (optional)', 'custom_post_images', 'post', 'normal', 'low'); add_meta_box('post_title', 'Custom Post Title', 'custom_post_title', 'post', 'normal', 'high'); add_meta_box('post_title_page', 'Custom Post Title', 'custom_post_title', 'page', 'normal', 'high'); add_meta_box('postexcerpt', __('Excerpt'), 'post_excerpt_meta_box', 'page', 'normal', 'core'); add_meta_box('categorydiv', __('Page Options'), 'post_categories_meta_box', 'page', 'side', 'core'); } } //Adds the custom images box function custom_post_images() { global $post; ?> <div class="inside"> <textarea style="height:70px; width:100%;margin-left:-5px;" name="customHeader" id="customHeader"><?php echo get_post_meta($post->ID, 'customHeader', true); ?></textarea> <p>Enter your custom html code here for the post page header/image area. Whatever you enter here will override the default post header or image listing <b>for this post only</b>. You can enter image references like so &lt;img src='wp-content/uploads/product1.jpg' /&gt;. To show default images, just leave this field empty</p> </div> <?php } //Adds the custom post title box function custom_post_title() { global $post; ?> <div class="inside"> <p><input style="height:25px;width:100%;margin-left:-10px;" type="text" name="customTitle" id="customTitle" value="<?php echo get_post_meta($post->ID, 'customTitle', true); ?>"></p> <p>Enter your custom post/page title here and it will be used for the html &lt;title&gt; for this post page and the Google link text used for this page.</p> </div> <?php } add_action('save_post', 'custom_add_save'); function custom_add_save($postID){ // called after a post or page is saved if($parent_id = wp_is_post_revision($postID)) { $postID = $parent_id; } if ($_POST['customHeader']) { update_custom_meta($postID, $_POST['customHeader'], 'customHeader'); } else { update_custom_meta($postID, '', 'customHeader'); } if ($_POST['customTitle']) { update_custom_meta($postID, $_POST['customTitle'], 'customTitle'); } else { update_custom_meta($postID, '', 'customTitle'); } } function update_custom_meta($postID, $newvalue, $field_name) { // To create new meta if(!get_post_meta($postID, $field_name)){ add_post_meta($postID, $field_name, $newvalue); }else{ // or to update existing meta update_post_meta($postID, $field_name, $newvalue); } } ?>

    Read the article

  • How to make security group in one forest show up in another forest?

    - by Jake
    I have two Win2k8 forests that I do maintenance on. The two forests have a full two-way external, non-transitive trust with each other. I have a folder in forest X, domain countryX.mycompany.com, accessible ONLY by the global security group named $group. In forest Y, domain countryY.mycompany.com, countryY\user1, countryY\user2 etc. need to have access to the folder. The natural instinct is to put user1, user2 etc. into $group. However, none of the methods for adding a user to a group work, as it appears that AD cannot find the groups in the other forest. Questions: 1. How do I make the forests see each other's security groups so members can be added? 2. In practice, what is the recommended way to give users access to folders/files in another forest?
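
    As a sketch of the usual pattern with an external trust: a global group can only contain accounts from its own domain, so the resource side normally grants the folder to a domain local group, which is allowed to take members from a trusted domain. The group name and DN below are placeholders:
      rem on a DC in countryX.mycompany.com: create a DOMAIN LOCAL security group and grant it NTFS access to the folder
      dsadd group "CN=FolderX-Access,OU=Groups,DC=countryX,DC=mycompany,DC=com" -secgrp yes -scope l
      rem members from countryY are then added through the ADUC object picker, with Location pointed at countryY.mycompany.com
    Whether countryY users can be resolved at all also depends on the trust being healthy in that direction (nltest /trusted_domains is a quick check).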

    Read the article

  • Linux policy routing - packets not coming back

    - by Bugsik
    i am trying to set up policy routing on my home server. My network looks like this: Host routed VPN gateway Internet link through VPN 192.168.0.35/24 ---> 192.168.0.5/24 ---> 192.168.0.1 DSL router 10.200.2.235/22 .... .... 10.200.0.1 VPN server The traffic from 192.168.0.32/27 should be and is routed through VPN. I wanted to define some routing policies to route some traffic from 192.168.0.5 through VPN as well - for start - from user with uid 2000. Policy routing is done using iptables mark target and ip rule fwmark. The problem: When connecting using user 2000 from 192.168.0.5 tcpdump shows outgoing packets, but nothing comes back. Traffic from 192.168.0.35 works fine (here I am not using fwmark but src policy). Here is my VPN gateway setup: # uname -a Linux placebo 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:49:02 UTC 2012 i686 i686 i386 GNU/Linux # iptables -V iptables v1.4.12 # ip -V ip utility, iproute2-ss111117 IPtables rules (all policies in table filter are ACCEPT) # iptables -t mangle -nvL Chain PREROUTING (policy ACCEPT 770K packets, 314M bytes) pkts bytes target prot opt in out source destination Chain INPUT (policy ACCEPT 767K packets, 312M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 5520 packets, 1920K bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 782K packets, 901M bytes) pkts bytes target prot opt in out source destination 74 4707 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 owner UID match 2000 MARK set 0x3 Chain POSTROUTING (policy ACCEPT 788K packets, 903M bytes) pkts bytes target prot opt in out source destination # iptables -t nat -nvL Chain PREROUTING (policy ACCEPT 996 packets, 51172 bytes) pkts bytes target prot opt in out source destination Chain INPUT (policy ACCEPT 7 packets, 432 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 1364 packets, 112K bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 2302 packets, 160K bytes) pkts bytes target prot opt in out source destination 119 7588 MASQUERADE all -- * vpn 0.0.0.0/0 0.0.0.0/0 Routing: # ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lan state UNKNOWN qlen 1000 link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff valid_lft forever preferred_lft forever 3: lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:40:63:f9:c3:8f brd ff:ff:ff:ff:ff:ff inet 192.168.0.5/24 brd 192.168.0.255 scope global lan inet6 fe80::240:63ff:fef9:c38f/64 scope link valid_lft forever preferred_lft forever 4: vpn: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100 link/none inet 10.200.2.235/22 brd 10.200.3.255 scope global vpn # ip rule show 0: from all lookup local 32764: from all fwmark 0x3 lookup VPN 32765: from 192.168.0.32/27 lookup VPN 32766: from all lookup main 32767: from all lookup default # ip route show table VPN default via 10.200.0.1 dev vpn 10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235 192.168.0.0/24 dev lan proto kernel scope link src 192.168.0.5 # ip route show default via 192.168.0.1 dev lan metric 100 10.200.0.0/22 dev vpn proto kernel scope link src 10.200.2.235 192.168.0.0/24 dev lan proto kernel scope link 
src 192.168.0.5 TCP dump showing no traffic coming back when connection is made from 192.168.0.5 user 2000 # tcpdump -i vpn tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on vpn, link-type RAW (Raw IP), capture size 65535 bytes ### Traffic from user 2000 on 192.168.0.5 ### 10:19:05.629985 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6887764 ecr 0,nop,wscale 4], length 0 10:19:21.678001 IP 10.200.2.235.37291 > 10.100-78-194.akamai.com.http: Flags [S], seq 2868799562, win 14600, options [mss 1460,sackOK,TS val 6891776 ecr 0,nop,wscale 4], length 0 ### Traffic from 192.168.0.35 ### 10:23:12.066174 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [S], seq 2294159276, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 557451322 ecr 0,sackOK,eol], length 0 10:23:12.265640 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [S.], seq 2521908813, ack 2294159277, win 14480, options [mss 1367,sackOK,TS val 388565772 ecr 557451322,nop,wscale 1], length 0 10:23:12.276573 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [.], ack 1, win 8214, options [nop,nop,TS val 557451534 ecr 388565772], length 0 10:23:12.293030 IP 10.200.2.235.49247 > 10.100-78-194.akamai.com.http: Flags [P.], seq 1:480, ack 1, win 8214, options [nop,nop,TS val 557451552 ecr 388565772], length 479 10:23:12.574773 IP 10.100-78-194.akamai.com.http > 10.200.2.235.49247: Flags [.], ack 480, win 7776, options [nop,nop,TS val 388566081 ecr 557451552], length 0
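
    Nothing obviously wrong jumps out of the rules themselves, so here is a small diagnostic sketch rather than a fix (it assumes conntrack-tools is installed and an iproute2 that supports the mark option on route get); the usual suspects for "marked traffic goes out but never comes back" are reverse-path filtering and the far side dropping the NATed flows:
      # confirm which route a marked packet would take
      ip route get 194.78.100.10 mark 0x3
      # inspect the NAT/conntrack entry for the marked connection
      conntrack -L | grep 10.200.2.235
      # rp_filter drops happen after tcpdump sees the frame, so if nothing at all arrives on vpn the drop is
      # further upstream; still, relaxing reverse-path filtering is a cheap thing to rule out
      sysctl -w net.ipv4.conf.all.rp_filter=0 net.ipv4.conf.vpn.rp_filter=0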

    Read the article

  • Keep context-configuration when redeploying via Cargo

    - by Björn Pollex
    I am using Tomcat 7 to host a web-application that requires a JNDI datasource to be set up. Because this resource is specific to this application, I would like to configure it inside the application-specific context-descriptor in $CATALINA_BASE/conf/[enginename]/[hostname]/. I am also using Cargo from Maven to deploy the web-application to Tomcat. The problem is that when I do a redeploy with Cargo, it first undeploys the application, before deploying it again. When undeploying it, Tomcat deletes the context-descriptor of the application, so it won't work after redeploying. I could of course package the context-descriptor with the application, but I would like to keep any such container-specifics out of the .war. Another alternative is to configure the datasource in the global context-descriptor, but that too seems wrong, because the datasource is supposed to be exclusive to my application. Is my approach fundamentally wrong? What is the best practice here? Is there any way to prevent Tomcat from deleting the descriptor when undeploying?

    Read the article

  • Nginx + Haproxy + Thin + Rails - 503 Service Unavailable -

    - by Luca G. Soave
    I don't know how troubleshoot this. I get "503 Service Unavailable" http error for all "nginx upstreams" proxy passing calls to haproxy fast_thin and slow_thin ( server 127.0.0.1:3100 and server 127.0.0.1:3200 ), which loadbalance on 6 Thin servers ( 127.0.0.1:3000 .. 3005 ). Static files like /blog are currently fine. The falldown is: nginx on port 80 - haproxy on 3100 and 3200 - thin on 3000 .. 3005 and then Rails. Here it is /etc/nginx/nginx.conf : user nginx; worker_processes 2; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; sendfile on; tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; include /etc/nginx/conf.d/*.conf; } then /etc/nginx/conf.d/default.conf upstream fast_thin { server 127.0.0.1:3100; } upstream slow_thin { server 127.0.0.1:3200; } server { listen 80; server_name www.gitwatcher.com; rewrite ^/(.*) http://gitwatcher.com/$1 permanent; } server { listen 80; server_name gitwatcher.com; access_log /var/www/gitwatcher/log/access.log; error_log /var/www/gitwatcher/log/error.log; root /var/www/gitwatcher/public; # index index.html; location /about { proxy_pass http://fast_thin; break; } location /trends { proxy_pass http://slow_thin; break; } location /categories { proxy_pass http://slow_thin; break; } location /signout { proxy_pass http://slow_thin; break; } location /auth/github { proxy_pass http://slow_thin; break; } location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; } if (-f $request_filename.html) { rewrite (.*) $1.html break; } if (!-f $request_filename) { proxy_pass http://slow_thin; break; } } } then haproxy config file /etc/haproxy/haproxy.cfg : global log 127.0.0.1 local0 log 127.0.0.1 local1 notice #log loghost local0 info maxconn 4096 #chroot /usr/share/haproxy user haproxy group haproxy daemon #debug #quiet nbproc 1 # number of processing cores defaults log global retries 3 maxconn 2000 contimeout 5000 mode http clitimeout 60000 # maximum inactivity time on the client side srvtimeout 30000 # maximum inactivity time on the server side timeout connect 4000 # maximum time to wait for a connection attempt to a server to succeed option httplog option dontlognull option redispatch option httpclose # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode) option abortonclose # enable early dropping of aborted requests from pending queue option httpchk # enable HTTP protocol to check on servers health option forwardfor # enable insert of X-Forwarded-For headers balance roundrobin # each server is used in turns, according to assigned weight stats enable # enable web-stats at /haproxy?stats stats auth haproxy:pr0xystats # force HTTP Auth to view stats stats refresh 5s # refresh rate of stats page listen rails_proxy 127.0.0.1:3100 # - equal weights on all servers # - maxconn will queue requests at HAProxy if limit is reached # - minconn dynamically scales the connection concurrency (bound my maxconn) depending on size of HAProxy queue # - check health every 20000 microseconds server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000 server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 
20000 server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000 listen slow_proxy 127.0.0.1:3200 # cluster for slow requests, lower the queues, check less frequently server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000 server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000 server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000 and the Thin config file /etc/thin/gitwatcher.yml : --- chdir: /var/www/gitwatcher environment: production address: 0.0.0.0 port: 3000 timeout: 30 log: log/thin.log pid: tmp/pids/thin.pid max_conns: 1024 max_persistent_conns: 100 require: [] wait: 30 servers: 6 daemonize: true if I look into open listen ports, I got the following : root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin" nginx 834 root 8u IPv4 921 0t0 TCP *:http (LISTEN) nginx 835 nginx 8u IPv4 921 0t0 TCP *:http (LISTEN) nginx 837 nginx 8u IPv4 921 0t0 TCP *:http (LISTEN) haproxy 1908 haproxy 4u IPv4 11699 0t0 TCP localhost:3100 (LISTEN) haproxy 1908 haproxy 6u IPv4 11701 0t0 TCP localhost:3200 (LISTEN) root@fullness:/var/www/gitwatcher# iptables -L get me the following : Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp dpt:22222 ACCEPT tcp -- anywhere anywhere tcp dpt:http ACCEPT tcp -- anywhere anywhere tcp dpt:https ACCEPT all -- anywhere anywhere DROP all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Any help ?
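
    One hedged observation: haproxy answers 503 when every server in a listen block is down or failing its httpchk, and the lsof output above shows nginx and haproxy listening but no thin processes on 3000-3005. A quick checklist along those lines:
      # are the thin instances actually running and listening?
      thin start -C /etc/thin/gitwatcher.yml        # or the thin init script, if one is installed
      lsof -iTCP -sTCP:LISTEN -P -n | grep -E '300[0-5]'
      # does a backend answer on its own?
      curl -I http://127.0.0.1:3000/
      # what does haproxy think of its backends? (stats are already enabled in haproxy.cfg)
      curl -u haproxy:pr0xystats 'http://127.0.0.1:3100/haproxy?stats'
    Separately, the rails_proxy block defines all three servers with the same name web1, which is worth cleaning up even if it is not the cause.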

    Read the article

  • Creating Multiple Users on Single PHP-FPM Pool

    - by Vince Kronlein
    I have PHP-FPM/FastCGI up and running on my cPanel/WHM server, but I'd like to have it allow for multiple users off of a single pool. Getting all vhosts to run off a single pool is simple by adding this to the Apache include editor under Global Post Vhost:
      <IfModule mod_fastcgi.c>
        FastCGIExternalServer /usr/local/sbin/php-fpm -host 127.0.0.1:9000
        AddHandler php-fastcgi .php
        Action php-fastcgi /usr/local/sbin/php-fpm.fcgi
        ScriptAlias /usr/local/spin/php-fpm.fcgi /usr/local/sbin/php-fpm
        <Directory /usr/local/sbin>
          Options ExecCGI FollowSymLinks
          SetHandler fastcgi-script
          Order allow,deny
          Allow from all
        </Directory>
      </IfModule>
    But I'd like to find a way to run PHP under each user while sharing the pool. I manage and control all the domains that run under the pool, so I'm not concerned about per-account file security; I just need to make sure all scripts can be executed by the user who owns the files, instead of needing to change file permissions for each account or having to create tons of vhost include files.
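
    As far as I know a single FPM pool always runs as exactly one user, so "PHP runs as the file owner" effectively means one small pool per user; a minimal sketch of an extra pool follows (the file location, port and the ondemand manager, which needs PHP-FPM 5.3.9+, are assumptions rather than cPanel-specific advice):
      ; e.g. /usr/local/etc/php-fpm.d/account2.conf
      [account2]
      user = account2
      group = account2
      listen = 127.0.0.1:9001
      pm = ondemand
      pm.max_children = 5
      pm.process_idle_timeout = 10s
    Each vhost then gets its own FastCGIExternalServer line pointing at its pool's port; with ondemand the per-pool overhead stays close to zero while idle.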

    Read the article

  • Virtual fiber channel HBA in Solaris

    - by Phil
    We are trying to set up some virtual Fibre Channel HBAs in Solaris. This seems to be possible with NPIV. Creating NPIVs in a Solaris global zone works fine, but passing an NPIV to a zone didn't work at all. We tried to pass the NPIV as follows:
      # zonecfg -z zone1 'info'
      zonename: studentz1
      [...]
      device:
          match: /devices/pci@0,0/pci8086,25f9@6/pci8086,350c@0,3/pci1077,140@4/fp@1,0:devctl
          allow-partition not specified
          allow-raw-io not specified
    What we want to do is set up an environment for SAN exercises. We don't have a physical host per student, so we are trying to virtualise that in some way (Solaris zones or VMware). It should be possible to display the WWN of the virtual HBA and mount the storage presented by the disk subsystem. Any ideas on passing the NPIV to a Solaris zone, or on virtualising this with VMware?
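
    For reference, the zonecfg syntax for delegating that device into the zone would look roughly like the sketch below (the device path is copied from the output above). Whether a non-global zone can actually drive an NPIV port is a separate question, since zones share the global zone's FC stack, which is why many people fall back to presenting the LUNs in the global zone or using VMware with RDM/NPIV for this kind of exercise.
      # zonecfg -z zone1
      zonecfg:zone1> add device
      zonecfg:zone1:device> set match=/devices/pci@0,0/pci8086,25f9@6/pci8086,350c@0,3/pci1077,140@4/fp@1,0:devctl
      zonecfg:zone1:device> set allow-raw-io=true
      zonecfg:zone1:device> end
      zonecfg:zone1> commit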

    Read the article

  • How can I configure MediaWiki to display numbers in titles?

    - by Wikis
    We would like to display numbers in the titles in our MediaWiki wiki. Specifically: in the table of contents numbers are shown like this: 1 Title 1.1 Subtitle 2 Another title However, on the page the titles (that map to the table of contents) appear like this: Title text Subtitle text Another title text That is, no numbers are shown. How can we show numbers in the page contents? There are three possible ways that this could be answered (that I can think of). In our order of preference, they are: Setting per user Setting per page Global setting (for all users)
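
    Assuming a MediaWiki release that still has the "Auto-number headings" preference, the per-user setting lives under Preferences, and the site-wide default can be changed in LocalSettings.php; a sketch follows (the wiki path is a placeholder), and as far as I know there is no supported per-page switch:
      # make auto-numbered headings the default for all users (each user can still override it in their preferences)
      echo "\$wgDefaultUserOptions['numberheadings'] = 1;" >> /var/www/wiki/LocalSettings.php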

    Read the article

  • Emacs saying <M-kp-7> is undefined when dictating quotes with Dragon NaturallySpeaking 12

    - by Keks Dose
    I am dictating my text via Dragon NaturallySpeaking 12 into Emacs. Whenever I say (translated from German) 'open quotes', I expect something like " or » to appear on the screen, but I simply get the message <M-kp-2> is undefined. The same goes for 'close quotes': I get <M-kp-7> is undefined. Does anybody know how to define those virtual keyboard strokes? (global-set-key [M-kp-2] "»") does not work.

    Read the article

  • Email not working on testing site in Plesk before DNS switch

    - by Dilip Rajkumar
    I have to test my website before my DNS is switched to the new server. The new server runs Plesk. I changed my hosts file to point to the new server and tested the site, and it works fine. However, when I try to register, two emails should be sent: one to the user and one to the admin. The admin email address is on the same domain as the site; for example, if my site is test.com, the admin email is [email protected]. The email is not sent to the admin. I know it is not sent because Plesk is searching its own DNS instead of the global public DNS. Does anyone know how to make the site use public DNS when sending email in Plesk? If I set Google Public DNS (8.8.8.8) for the MS record, will it work? Please guide me. Thanks in advance.
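
    A hedged guess at what is happening: the new Plesk box has mail (and a DNS zone) enabled for the domain, so it considers itself the final destination for admin@<domain> and never consults public DNS at all. Rather than changing resolvers, the usual test is to stop the new server from treating the domain's mail as local (domain name below is the example one from the question):
      # check whether the local MTA claims the domain (Plesk commonly drives Postfix or qmail)
      postconf mydestination virtual_mailbox_domains 2>/dev/null | grep -i test.com
      # in the Plesk panel (exact menu varies by version): Mail Settings for the domain > deactivate the
      # mail service for that domain, then retry the registration email so it is routed out via MX lookup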

    Read the article

  • Custom Filter Problem?

    - by mr.lost
    greetings all,iam using spring security 3 and i want to perform some logic(saving some data in the session) when the user is visiting the site and he's remembered so i extended the GenericFilterBean class and performed the logic in the doFilter method then complete the filter chain by calling the chain.doFilter method,and then inserted that filter after the remember me filter in the security.xml file? but there's a problem is the filter is executed on each page even if the user is remembered or not is there's something wrong with the filter implementation or the position of the filter? and i have a simple question,is the filter chain by default is executed on each page? and when making a custom filter should i add it to the web.xml too? the filter class: package projects.internal; import java.io.IOException; import javax.servlet.FilterChain; import javax.servlet.ServletException; import javax.servlet.ServletRequest; import javax.servlet.ServletResponse; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.core.Authentication; import org.springframework.security.core.context.SecurityContextHolder; import org.springframework.web.filter.GenericFilterBean; import projects.ProjectManager; public class rememberMeFilter extends GenericFilterBean { private ProjectManager projectManager; @Autowired public rememberMeFilter(ProjectManager projectManager) { this.projectManager = projectManager; } @Override public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException { System.out.println("In The Filter"); Authentication auth = (Authentication) SecurityContextHolder .getContext().getAuthentication(); HttpServletResponse response = ((HttpServletResponse) res); HttpServletRequest request = ((HttpServletRequest) req); // if the user is not remembered,do nothing if (auth == null) { chain.doFilter(request, response); } else { // the user is remembered save some data in the session System.out.println("User Is Remembered"); chain.doFilter(request, response); } } } the security.xml file: <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> <global-method-security pre-post-annotations="enabled"> </global-method-security> <http use-expressions="true" > <remember-me data-source-ref="dataSource"/> <intercept-url pattern="/" access="permitAll" /> <intercept-url pattern="/images/**" filters="none" /> <intercept-url pattern="/scripts/**" filters="none" /> <intercept-url pattern="/styles/**" filters="none" /> <intercept-url pattern="/p/login" filters="none" /> <intercept-url pattern="/p/register" filters="none" /> <intercept-url pattern="/p/forgot_password" filters="none" /> <intercept-url pattern="/p/**" access="isAuthenticated()" /> <custom-filter after="REMEMBER_ME_FILTER" ref="rememberMeFilter" /> <form-login login-processing-url="/j_spring_security_check" login-page="/p/login" authentication-failure-url="/p/login?login_error=1" default-target-url="/p/dashboard" authentication-success-handler-ref="myAuthenticationHandler" 
always-use-default-target="false" /> <logout/> </http> <beans:bean id="myAuthenticationHandler" class="projects.internal.myAuthenticationHandler" /> <beans:bean id="rememberMeFilter" class="projects.internal.rememberMeFilter" > </beans:bean> <authentication-manager alias="authenticationManager"> <authentication-provider> <password-encoder hash="md5" /> <jdbc-user-service data-source-ref="dataSource" /> </authentication-provider> </authentication-manager> </beans:beans> any help?

    Read the article

  • Detecting Request that uses invalid Encoding using Modsecurity

    - by Ali Ahmad
    I am trying to write a virtual patch with ModSecurity for my hosted web application, using the following rules:
      <Location /index.php>
        SecDefaultAction phase:2,t:none,log,deny
        # Validate parameter names
        SecRule ARGS_NAMES "!^(articleid)$" \
            "msg:'Unknown parameter: %{MATCHED_VAR_NAME}'"
        # Expecting articleid only once
        SecRule &ARGS:articleid "!@eq 1" \
            "msg:'Parameter articleid seen more than once'"
        # Validate parameter articleid
        SecRule ARGS:articleid "!^[0-9]{1,10}$" \
            "msg:'Invalid parameter articleid'"
      </Location>
    The problem is: how can I reject requests that use invalid encoding as a global WAF configuration, so that this patch cannot be circumvented?
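
    For the global encoding check, ModSecurity 2.x has dedicated operators that can live in the server-wide configuration instead of a <Location> block; a sketch (the status code and messages are arbitrary, and newer 2.7+ builds will also insist on an id: action):
      # reject requests with malformed URL-encoded or UTF-8-encoded data
      SecRule REQUEST_URI|REQUEST_BODY "@validateUrlEncoding" \
          "phase:2,t:none,deny,status:400,msg:'Invalid URL encoding'"
      SecRule ARGS|ARGS_NAMES "@validateUtf8Encoding" \
          "phase:2,t:none,deny,status:400,msg:'Invalid UTF-8 encoding'"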

    Read the article

  • Best choice for off-site backup: dd vs tar

    - by plok
    I have two 1TB single-partition hard disks configured as RAID1, of which I would like to make an off-site backup on a third disk, which I am still to buy. The idea is to store the backup at a relative's house, considerably far away from my place, in the hope that all the information will be safe in the case of a global thermonuclear apocalypse. Of course, this backup would be well encrypted. What I still have to decide is whether I am going to simply tar the entire partition or, instead, use dd to create an image of the disks. Is there any non-trivial difference between these two approaches that I could be overlooking? This off-site backup would be updated no more than two or three times a year, in the best of the cases, so performance should not be a factor to be pondered at all. What, and why, would you use if you were me? dd, tar, or a third option?
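
    For comparison, a rough sketch of what the two options look like in practice (the device name, mount point and output paths are assumptions); the main practical difference is that tar copies only live files and tolerates a different-sized target disk, while dd copies every block including free space and ties the image to the original partition layout:
      # file-level backup of the mounted RAID1 filesystem, compressed and symmetrically encrypted
      tar -cpf - /mnt/raid | gzip | gpg --symmetric --cipher-algo AES256 -o /mnt/offsite/backup.tar.gz.gpg
      # block-level image of the md device (take it while unmounted or read-only to keep the image consistent)
      dd if=/dev/md0 bs=1M | gzip | gpg --symmetric --cipher-algo AES256 -o /mnt/offsite/backup.img.gz.gpg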

    Read the article

  • Allow certain users to access a specific directory?

    - by animuson
    I'm trying to figure out how to allow certain users (all of which are accounts of my own) to access a directory of files that I want to share across my accounts. I'm using cPanel and I used WHM to create three separate accounts. The files live on account1 in /home/account1/public_html/source/engines, and I want /home/account2/public_html/source/engines to use exactly the same files without having to upload them to both places every time I change them. So I created a simple symbolic link and added account2 to the group account1 (while still keeping its own group as the primary). It still gives me a Permission Denied error, though. Is there any way I can grant account2, and other accounts that I create for myself, access to those files? I don't want the files to be readable by all users, because I don't want my hosted users to be able to access them, only my own accounts.
      groups account1 returns: account1 : account1
      groups account2 returns: account2 : account2 account1
      /home/account1/public_html/source/engines and all its files belong to account1:account1
    Any other information you might need, just ask.
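
    A sketch of the pieces that usually have to line up for this to work; the home-directory permission is the one cPanel setups most often trip over, and the group names are the ones from the question:
      # group ownership and group read on the shared files
      chgrp -R account1 /home/account1/public_html/source/engines
      chmod -R g+rX /home/account1/public_html/source/engines
      # every directory on the path needs group execute (search) permission for account2 to traverse it;
      # cPanel home directories are often 711/750, which is exactly what produces "Permission Denied"
      chmod g+x /home/account1 /home/account1/public_html /home/account1/public_html/source
      # group membership only takes effect on a new login/session
      id account2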

    Read the article

  • SQL Server: One 12-drive RAID-10 array or 2 arrays of 8-drives and 4-drives

    - by ben
    Setting up a box for SQL Server 2008 (heavy OLTP), which of these would give the best performance? More drives in a RAID-10 array generally means better performance, but will giving up 4 drives to dedicate them to the transaction logs gain us more than it costs? Option 1: 12 drives in RAID-10 plus one hot spare. Option 2: 8 drives in RAID-10 for the database and 4 drives in RAID-10 for the transaction logs, plus 2 hot spares (one for each array). We have 14 drive slots to work with, and it's an older PowerVault that doesn't support global hot spares.

    Read the article

  • Does IP helper forward subnet broadcasts?

    - by Eamon
    Hi, I have a device on a VLAN that uses UDP subnet broadcasts to advertise its presence to similar devices. This works fine on a single VLAN, but now I need to allow it to communicate with similar devices on a second VLAN. I thought of using the IP helper command in the router, but I am wondering if that only forwards global broadcasts (255.255.255.255)? My device sends out a subnet broadcast (e.g. 192.168.6.255) Will IP helper change the destination address to the target subnet (e.g. 192.168.7.255)? Eamon
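
    To the best of my recollection of IOS behaviour: ip helper-address is not limited to 255.255.255.255; it also picks up subnet-directed broadcasts received on the interface it is configured on (for the default forwarded UDP ports, or ones added with ip forward-protocol), and rewrites the destination to whatever the helper points at. If that target is itself a directed broadcast, the egress interface additionally needs ip directed-broadcast. A sketch with made-up interface names, addresses and UDP port:
      ! only needed if the device advertises on a non-default UDP port
      ip forward-protocol udp 30303
      !
      interface Vlan6
       ip address 192.168.6.1 255.255.255.0
       ip helper-address 192.168.7.255
      !
      ! the egress interface must be allowed to emit the translated directed broadcast
      interface Vlan7
       ip address 192.168.7.1 255.255.255.0
       ip directed-broadcast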

    Read the article

  • Need help configuring my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.

    Read the article

  • Directly editing IIS 7 applicationHost.config configuration file

    - by lunadesign
    I know that IIS 7+ now uses XML config files instead of the metabase. I also know that if I edit a web.config file for a given site, IIS automagically detects the changes and implements any corresponding config changes. However, does this also apply to the server-level applicationHost.config settings file? (It's usually located in C:\windows\system32\inetsrv\config.) Specifically, is it safe to carefully edit this file instead of using IIS Manager or the appcmd command line utility? I couldn't find anything in the documentation that said it was okay or not okay to do this. I'm curious because I have to change the bindings for numerous sites from one IP to another. It would be much faster to simply do a global search and replace for the IP address in the config file instead of manually editing a few dozen sites in the GUI.
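
    Hand-editing applicationHost.config is generally considered supported (IIS picks the change up automatically), but for a bulk IP swap appcmd can do the same thing scriptably, which avoids typos in hand edits. The site name and addresses below are placeholders, and note that /bindings replaces the site's whole binding list:
      :: back up the configuration first
      %windir%\system32\inetsrv\appcmd add backup "before-ip-move"
      :: change the binding of one site to the new IP
      %windir%\system32\inetsrv\appcmd set site "ExampleSite" /bindings:http/203.0.113.20:80:www.example.com
      :: enumerate sites to script the change across all of them
      %windir%\system32\inetsrv\appcmd list sites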

    Read the article

  • [Closed] Oracle JDBC connection with Weblogic 10 datasource mapping, giving java.sql.SQLException: Closed Connection

    - by gauravkarnatak
    Oracle JDBC connection with Weblogic 10 datasource mapping, giving problem java.sql.SQLException: Closed Connection I am using weblogic 10 JNDI datasource to create JDBC connections, below is my config <?xml version="1.0" encoding="UTF-8"?> <jdbc-data-source xmlns="http://www.bea.com/ns/weblogic/90" xmlns:sec="http://www.bea.com/ns/weblogic/90/security" xmlns:wls="http://www.bea.com/ns/weblogic/90/security/wls" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/920 http://www.bea.com/ns/weblogic/920.xsd"> <name>XL-Reference-DS</name> <jdbc-driver-params> <url>jdbc:oracle:oci:@abc.XL.COM</url> <driver-name>oracle.jdbc.driver.OracleDriver</driver-name> <properties> <property> <name>user</name> <value>DEV_260908</value> </property> <property> <name>password</name> <value>password</value> </property> <property> <name>dll</name> <value>ocijdbc10</value> </property> <property> <name>protocol</name> <value>oci</value> </property> <property> <name>oracle.jdbc.V8Compatible</name> <value>true</value> </property> <property> <name>baseDriverClass</name> <value>oracle.jdbc.driver.OracleDriver</value> </property> </properties> </jdbc-driver-params> <jdbc-connection-pool-params> <initial-capacity>1</initial-capacity> <max-capacity>100</max-capacity> <capacity-increment>1</capacity-increment> <test-connections-on-reserve>true</test-connections-on-reserve> <test-table-name>SQL SELECT 1 FROM DUAL</test-table-name> </jdbc-connection-pool-params> <jdbc-data-source-params> <jndi-name>ReferenceData</jndi-name> <global-transactions-protocol>OnePhaseCommit</global-transactions-protocol> </jdbc-data-source-params> </jdbc-data-source> When I run a bulk task where there are lots of connections made and closed, sometimes it gives connection closed exception for any of the task in the bulk task. Below is detailed exception' java.sql.SQLException: Closed Connection at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:111) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:145) at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:207) at oracle.jdbc.driver.OracleStatement.ensureOpen(OracleStatement.java:3512) at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3265) at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3367) Any ideas?

    Read the article

  • Websphere SSL handshake with active directory cluster

    - by ring bearer
    We have a WebSphere-based application that uses Active Directory (AD) based security configuration. Under WebSphere "Global security" we have configured the Active Directory server and connection parameters. The Active Directory server is actually a cluster of four servers, say serverdc01, serverdc02, serverdc03 and serverdc04. Each of these servers has its own root certificate, with CN=serverdc01, CN=serverdc02, and so on. To set up SSL communication, I need to retrieve the Active Directory certificate and save it in WebSphere's trust store. When I retrieve the certificate by entering the AD server name and port, I randomly get the certificate of one of serverdc01, serverdc02, ... and then save that certificate to the trust store. Question: do I have to save the certificate from each of serverdc01, serverdc02, ... in the cluster to WebSphere's trust store? What are the general strategies so that each server in the cluster does not require its own root certificate?
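
    In the absence of a shared issuer, yes, each DC's certificate ends up in the trust store; the usual way out is to have all four DCs' LDAPS certificates issued by one internal CA and to trust only that CA's root. A hedged keytool sketch for importing such a root into a PKCS12 trust store outside the admin console (file name, alias and password are placeholders; in practice this is normally done via Security > SSL certificate and key management in the console):
      keytool -importcert -trustcacerts -alias ad-root-ca \
          -file ad-root-ca.cer -keystore trust.p12 \
          -storetype PKCS12 -storepass WebAS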

    Read the article
