Search Results

Search found 24177 results on 968 pages for 'true'.

Page 251/968 | < Previous Page | 247 248 249 250 251 252 253 254 255 256 257 258  | Next Page >

  • Apache not Forwarding Client x509 Certificate to Tomcat via mod_proxy

    - by hooknc
    Hi Everyone, I am having difficulties getting a client x509 certificate to be forwarded to Tomcat from Apache using mod_proxy. From observations and reading a few logs it does seem as though the client x509 certificate is being accepted by Apache. But, when Apache makes an SSL request to Tomcat (which has clientAuth="want"), it doesn't look like the client x509 certificate is passed during the ssl handshake. Is there a reasonable way to see what Apache is doing with the client x509 certificate during its handshake with Tomcat? Here is the environment I'm working with: Apache/2.2.3 Tomcat/6.0.29 Java/6.0_23 OpenSSL 0.9.8e Here is my Apache VirtualHost SSL config: <VirtualHost xxx.xxx.xxx.xxx:443> ServerName xxx ServerAlias xxx SSLEngine On SSLProxyEngine on ProxyRequests Off ProxyPreserveHost On ErrorLog logs/ssl_error_log TransferLog logs/ssl_access_log LogLevel debug SSLProtocol all -SSLv2 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW SSLCertificateFile /usr/local/certificates/xxx.crt SSLCertificateKeyFile /usr/local/certificates/xxx.key SSLCertificateChainFile /usr/local/certificates/xxx.crt SSLVerifyClient optional_no_ca SSLOptions +ExportCertData CustomLog logs/ssl_request_log \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" <Proxy *> AddDefaultCharset Off Order deny,allow Allow from all </Proxy> ProxyPass / https://xxx.xxx.xxx.xxx:8443/ ProxyPassReverse / https://xxx.xxx.xxx.xxx:8443/ </VirtualHost> Then here is my Tomcat SSL Connector: <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" address="xxx.xxx.xxx.xxx" maxThreads="150" scheme="https" secure="true" keystoreFile="/usr/local/certificates/xxx.jks" keypass="xxx_pwd" clientAuth="want" sslProtocol="TLSv1" proxyName="xxx.xxx.xxx.xxx" proxyPort="443" /> Could there possibly be issues with SSL Renegotiation? Could there be problems with the Truststore in our Tomcat instance? (We are using a non-standard Truststore that has partner organization CAs.) Is there better logging for what is happening internally with Apache for SSL? Like what is happening to the client cert or why it isn't forwarding the certificate when tomcats asks for one? Any reasonable assistance would be greatly appreciated. Thank you for your time.
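
    Since Apache terminates the browser's TLS session, the client x509 certificate is not automatically re-presented to Tomcat during the proxied handshake; Apache would either need its own proxy client certificate (SSLProxyMachineCertificateFile) or the original certificate passed along in a request header. A couple of openssl checks can narrow down where things stop — a rough sketch only, and the host and file names below are placeholders:

        # Does Tomcat actually request a client certificate? Look for "Acceptable client certificate CA names"
        openssl s_client -connect xxx.xxx.xxx.xxx:8443 -tls1

        # Does Tomcat's (non-standard) truststore accept the partner certificate when it is offered directly?
        openssl s_client -connect xxx.xxx.xxx.xxx:8443 -tls1 -cert client.pem -key client-key.pem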

    Read the article

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables: iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080 This works, but I wonder: If this has any significant performance impact compared to binding directly to port 80 If I could make a similar setup also work for HTTPS (or if that must run on 443) If there's a way to avoid other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently to other users somehow? ...or if I should use authbind/privbind instead? Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to: exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ... For privbind: exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ... (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves, that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root of course). However, in both cases I get the following exception, when starting the domain (start-domain): [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#] I haven't found a solution for that yet (after searching the web, it seems, that this isn't so easy?) But maybe, the solution with iptables is good enough - what do you think? Thanks, Chris
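
    For what it's worth, the REDIRECT target is handled inside the kernel's NAT table, so the per-connection overhead compared to binding to port 80 directly should be negligible. A rough sketch of the companion rules (GlassFish's default HTTPS listener on 8181 is an assumption here):

        # Same trick for HTTPS
        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181
        # Connections originating on the host itself bypass PREROUTING, so cover OUTPUT as well
        iptables -t nat -I OUTPUT -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080
        # Persist the rules across reboots (Debian)
        iptables-save > /etc/iptables.rules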

    Read the article

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from Centos3 to Centos5 an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver as there is no proprietary ATI driver for this board on linux. The xorg.conf: Section "Device" Identifier "Videocard0" Driver "ati" EndSection Section "Screen" Identifier "Screen0" Device "Videocard0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 Modes "1024x768" "800x600" "640x480" EndSubSection EndSection Section "DRI" Group 0 Mode 0666 EndSection A similar machine that still has Centos3 installed is able to start DRI on the X server while this one is not, this is the Xorg.0.log for the Centos5 machine: drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed drmOpenDevice: node name is /dev/dri/card0 drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: open result is -1, (No such device or address) drmOpenDevice: Open failed [drm] failed to load kernel module "mach64" (II) ATI(0): [drm] drmOpen failed (EE) ATI(0): [dri] DRIScreenInit Failed (II) ATI(0): Largest offscreen areas (with overlaps): (II) ATI(0): 1024 x 1279 rectangle at 0,768 (II) ATI(0): 768 x 1280 rectangle at 0,768 (II) ATI(0): Using XFree86 Acceleration Architecture (XAA) Screen to screen bit blits Solid filled rectangles 8x8 mono pattern filled rectangles Indirect CPU to Screen color expansion Solid Lines Offscreen Pixmaps Setting up tile and stipple cache: 32 128x128 slots 10 256x256 slots (==) ATI(0): Backing store disabled (==) ATI(0): Silken mouse enabled (II) ATI(0): Direct rendering disabled (==) RandR enabled I also tried using EXA instead of XAA and setting: Option "AccelMethod" "XAA" Option "XAANoOffscreenPixmaps" "true" uname -a Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux rpm -qa | grep xorg-x11-server xorg-x11-server-utils-7.1-4.fc6 xorg-x11-server-sdk-1.1.1-48.52.el5 xorg-x11-server-Xvfb-1.1.1-48.52.el5 xorg-x11-server-Xnest-1.1.1-48.52.el5 xorg-x11-server-Xorg-1.1.1-48.52.el5 The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".
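
    The drmOpen failure suggests there is no usable mach64 DRM module for this kernel. A few quick checks (standard module locations assumed):

        # Is a mach64 DRM module shipped with the running kernel at all?
        find /lib/modules/$(uname -r) -name 'mach64*'
        # Which DRM modules are currently loaded, and does a DRI device node exist?
        lsmod | grep drm
        ls -l /dev/dri/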

    Read the article

  • IIS URL Rewrite HTTP to HTTPS with Port

    - by Andy Arismendi
    My website has two bindings: 1000 and 1443 (port 80/443 are in use by another website on the same IIS instance). Port 1000 is HTTP, port 1443 is HTTPS. What I want to do is redirect any incoming request using "htt p://server:1000" to "htt ps://server:1443". I'm playing around with IIS 7 rewrite module 2.0 but I'm banging my head against the wall. Any insight is appreciated! BTW the rewrite configuration below works great with a site that has an HTTP binding on port 80 and HTTPS binding on port 443, but it doesn't work with my ports. P.S. My URLs intentionally have spaces because the 'spam prevention mechanism' kicked in. For some reason google login doesn't work anymore so I had to create an OpenID account (No Script could be the culprit). I'm not sure how to get XML to display nicely so I added spaces after the opening brackets. < ?xml version="1.0" encoding="utf-8"? < configuration < system.webServer < rewrite < rules < rule name="HTTP to HTTPS redirect" stopProcessing="true" < match url="(.*)" / < conditions trackAllCaptures="true" < add input="{HTTPS}" pattern="off" / < /conditions < action type="Redirect" redirectType="Found" url="htt ps: // {HTTP_HOST}/{R:1}" / < /rule < /rules < /rewrite < /system.webServer < /configuration

    Read the article

  • Remote Scripted Installation of Sun/Oracle JRE

    - by chrisbunney
    I'm attempting to automate the installation of a Debian server (debian 6.0 squeeze 64bit). Part of the installation requires the Sun JRE package to be installed. This package has a licence agreement, which has to be accepted. I have a script which uses the following lines to accept and install the JRE: echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections apt-get install -y sun-java6-jre This works fine when executing the script locally. However, I need to execute the script remotely using the ssh command, e.g.: ssh -i keyFile root@hostname './myScript' This doesn't work. In particular, it fails on apt-get install -y sun-java6-jre. It would seem that in spite of me setting the licence agreement to accepted, when run remotely in this manner it is ignored. Despite setting the value to true, I still get prompted to manually accept the agreement when I run this command: ssh -i keyFile root@hostname 'apt-get install -y sun-java6-jre' I suspect it is something to do with environment that is taken care of when running a proper terminal session, but have no idea what to try next to fix it. So, what do I have to do to get this command (and hence my deployment script) to run correctly when executing it remotely? Or is there an alternative way that allows me to install the JRE remotely by another means? Edit 0: I have compared the output of env when executed remotely via ssh and when executed via a local terminal session. The only difference between the outputs is that the local terminal session has the additional value TERM=xterm.
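
    A sketch of one way to keep the whole install non-interactive over ssh — the extra debconf selection for sun-java6-jre and the DEBIAN_FRONTEND variable are assumptions, not something the original script already sets:

        ssh -i keyFile root@hostname '
          echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
          echo "sun-java6-jre shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
          DEBIAN_FRONTEND=noninteractive apt-get install -y sun-java6-jre
        '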

    Read the article

  • A network-related or instance-specific error occurred while establishing a connection to SQL Server

    - by sf
    Hi, I'm getting the following error when trying to load an ASP.NET MVC app on IIS 7 with SQL Server 2008 Express. The app uses LINQ to SQL. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) I've done some searching and all answers point to enabling TCP connections in SQL Server Configuration, which I have done to no avail. The connection string I am using is: Server=SERVERNAME\SQLEXPRESS;Database=DBName;Integrated Security=true The catch: I have another app that could already talk to the SQL Server just fine, even before playing around with the SQL Server Configuration settings. The other app uses the following connection string: Data Source=SERVERNAME\SQLEXPRESS;Initial Catalog=OtherDbName;Integrated Security=True;Persist Security Info=False;Connect Timeout=120 I've tried that connection string on the app that isn't working and it still doesn't work. Please help. I think I'm about to go crazy.

    Read the article

  • VBoxHeadless without a command prompt (VirtualBox)

    - by joe
    I'm trying to run VirtualBox VMs in the background from a service, and I'm having trouble starting the process the way I want. I'd like to start the VirtualBox guest in headless mode as a separate process and show no GUI at all. Here's what I've tried: From the command line: start vboxheadless -s "Ubuntu Server" In C#: ProcessStartInfo info = new ProcessStartInfo { UseShellExecute = false, RedirectStandardOutput = true, ErrorDialog = false, WindowStyle = ProcessWindowStyle.Hidden, CreateNoWindow = true, FileName = "C:/program files/sun/virtualbox/vboxheadless", Arguments = "-s \"Ubuntu Server\"" }; Process p = new Process(); p.StartInfo = info; p.Start(); String output = p.StandardOutput.ReadToEnd(); //BLOCKS! (output stream isn't closed) I want to be able to read the output to know whether starting the server was a success. However, it seems as though the window that's spawned never closes its output stream. It's also worth mentioning that I've tried using vboxmanage startvm "Ubuntu Server" --type=vrdp; I can determine whether the server started properly using this, but it shows a new command prompt window for the newly started VirtualBox guest.

    Read the article

  • MySQL tmpdir on /dev/shm with SELinux

    - by smorfnip
    On RHEL5, I have a small MySQL database that has to write temp files. To speed up this process, I would like to move the temporary directory to /dev/shm by putting the following line into my.cnf: tmpdir=/dev/shm/mysqltmp I can create /dev/shm/mysqltmp just fine and do chown mysql:mysql /dev/shm/mysqltmp chcon --reference /tmp/ /dev/shm/mysqltmp I've tried to make SELinux happy by applying the same settings that are in effect for /tmp/ (and /var/tmp/), which is presumably where MySQL is writing its tmp files if tmpdir is undefined. The problem is that SELinux complains about MySQL having access to that directory. I get the following in /var/log/messages: SELinux is preventing mysqld (mysqld_t) "getattr" to /dev/shm (tmpfs_t). SELinux is a hard mistress. Details: Source Context root:system_r:mysqld_t Target Context system_u:object_r:tmpfs_t Target Objects /dev/shm [ dir ] Source mysqld Source Path /usr/libexec/mysqld Port <Unknown> Host db.example.com Source RPM Packages mysql-server-5.0.77-3.el5 Target RPM Packages Policy RPM selinux-policy-2.4.6-255.el5_4.1 Selinux Enabled True Policy Type targeted MLS Enabled True Enforcing Mode Enforcing Plugin Name catchall_file Host Name db.example.com Platform Linux db.example.com 2.6.18-164.2.1.el5 #1 SMP Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64 Alert Count 46 First Seen Wed Nov 4 14:23:48 2009 Last Seen Thu Nov 5 09:46:00 2009 Local ID e746d880-18f6-43c1-b522-a8c0508a1775 ls -lZ /dev/shm shows drwxrwxr-x mysql mysql system_u:object_r:tmp_t mysqltmp and permissions for /dev/shm itself are drwxrwxrwt root root system_u:object_r:tmpfs_t shm I've also tried chcon -R -t mysqld_t /dev/shm/mysqltmp and setting the group on /dev/shm to mysql with no better results. Shouldn't it be enough to tell SELinux, hey, this is a temp directory just like MySQL was using before? Short of turning off SELinux, how do I make this work? Do I need to edit SELinux policy files?
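
    One possible direction, sketched rather than tested (the mysqld_tmp_t type and the module name are assumptions): give the directory a type mysqld is already allowed to write to, and let audit2allow cover the remaining getattr denial on /dev/shm itself:

        semanage fcontext -a -t mysqld_tmp_t "/dev/shm/mysqltmp(/.*)?"
        restorecon -Rv /dev/shm/mysqltmp
        # Turn the logged denials into a small local policy module
        grep mysqld /var/log/audit/audit.log | audit2allow -M mysqltmpshm
        semodule -i mysqltmpshm.pp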

    Read the article

  • cyrus-imapd does not work with sasldb2, but postfix does

    - by Felix Chang
    centos6 64 bits: when i use pop3 for access cyrus-imapd: S: +OK li557-53 Cyrus POP3 v2.3.16-Fedora-RPM-2.3.16-6.el6_2.5 server ready <3176565056.1354071404@li557-53> C: USER [email protected] S: +OK Name is a valid mailbox C: PASS abcabc S: -ERR [AUTH] Invalid login C: QUIT and with USER "abc" failed too. my imapd.conf: configdirectory: /var/lib/imap partition-default: /var/spool/imap admins: cyrus sievedir: /var/lib/imap/sieve sendmail: /usr/sbin/sendmail hashimapspool: true sasl_pwcheck_method: auxprop sasl_mech_list: PLAIN LOGIN tls_cert_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem tls_key_file: /etc/pki/cyrus-imapd/cyrus-imapd.pem tls_ca_file: /etc/pki/tls/certs/ca-bundle.crt allowplaintext: true #defaultdomain: myabc.com loginrealms: myabc.com sasldblistuser2: [email protected]: userPassword but my postfix is ok with same user. /etc/sasl2/smtpd.conf pwcheck_method: auxprop mech_list: plain login log_level:7 saslauthd_path:/var/run/saslauthd/mux /etc/postfix/main.cf queue_directory = /var/spool/postfix command_directory = /usr/sbin daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix mail_owner = postfix myhostname = localhost mydomain = myabc.com myorigin = $mydomain inet_interfaces = all inet_protocols = all mydestination = $myhostname, localhost.$mydomain, localhost,$mydomain local_recipient_maps = unknown_local_recipient_reject_code = 550 mynetworks_style = subnet mynetworks = 192.168.0.0/24, 127.0.0.0/8 relay_domains = $mydestination alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases home_mailbox = Maildir/ mailbox_transport = lmtp:unix:/var/lib/imap/socket/lmtp debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5 sendmail_path = /usr/sbin/sendmail.postfix newaliases_path = /usr/bin/newaliases.postfix mailq_path = /usr/bin/mailq.postfix setgid_group = postdrop html_directory = no manpage_directory = /usr/share/man sample_directory = /usr/share/doc/postfix-2.6.6/samples readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_security_options = noanonymous smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination smtpd_sasl_security_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination message_size_limit = 15728640 broken_sasl_auth_clients=yes please help.
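
    A couple of checks worth running on the server (the realm handed to saslpasswd2 is an assumption based on the loginrealms setting):

        # What users/realms are actually stored in sasldb, and can the cyrus user read the file?
        sasldblistusers2
        ls -l /etc/sasldb2
        # (Re)create the user with an explicit realm so it matches what Cyrus looks up
        saslpasswd2 -c -u myabc.com abc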

    Read the article

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB. => ctrl all show config detail Smart Array P410i in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: 5001438006FD9A50 Cache Serial Number: PAAVP9VYFB8Y RAID 6 (ADG) Status: Disabled Controller Status: OK Chassis Slot: Hardware Revision: Rev C Firmware Version: 3.66 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 3 secs Surface Scan Mode: Idle Queue Depth: Automatic Monitor and Performance Delay: 60 min Elevator Sort: Enabled Degraded Performance Optimization: Disabled Inconsistency Repair Policy: Disabled Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 15 secs Cache Board Present: True Cache Status: OK Accelerator Ratio: 25% Read / 75% Write Drive Write Cache: Enabled Total Cache Size: 512 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True Array: A Interface Type: SAS Unused Space: 412476 MB Status: OK Logical Drive: 1 Size: 72.0 GB Fault Tolerance: RAID 1+0 Heads: 255 Sectors Per Track: 32 Cylinders: 18504 Strip Size: 256 KB Status: OK Array Accelerator: Enabled Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3 Disk Name: /dev/cciss/c0d0 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB OS Status: LOCKED Logical Drive Label: AE438D6A5001438006FD9A50BE0A Mirror Group 0: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK) Mirror Group 1: physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK) SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 Device Number: 250 Firmware Version: RevC WWID: 5001438006FD9A5F Vendor ID: PMCSIERA Model: SRC 8x6G

    Read the article

  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in ec2 via gems (on rhel6) and I'm running into problems with the cert names and getting the master able to talk to itself. my puppet.conf looks like this: [main] logdir = /var/log/puppet rundir = /var/run/puppet vardir = /var/lib/puppet ssldir = $vardir/ssl pluginsync = true environment = production report = true certname = master When I start the puppetmaster process the ssl directory looks like: ssl/private_keys/master.pem ssl/crl.pem ssl/public_keys/master.pem ssl/ca/ca_crl.pem ssl/ca/signed/master.pem ssl/ca/ca_crt.pem ssl/ca/ca_pub.pem ssl/ca/ca_key.pem ssl/certs/ca.pem ssl/certs/master.pem I have an /etc/hosts entry on the box to point the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent I get the following: # puppet agent --test info: Retrieving plugin err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Server hostname 'puppet' did not match server certificate; expected master err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master warning: Not using cache on failed catalog err: Could not retrieve catalog; skipping run err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master If I specify the certname as the server (with corresponding hosts entry) I get: # puppet agent --test --server master info: Retrieving plugin err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins info: Caching catalog for master info: Applying configuration version '1321805956' notice: Finished catalog run in 0.05 seconds Which is success of a sort, that source error will bite me later when I'm applying manifests. I've tried a couple of other variations with using the ec2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and use dns/hosts to control what 'puppet' resolves to in order to decide which server (plays easier with availability zones, etc)
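
    A sketch of the usual fix — re-issue the master's certificate with DNS alt names so that both 'master' and 'puppet' validate (run on the puppetmaster, which is assumed to also be the CA):

        # Remove only the master's own cert, not the CA
        puppet cert clean master
        # Generate a new one that is also valid for the name 'puppet'
        puppet cert generate master --dns_alt_names=master,puppet
        # Then restart the master so it picks up the new certificate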

    Read the article

  • email output of powershell script

    - by Gordon Carlisle
    I found this wonderful script that outputs the status of the current DFS backlog to the powershell console. This works great, but I need the script to email me so I can schedule it to run nightly. I have tried using the Send-MailMessage command, but can't get it to work. Mainly because my powershell skills are very weak. I believe most of the issue revolve around the script using the Write-Host command. While the coloring is nice I would much rather have it email me the results. I also need the solution to be able to specify a mail server since the dfs servers don't have email capability. Any help or tips are welcome and appreciated. Here is the code. $RGroups = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query "SELECT * FROM DfsrReplicationGroupConfig" $ComputerName=$env:ComputerName $Succ=0 $Warn=0 $Err=0 foreach ($Group in $RGroups) { $RGFoldersWMIQ = "SELECT * FROM DfsrReplicatedFolderConfig WHERE ReplicationGroupGUID='" + $Group.ReplicationGroupGUID + "'" $RGFolders = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGFoldersWMIQ $RGConnectionsWMIQ = "SELECT * FROM DfsrConnectionConfig WHERE ReplicationGroupGUID='"+ $Group.ReplicationGroupGUID + "'" $RGConnections = Get-WmiObject -Namespace "root\MicrosoftDFS" -Query $RGConnectionsWMIQ foreach ($Connection in $RGConnections) { $ConnectionName = $Connection.PartnerName.Trim() if ($Connection.Enabled -eq $True) { if (((New-Object System.Net.NetworkInformation.ping).send("$ConnectionName")).Status -eq "Success") { foreach ($Folder in $RGFolders) { $RGName = $Group.ReplicationGroupName $RFName = $Folder.ReplicatedFolderName if ($Connection.Inbound -eq $True) { $SendingMember = $ConnectionName $ReceivingMember = $ComputerName $Direction="inbound" } else { $SendingMember = $ComputerName $ReceivingMember = $ConnectionName $Direction="outbound" } $BLCommand = "dfsrdiag Backlog /RGName:'" + $RGName + "' /RFName:'" + $RFName + "' /SendingMember:" + $SendingMember + " /ReceivingMember:" + $ReceivingMember $Backlog = Invoke-Expression -Command $BLCommand $BackLogFilecount = 0 foreach ($item in $Backlog) { if ($item -ilike "*Backlog File count*") { $BacklogFileCount = [int]$Item.Split(":")[1].Trim() } } if ($BacklogFileCount -eq 0) { $Color="white" $Succ=$Succ+1 } elseif ($BacklogFilecount -lt 10) { $Color="yellow" $Warn=$Warn+1 } else { $Color="red" $Err=$Err+1 } Write-Host "$BacklogFileCount files in backlog $SendingMember->$ReceivingMember for $RGName" -fore $Color } # Closing iterate through all folders } # Closing If replies to ping } # Closing If Connection enabled } # Closing iteration through all connections } # Closing iteration through all groups Write-Host "$Succ successful, $Warn warnings and $Err errors from $($Succ+$Warn+$Err) replications." Thanks, Gordon

    Read the article

  • Connecting to Outlook Anywhere through TMG 2010 with a certificate always fails

    - by Albert Widjaja
    Hi, I have successfully published Exchange Activesync using TMG 2010 and OWA internally only but somehow when I tried to publish the Outlook Anywhere it failed ( as can be seen from the https://www.testexchangeconnectivity.com ) Settings: IIS 7 settings, I have unchecked the require SSL and "Ignore" the client certificate Exchange CAS settings: ServerName : ExCAS02-VM SSLOffloading : True ExternalHostname : activesync.domain.com ClientAuthenticationMethod : Basic IISAuthenticationMethods : {Basic} MetabasePath : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc Path : C:\Windows\System32\RpcProxy Server : ExCAS02-VM AdminDisplayName : ExchangeVersion : 0.1 (8.0.535.0) Name : Rpc (Default Web Site) DistinguishedName : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative....... Identity : ExCAS02-VM\Rpc (Default Web Site) Guid : 59873fe5-3e09-456e-9540-f67abc893f5e ObjectCategory : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory ObjectClass : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory} WhenChanged : 18/02/2011 4:31:54 PM WhenCreated : 18/02/2011 4:30:27 PM OriginatingServer : ADDC01.domainad.com IsValid : True Test-OutlookWebServices settings: 1013 Error When contacting https://activesync.domain.com/Rpc received the error The remote server returned an error: (500) Internal Server Error. 1017 Error [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds. https://www.testexchangeconnectivity.com testing result: Checking the IIS configuration for client certificate authentication. Client certificate authentication was detected. Additional Details Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication. environment: Windows Server 2008 (HT-CAS) Exchange Server 2007 SP1 TMG 2010 Standard Outlook 2007 client SP2. Any kind of help would be greatly appreciated. Thanks.

    Read the article

  • JAWStats statspath error on Windows

    - by crosenblum
    I have AWStats which works fine, and JAWStats I am trying to get working. I have tried back and forward slashes to get the program to read the dirdata files. I even moved the folder of dirdata outside of program files folder, in case it had problems with folder names with spaces in them. Here is my config file. // core config parameters $sConfigDefaultView = "thismonth.all"; $bConfigChangeSites = true; $bConfigUpdateSites = true; $sUpdateSiteFilename = "xml_update.php"; // individual site configuration // awstats092012.noname.jumpingcrab.com.txt $aConfig["site1"] = array( "statspath" => "C:\\Program Files\\AWStats\\DirData\\", "statsname" => "awstats[MM][YYYY].yourexample.com.txt", "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\", "siteurl" => "http://yourexample.com", "theme" => "default", "fadespeed" => 250, "password" => "", "includes" => "" ); Domain names changed to protect the innocent...:P Here is the error message: An error has occured: No AWStats Log Files Found JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\ Is this the correct folder? Is your config name, site1, correct? Please refer to the installation instructions for more information.

    Read the article

  • What causes this logrotate behavior in Puppet?

    - by ujjain
    After running logrotate, Puppet starts writing its logs into /var/log/puppet/masterhttp.log-20130616. How come it doesn't keep logging to /var/log/puppet/masterhttp.log? Normal behavior would be to rename the original log file and start writing to a fresh one, keeping the old file as an archive. [root@puppetmaster puppet]# ls -al total 97520 drwxr-x---. 2 puppet puppet 4096 Jun 16 03:24 . drwxr-xr-x. 12 root root 4096 Jul 1 09:11 .. -rw-r--r--. 1 puppet puppet 0 Jun 16 03:24 masterhttp.log -rw-rw----. 1 puppet puppet 99847187 Jul 1 09:19 masterhttp.log-20130616 [root@puppetmaster init.d]# cat /etc/logrotate.d/puppet /var/log/puppet/*log { missingok notifempty create 0644 puppet puppet sharedscripts postrotate pkill -USR2 -u puppet -f /usr/sbin/puppetmasterd || true [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true endscript } [root@puppetmaster init.d]# How can I make Puppet log to /var/log/puppet/masterhttp.log and not to /var/log/puppet/masterhttp.log-20130616? Even restarting puppet doesn't make it log into /var/log/puppet/masterhttp.log instead of /var/log/puppet/masterhttp.log-20130616.
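
    A quick way to confirm what is going on — the running master still holds the old file descriptor, so writes keep landing in the rotated file until the process reopens its log (the puppetmaster service name below is an assumption; adjust to the actual init script):

        # Which process has the rotated file open?
        lsof /var/log/puppet/masterhttp.log-20130616
        # A full restart of the master (not just the agent reload in the postrotate script) reopens masterhttp.log
        /etc/init.d/puppetmaster restart

    Alternatively, adding copytruncate to the logrotate stanza sidesteps the reopen entirely, at the cost of possibly losing a few lines during the copy.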

    Read the article

  • Dual monitor support via RDP 7 to Windows 7 on ESXi

    - by rphilli5
    I am trying to RDP from a Windows 7 Professional dual monitor physical machine to a Windows 7 Professional VM hosted on esxi 4.0. I can get the spanning option to work to both monitors, but I have tried 3 different methods of connecting but have not been able to use true multiple monitors. At different times, I tried checking the "use all monitors" option, command line mstsc /multimon and added the line use multimon:i:1 to the .rdp file. None of these worked. Any ideas? The physical machine can connect to other Windows 7 physical machines with true multi monitor access. I also have the same issue when going from a 32bit RC1 machine to a Windows 7 Professional x64, but not when going in the reverse direction. Here's the .rdp: screen mode id:i:2 use multimon:i:1 desktopwidth:i:1440 desktopheight:i:900 session bpp:i:16 winposstr:s:0,1,341,118,1139,568 compression:i:1 keyboardhook:i:2 audiocapturemode:i:0 videoplaybackmode:i:1 connection type:i:1 displayconnectionbar:i:1 disable wallpaper:i:1 allow font smoothing:i:0 allow desktop composition:i:0 disable full window drag:i:1 disable menu anims:i:1 disable themes:i:1 disable cursor setting:i:0 bitmapcachepersistenable:i:1 full address:s:192.168.1.5 audiomode:i:0 redirectprinters:i:1 redirectcomports:i:0 redirectsmartcards:i:1 redirectclipboard:i:1 redirectposdevices:i:0 redirectdirectx:i:1 autoreconnection enabled:i:1 authentication level:i:2 prompt for credentials:i:0 negotiate security layer:i:1 remoteapplicationmode:i:0 alternate shell:s: shell working directory:s: gatewayhostname:s: gatewayusagemethod:i:4 gatewaycredentialssource:i:4 gatewayprofileusagemethod:i:0 promptcredentialonce:i:1 use redirection server name:i:0 drivestoredirect:s:

    Read the article

  • Apache mod_rewrite and mod_vhost_alias Virtual Hosts and %1

    - by Matt Wall
    I have put the main bits of my httpd.conf down below. I am using %1 to get the host field so I can dynamically add vhosts by just creating dns/folders. One problem is I need to reference this: HttpStreamingLiveEventPath "D:/FMSApps/%1" HttpStreamingContentPath "D:/FMSApps/%1" In Apache when I try say to do this: http://test.domain.com/hds-vod/myfile.mp4.f4m it sees the %1 in the logs, and fails. Apache gives me this: [error] mod_jithttp [403]: No access to D:/Content/%1/DefaultContent/eve.mp4 What I'm looking for is the D:/Content/%1/DefaultContent/eve.mp4 to become D:/Content/test/DefaultContent/eve.mp4 Anyone have any useful resources / hints etc. to help me? Meanwhile my Google searching continues...! Listen 80 ServerName main1.rtmphost.com AccessFileName .htaccess ServerSignature On UseCanonicalName Off HostnameLookups Off Timeout 120 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 15 RewriteLogLevel 0 RewriteLog logs/rewrite.log DocumentRoot D:/Content LoadModule vhost_alias_module modules/mod_vhost_alias.so VirtualDocumentRoot "D:/Content/%1" RewriteEngine On <Directory /> Options None AllowOverride None Order allow,deny Allow from all Satisfy all </Directory> <IfModule f4fhttp_module> <Location /vod> HttpStreamingEnabled true HttpStreamingContentPath "D:/FMSApps/%1" Options FollowSymLinks </Location> Redirect 301 /live/events/livepkgr/events /hds-live/livepkgr <Location /hds-live> HttpStreamingEnabled true HttpStreamingLiveEventPath "D:/FMSApps/%1" HttpStreamingContentPath "D:/FMSApps/%1" HttpStreamingF4MMaxAge 2 HttpStreamingBootstrapMaxAge 2 HttpStreamingFragMaxAge -1 Options FollowSymLinks </Location> </IfModule>

    Read the article

  • schedule backup and restore of SSAS 2008 database

    - by Manjot
    Hi, I can backup and restore databases on Microsoft SQL server Analysis Service 2008 using GUI as from Backup SSAS I want to schedule backup and restore it to another server every night. so what i did is : I scripted out the backup and restore process from the GUI. Created a new SQL server agent job in database engine and added a "Run SSAS query" step. Copied the scripts to this step. But it fails. the scripts that the GUI copied out look like: <Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"> <Object> <DatabaseID>DB</DatabaseID> </Object> <File>C:\Backup\DB.abf</File> <AllowOverwrite>true</AllowOverwrite> </Backup> <Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"> <File>\\server\C$\Backup\DB.abf</File> <DatabaseName>DB</DatabaseName> <AllowOverwrite>true</AllowOverwrite> </Restore> Any help please?

    Read the article

  • exim4 redirect mail sent to *@domain1.example.com to *@domain2.example.com

    - by nightcoder
    Current situation: We have a VPS that hosts a website example.org. Exim is configured to work as a smarthost. All emails sent through exim are successfully relayed to another mail server (that is working on example.com). Goal: To forward mail sent to *@example.org to *@example.com, i.e. change the recipient's address from *@example.org to *@example.com. Problem: If I send email to address *@example.org, then it seems exim doesn't change the address, it still relays the message to another mail server but recipient is still *@example.org. Maybe the redirect is not applied for some reason. Configuration and logs: /etc/exim4/update-exim4.conf.conf: dc_eximconfig_configtype='smarthost' dc_other_hostnames='' dc_local_interfaces='' dc_readhost='example.org' dc_relay_domains='example.org' dc_minimaldns='false' dc_relay_nets='0.0.0.0/32' dc_smarthost='example.com::26' CFILEMODE='644' dc_use_split_config='false' dc_hide_mailname='true' dc_mailname_in_oh='true' dc_localdelivery='maildir_home' /etc/exim4/conf.d/router/999_exim4-config_redirect (created by me): domain_redirect: debug_print = "R: forward for $local_part@$domain" driver = redirect domains = example.org data = [email protected] (for now data is set to a specific address for simplicity and testing) exim log when sending email to [email protected] (should be redirected to [email protected]): 2012-03-20 19:40:07 1SA4ud-0005Dw-7k <= [email protected] U=www-data P=local S=657 2012-03-20 19:40:08 1SA4ud-0005Dw-7k => [email protected] R=smarthost T=remote_smtp_smarthost H=domain2.com [184.172.146.66] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,2.5.4.17=#13053737303932,ST=TX,L=Houston,STREET=Suite 400,STREET=11251 Northwest Freeway,O=HostGator.com,OU=HostGator.com,OU=Comodo PremiumSSL Wildcard,CN=*.hostgator.com" 2012-03-20 19:40:08 1SA4ud-0005Dw-7k Completed So, the address is not changed :( Please help! I'm trying to make it work for half a day already :(
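
    One thing worth double-checking: with dc_use_split_config='false', Debian builds the runtime configuration from /etc/exim4/exim4.conf.template and the files under conf.d/ are not used, so a router added only there would never be seen (and update-exim4.conf plus a reload are needed after any change). Exim's address-testing mode shows whether the redirect router is consulted at all:

        # Which router handles the address, and what does it turn into?
        exim4 -bt [email protected]
        # Same test with routing debug output
        exim4 -d+route -bt [email protected]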

    Read the article

  • Configure nginx for multiple node.js apps with their own domains

    - by udo
    I have a node webapp up and running with my nginx on debian squeeze. Now I want to add another one with an own domain but when I do so, only the first app is served and even if I go to the second domain I simply get redirected to the first webapp. Hope you see what I did wrong here: example1.conf: upstream example1.com { server 127.0.0.1:3000; } server { listen 80; server_name www.example1.com; rewrite ^/(.*) http://example1.com/$1 permanent; } # the nginx server instance server { listen 80; server_name example1.com; access_log /var/log/nginx/example1.com/access.log; # pass the request to the node.js server with the correct headers and much more can be added, see nginx config options location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://example1.com; proxy_redirect off; } } example2.conf: upstream example2.com { server 127.0.0.1:1111; } server { listen 80; server_name www.example2.com; rewrite ^/(.*) http://example2.com/$1 permanent; } # the nginx server instance server { listen 80; server_name example2.com; access_log /var/log/nginx/example2.com/access.log; # pass the request to the node.js server with the correct headers and much more can be added, see nginx config options location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://example2.com; proxy_redirect off; } } curl simply does this: zazzl:Desktop udo$ curl -I http://example2.com/ HTTP/1.1 301 Moved Permanently Server: nginx/1.2.2 Date: Sat, 04 Aug 2012 13:46:30 GMT Content-Type: text/html Content-Length: 184 Connection: keep-alive Location: http://example1.com/ Thanks :)
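
    A couple of quick checks from the box itself — if example2.conf is not actually enabled, or nginx was never reloaded, every Host header falls through to the first server block, which would produce exactly this redirect to example1.com (the sites-enabled path below is Debian's usual layout):

        # Which server block answers for each Host?
        curl -I -H "Host: example1.com" http://127.0.0.1/
        curl -I -H "Host: example2.com" http://127.0.0.1/
        # Is the second vhost actually loaded, and is the config valid?
        ls -l /etc/nginx/sites-enabled/
        nginx -t && nginx -s reload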

    Read the article

  • Output Caching with IIS7 - How To for a dynamic aspx page?

    - by Lieven Cardoen
    I have a RetrieveBlob.aspx that gets some query string variables and returns an asset. Eeach url corresponds to a unique asset. In the RetrieveBlob.aspx a Cache Profile is set. In Web.Config the profile looks like (under system.web tag: <caching> <outputCache enableOutputCache="true" /> <outputCacheSettings> <outputCacheProfiles> <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" /> </outputCacheProfiles> </outputCacheSettings> </caching> Ok, this works fine. When I put a breakpoint in the code behind of RetrieveBlob.aspx, it gets triggered the first time, and all the other times not. Now, I throw away the Cache Profile and instead I'm having this in my Web.Config under System.WebServer: <caching> <profiles> <add extension=".swf" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".flv" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".gif" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".png" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".mp3" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".jpeg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> <add extension=".jpg" policy="CacheForTimePeriod" kernelCachePolicy="CacheForTimePeriod" duration="00:08:00" /> </profiles> </caching> Now the caching doesn't work anymore. What am I doing wrong? Is it possible to configure under Caching tag of System.WebServer a Caching Profile for a Dynamic aspx page?

    Read the article

  • Hudson authentication via wget returns HTTP error 302

    - by Rafael
    I'm trying to make a script to authenticate in hudson using wget and store the authentication cookie. The contents of the script is this: wget \ --no-check-certificate \ --save-cookies /home/hudson/hudson-authentication-cookie \ --output-document "-" \ 'https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true' Unfortunately, when I run this script, I get: --2011-02-03 13:39:29-- https://myhudsonserver:8443/hudson/j_acegi_security_check? j_username=my_username&j_password=my_password&remember_me=true Resolving myhudsonserver... 127.0.0.1 Connecting to myhudsonserver|127.0.0.1|:8443... connected. WARNING: cannot verify myhudsonserver's certificate, issued by `/C=Unknown/ST=Unknown/L=Unknown/O=Unknown/OU=Unknown/CN=myhudsonserver': Self-signed certificate encountered. HTTP request sent, awaiting response... 302 Moved Temporarily Location: https://myhudson:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F [following] --2011-02-03 13:39:29-- https://myhudsonserver:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F Reusing existing connection to myhudsonserver:8443. HTTP request sent, awaiting response... 404 Not Found 2011-02-03 13:39:29 ERROR 404: Not Found. There's no error log in any of hudson's tomcat log files. Does anyone has any idea about what might be happening? Thanks.
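
    One variation worth trying (a sketch only, not verified against this Hudson version): send the credentials as a POST body and keep the session cookie, since the cookie set alongside the 302 is a session cookie and wget drops those from --save-cookies unless told otherwise:

        wget --no-check-certificate \
             --save-cookies /home/hudson/hudson-authentication-cookie \
             --keep-session-cookies \
             --post-data 'j_username=my_username&j_password=my_password&remember_me=true' \
             --output-document "-" \
             'https://myhudsonserver:8443/hudson/j_acegi_security_check'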

    Read the article

  • MySQL, SSL and Java client problem

    - by CarlosH
    I'm trying to connect to an SSL-enabled mysql server from my own java application. After setting up ssl on mysqld, and successfuly tested an account using "REQUIRE ISSUER and SUBJECT", I wanted to use that account in a java app. I've generated a private key (to a file called keystore.jks) and csr using keytool, and signed the csr using my own CA(The same used with mysqld and its certificate). Once signed the csr, I've imported the CA and client cert into the keystore.jks file. When running the application the SSL connection can't be established. Relevant logs: ... [Raw read]: length = 5 0000: 16 00 00 02 FF ..... main, handling exception: javax.net.ssl.SSLException: Unsupported record version Unknown-0.0 main, SEND TLSv1 ALERT: fatal, description = unexpected_message Padded plaintext before ENCRYPTION: len = 32 0000: 02 0A BE 0F AD 64 0E 9A 32 3B FE 76 EF 40 A4 C9 .....d..2;.v.@.. 0010: B4 A7 F3 25 E7 E5 09 09 09 09 09 09 09 09 09 09 ...%............ main, WRITE: TLSv1 Alert, length = 32 [Raw write]: length = 37 0000: 15 03 01 00 20 AB 41 9E 37 F4 B8 44 A7 FD 91 B1 .... .A.7..D.... 0010: 75 5A 42 C6 70 BF D4 DC EC 83 01 0C CF 64 C7 36 uZB.p........d.6 0020: 2F 69 EC D2 7F /i... main, called closeSocket() main, called close() main, called closeInternal(true) main, called close() main, called closeInternal(true) connection error com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure Any idea why is this happening?
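
    A few sanity checks on the Java side (file names and the changeit passwords are placeholders): confirm what actually ended up in the keystore, and point the JVM at explicit key and trust stores so the client both offers its certificate and trusts the server's:

        keytool -list -v -keystore keystore.jks
        keytool -import -alias mysql-ca -file ca-cert.pem -keystore truststore.jks
        java -Djavax.net.ssl.keyStore=keystore.jks -Djavax.net.ssl.keyStorePassword=changeit \
             -Djavax.net.ssl.trustStore=truststore.jks -Djavax.net.ssl.trustStorePassword=changeit \
             -jar myapp.jar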

    Read the article

  • Problem with .NET XML ImportNode in PowerShell

    - by Trondh
    Hi, Im trying to construct a powershell script that uses some XML. I have a XML document where I try to add some values with email addresses. The finished xml document should have this format: (I'm only showing the relevant part of the xml here) <emailAddresses> <value>[email protected]</value> <value>[email protected]</value> <value>[email protected]</value> </emailAddresses> SO, in powershell I try to do this as a test, which fails: $newNumber = [xml] '<value>555-1215</value>' $newNode = $Request2.ImportNode($newNumber.value, $true) $emailnode.AppendChild($newNode) After some reading, I have figured out that if I do this, it suceeds: $newNumber = [xml] '<value name="flubber">555-1215</value>' $newNode = $Request2.ImportNode($newNumber.value, $true) $emailnode.AppendChild($newNode) So, I am stuck. I'm starting to wonder if I should use another function instead of importnode when I have several keys with the same name but different values. As you guys probably have figured out by now, i'm not an expert in xml. ANy help appreciated!

    Read the article

< Previous Page | 247 248 249 250 251 252 253 254 255 256 257 258  | Next Page >