Search Results

Search found 41497 results on 1660 pages for 'fault'.


  • Reconfigure RAID on Dell PowerEdge T710

    - by Stefano Borini
    I have a Dell PowerEdge T710 under my feet at this very moment, running Red Hat Enterprise Linux Server 5.3. I have six 1 TB disks and two 500 GB disks. parted reports two devices, one 500 GB and the other 4 TB. So I assume the two 500 GB disks have been set up as a mirror, and the six remaining ones as RAID 5. I say "I assume" because the numbers don't add up: with six 1 TB disks in RAID 5, I should get a total of 5 TB of usable space, not 4 TB. It's not RAID 10 either: that would leave me with a 3 TB unit. How can I check and, if necessary, modify the RAID array definition? On the Fujitsu Siemens box I played with some time ago, I had the chance to enter the controller BIOS at boot, but here I don't see a clear way to perform this operation.
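
    If Dell OpenManage Server Administrator happens to be installed on the RHEL system (an assumption; it is not part of a default install), the controller's own view of the virtual disks can be inspected without rebooting. A minimal sketch:

      # List the RAID controllers OpenManage can see (assumes the srvadmin services are running)
      omreport storage controller

      # Show how the virtual disks on controller 0 are laid out: RAID level, size, member disks
      omreport storage vdisk controller=0

    Failing that, PERC controllers normally offer their BIOS configuration utility via Ctrl+R during POST.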

    Read the article

  • Creating static NAT blocks outbound traffic Cisco ASA

    - by natediggs
    Hi Everyone, I have two web servers sitting behind a Cisco ASA 5505, which I don't have much experience with. I'm trying to create two static NATs: one that goes to xx.xx.xx.150 and another that goes to xx.xx.xx.151. I've created the static NAT for the .150 web server and it works FINE: incoming and outgoing traffic both work great. This is the staging web server. I now need to duplicate the setup for the production web server. So, I connect the web server to the firewall, change the public IP address on one of the NICs, reboot the server, and I have outbound internet access. Then I run the command: static (inside,outside) xx.xx.xx.150 192.168.1.x, which is successful. I then run the command: access-list acl-outside permit tcp any host xx.xx.xx.150 eq 80, which is also successful. I then try to browse the internet and I get nothing. I try to telnet in through port 80 and I get nothing (though I'm guessing the response to the telnet request is being blocked). I've tried this with the production web server and then with another web server that is for internal testing, and I have the exact same problem: both work fine until I add the static NAT rule, and then there is no outbound internet access. I have a feeling it's something simple that I'm missing, but my limited experience with this device is killing me. Below I've pasted the current configuration. I'm currently trying to get this to work on the .153 server, which is the internal testing server. Once I can verify that works, I'll try it with production.

      : Saved
      :
      ASA Version 8.2(4)
      !
      hostname QG
      domain-name XX.com
      enable password passwd
      names
      !
      interface Ethernet0/0
       switchport access vlan 2
      !
      interface Ethernet0/1
      !
      interface Ethernet0/2
      !
      interface Ethernet0/3
      !
      interface Ethernet0/4
      !
      interface Ethernet0/5
      !
      interface Ethernet0/6
      !
      interface Ethernet0/7
      !
      interface Vlan1
       nameif inside
       security-level 100
       ip address 192.168.1.1 255.255.255.0
      !
      interface Vlan2
       nameif outside
       security-level 0
       ip address XX.XX.XX.148 255.255.255.0
      !
      interface Vlan3
       shutdown
       no forward interface Vlan1
       nameif dmz
       security-level 50
       ip address dhcp
      !
      boot system disk0:/asa824.bin
      ftp mode passive
      clock timezone EST -5
      clock summer-time EDT recurring
      dns server-group DefaultDNS
       domain-name fw.XXgroup.com
      same-security-traffic permit inter-interface
      access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq www
      access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq https
      access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq www
      access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq https
      access-list acl-outside extended permit tcp any host XX.XX.XX.153 eq www
      access-list inside_access_in extended permit ip 192.168.1.0 255.255.255.0 any
      access-list inside_nat0_outbound extended permit ip any 192.168.1.32 255.255.255.240
      pager lines 24
      logging enable
      logging asdm informational
      mtu inside 1500
      mtu outside 1500
      mtu dmz 1500
      ip local pool VPNIPs 192.168.1.35-192.168.1.44 mask 255.255.255.0
      icmp unreachable rate-limit 1 burst-size 1
      asdm image disk0:/asdm-635.bin
      no asdm history enable
      arp timeout 14400
      global (outside) 1 interface
      nat (inside) 0 access-list inside_nat0_outbound
      nat (inside) 1 0.0.0.0 0.0.0.0
      static (inside,outside) XX.XX.XX.150 192.168.1.100 netmask 255.255.255.255
      static (inside,outside) XX.XX.XX.153 192.168.1.102 netmask 255.255.255.255
      access-group acl-outside in interface outside
      route outside 0.0.0.0 0.0.0.0 XX.XX.XX.129 1
      timeout xlate 3:00:00
      timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
      timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
      timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
      timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
      timeout tcp-proxy-reassembly 0:01:00
      dynamic-access-policy-record DfltAccessPolicy
      aaa authorization command LOCAL
      http server enable
      http 192.168.1.0 255.255.255.0 inside
      http 0.0.0.0 0.0.0.0 outside
      no snmp-server location
      no snmp-server contact
      snmp-server enable traps snmp authentication linkup linkdown coldstart
      crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac
      crypto ipsec security-association lifetime seconds 28800
      crypto ipsec security-association lifetime kilobytes 4608000
      crypto dynamic-map outside_dyn_map 20 set pfs group1
      crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA
      crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map
      crypto map outside_map interface outside
      crypto isakmp enable outside
      crypto isakmp policy 10
       authentication crack
       encryption 3des
       hash sha
       group 2
       lifetime 86400
      no crypto isakmp nat-traversal
      client-update enable
      telnet timeout 5
      ssh timeout 5
      console timeout 0
      dhcpd auto_config outside
      !
      dhcpd address 192.168.1.2-192.168.1.33 inside
      dhcpd dns 208.77.88.4 interface inside
      dhcpd enable inside
      !
      threat-detection basic-threat
      threat-detection statistics access-list
      no threat-detection statistics tcp-intercept
      webvpn
       enable outside
       svc image disk0:/sslclient-win-1.1.0.154.pkg 1
       svc image disk0:/anyconnect-win-2.5.2019-k9.pkg 2
       svc enable
      group-policy ATSAdmin internal
      group-policy ATSAdmin attributes
       dns-server value 208.77.88.4 208.85.174.9
       vpn-tunnel-protocol IPSec svc webvpn
       webvpn
        url-list none
        svc keep-installer installed
        svc rekey method ssl
        svc ask enable
      username qgadmin password /oHfeGQ/R.bd3KPR encrypted privilege 15
      username benl password 0HNIGQNI0uruJvhW encrypted privilege 0
      username benl attributes
       vpn-group-policy ATSAdmin
      username kuzma password rH7MM7laoynyvf9U encrypted privilege 0
      username kuzma attributes
       vpn-group-policy ATSAdmin
      username nate password BXHOURyT37e4O5mt encrypted privilege 0
      username nate attributes
       vpn-group-policy ATSAdmin
      tunnel-group ATSAdmin type remote-access
      tunnel-group ATSAdmin general-attributes
       address-pool VPNIPs
       default-group-policy ATSAdmin
      tunnel-group SSLVPN type remote-access
      tunnel-group SSLVPN general-attributes
       address-pool VPNIPs
       default-group-policy ATSAdmin
      !
      class-map inspection_default
       match default-inspection-traffic
      !
      !
      policy-map type inspect dns preset_dns_map
       parameters
        message-length maximum 512
      policy-map global_policy
       class inspection_default
        inspect dns preset_dns_map
        inspect ftp
        inspect h323 h225
        inspect h323 ras
        inspect rsh
        inspect rtsp
        inspect esmtp
        inspect sqlnet
        inspect skinny
        inspect sunrpc
        inspect xdmcp
        inspect sip
        inspect netbios
        inspect tftp
        inspect ip-options
      !
      service-policy global_policy global
      privilege cmd level 3 mode exec command perfmon
      privilege cmd level 3 mode exec command ping
      privilege cmd level 3 mode exec command who
      privilege cmd level 3 mode exec command logging
      privilege cmd level 3 mode exec command failover
      privilege show level 5 mode exec command running-config
      privilege show level 3 mode exec command reload
      privilege show level 3 mode exec command mode
      privilege show level 3 mode exec command firewall
      privilege show level 3 mode exec command interface
      privilege show level 3 mode exec command clock
      privilege show level 3 mode exec command dns-hosts
      privilege show level 3 mode exec command access-list
      privilege show level 3 mode exec command logging
      privilege show level 3 mode exec command ip
      privilege show level 3 mode exec command failover
      privilege show level 3 mode exec command asdm
      privilege show level 3 mode exec command arp
      privilege show level 3 mode exec command route
      privilege show level 3 mode exec command ospf
      privilege show level 3 mode exec command aaa-server
      privilege show level 3 mode exec command aaa
      privilege show level 3 mode exec command crypto
      privilege show level 3 mode exec command vpn-sessiondb
      privilege show level 3 mode exec command ssh
      privilege show level 3 mode exec command dhcpd
      privilege show level 3 mode exec command vpn
      privilege show level 3 mode exec command blocks
      privilege show level 3 mode exec command uauth
      privilege show level 3 mode configure command interface
      privilege show level 3 mode configure command clock
      privilege show level 3 mode configure command access-list
      privilege show level 3 mode configure command logging
      privilege show level 3 mode configure command ip
      privilege show level 3 mode configure command failover
      privilege show level 5 mode configure command asdm
      privilege show level 3 mode configure command arp
      privilege show level 3 mode configure command route
      privilege show level 3 mode configure command aaa-server
      privilege show level 3 mode configure command aaa
      privilege show level 3 mode configure command crypto
      privilege show level 3 mode configure command ssh
      privilege show level 3 mode configure command dhcpd
      privilege show level 5 mode configure command privilege
      privilege clear level 3 mode exec command dns-hosts
      privilege clear level 3 mode exec command logging
      privilege clear level 3 mode exec command arp
      privilege clear level 3 mode exec command aaa-server
      privilege clear level 3 mode exec command crypto
      privilege cmd level 3 mode configure command failover
      privilege clear level 3 mode configure command logging
      privilege clear level 3 mode configure command arp
      privilege clear level 3 mode configure command crypto
      privilege clear level 3 mode configure command aaa-server
      prompt hostname context
      call-home
       profile CiscoTAC-1
        no active
        destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService
        destination address email [email protected]
        destination transport-method http
        subscribe-to-alert-group diagnostic
        subscribe-to-alert-group environment
        subscribe-to-alert-group inventory periodic monthly
        subscribe-to-alert-group configuration periodic monthly
        subscribe-to-alert-group telemetry periodic daily
      Cryptochecksum:0ed0580e151af288d865f4f3603d792a
      : end
      asdm image disk0:/asdm-635.bin
      no asdm history enable
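
    One thing worth checking in a situation like this (an educated guess, not a confirmed diagnosis): on ASA 8.2, translations already cached for a host can linger after a new static is added, and the static does not take effect for that host until the old entries are cleared. A minimal sketch of the sequence, using the .153 addresses from the config above:

      static (inside,outside) XX.XX.XX.153 192.168.1.102 netmask 255.255.255.255
      access-list acl-outside extended permit tcp any host XX.XX.XX.153 eq www
      ! drop any cached translations so the new static applies to outbound traffic too
      clear xlate local 192.168.1.102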

    Read the article

  • ASP.NET AJAX, WebSeal Junctions, and Sessions

    - by powella
    I've run up against a problem with ASP.NET AJAX (hooked up to web services directly) and accessing our site through a WebSeal junction. Listing 11 on this page, http://www.ibm.com/developerworks/tivoli/library/t-ajaxtam/index.html, explains that requests for pages which do not result in a content type of text/html are not sent with cookie data. Hence, no session. ASP.NET AJAX requests are returned with a content type of "application/json; charset=utf-8". As such, the WebSeal junction is not appending the session cookie to the request. This results in our web service seeing the user as invalid, due to the missing session information. The junction has been set up properly with the -J parameter (that's an uppercase J, which appends the script WebSeal requires to the bottom of the page; this prevents forcing IE into quirks mode), and we've confirmed that the necessary script exists in the output source. I'm open to any suggestions at this point, as I'm out of ideas. FWIW, the site runs perfectly when not accessed through the WebSeal junction.
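
    One way to see exactly which content type is in play on a given call (a sketch; the endpoint path is hypothetical and should be replaced with a real .asmx or .svc method):

      # Inspect the response headers for a single AJAX call made through the junction
      curl -is https://webseal.example.com/junction/MyService.asmx/MyMethod \
           -H "Content-Type: application/json; charset=utf-8" \
           -d "{}" | grep -i "^Content-Type"

    If the same call made with a text/html response (or made directly against the backend) keeps its session, that confirms the content-type theory from the IBM article.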

    Read the article

  • ssh tunnel error "ssh_exchange_identification: Connection closed by remote host"

    - by Jacob Ewing
    I'm trying to use an SSH tunnel from my office machine to my home machine, and I get an error when I try to use it. What I'm doing is starting one shell like so:

      ssh -gL 12345:my.home.domain:22 my.home.domain

    This gives me a proper shell, no problem. What I normally do then is ssh to my home machine through this office machine, like so:

      ssh -p 12345 127.0.0.1

    This has always worked for me, until last week, when I set up a new system on my home machine (switching from Ubuntu to Debian). Now I get an error. I can still open up my initial ssh connection, but when I try to use that tunnel, I get (on the office machine) this error:

      ssh_exchange_identification: Connection closed by remote host

    Also, when that happens, the open shell that I have the tunnelling set up through gets this line spat out at it:

      channel 3: open failed: connect failed: Connection timed out

    At which point, I'm at a loss. If any more info is needed, I'll be happy to post it.

    Further to that: after fiddling around, I've found that I'm getting a different response from the server (my home machine, that is) when I try to telnet in on the various ports. If I try:

      telnet my.home.domain 22

    I get this back:

      Trying <my ip address>...
      Connected to <my domain>.
      Escape character is '^]'.
      SSH-2.0-OpenSSH_5.5p1 Debian-6+squeeze2

    Which is what I would expect. After setting up the tunnel, though, and then telnetting to that, I see this response:

      Trying 127.0.0.1...
      Connected to 127.0.0.1.
      Escape character is '^]'.

    And further still: as per kbulgrien's suggestion, here is the output from the client machine with the -v option:

      ssh -vp 24600 127.0.0.1
      OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: /etc/ssh/ssh_config line 19: Applying options for *
      debug1: Connecting to 127.0.0.1 [127.0.0.1] port 24600.
      debug1: Connection established.
      debug1: identity file /home/jacob/.ssh/id_rsa type -1
      debug1: identity file /home/jacob/.ssh/id_rsa-cert type -1
      debug1: identity file /home/jacob/.ssh/id_dsa type -1
      debug1: identity file /home/jacob/.ssh/id_dsa-cert type -1
      debug1: identity file /home/jacob/.ssh/id_ecdsa type -1
      debug1: identity file /home/jacob/.ssh/id_ecdsa-cert type -1
      ssh_exchange_identification: Connection closed by remote host
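
    One detail in the output is worth following up (an observation, not a certain diagnosis): the "channel 3: open failed: connect failed: Connection timed out" line comes from the remote end of the forward, meaning the home machine itself could not connect to my.home.domain:22. That points at the new Debian install being unable to reach its own public name, e.g. a hairpin-NAT or /etc/hosts difference from the old Ubuntu setup. Two quick tests:

      # Run in a shell on the home machine: can it reach its own public address?
      telnet my.home.domain 22

      # If not, make the forward target the loopback interface instead of the public name
      ssh -gL 12345:localhost:22 my.home.domain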

    Read the article

  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details:
    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directory on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can back up and restore the test DB.
    - I have tried both restoring a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full-text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in Amazon EC2, but not Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in SQL Management Studio - it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.)
    Any ideas?
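
    One check that may help tell a true hang from slow post-restore work (a sketch; run from a command prompt on the server): restoring a 2005 backup onto 2008 runs an internal upgrade step after the data copy, and full-text catalogs in particular are rebuilt or imported during it, which can continue long after the restore reports 100%. This DMV query shows whether the restore session is still alive and what it is waiting on:

      sqlcmd -S localhost -E -Q "SELECT session_id, command, percent_complete, wait_type FROM sys.dm_exec_requests WHERE command LIKE 'RESTORE%'"

    If the row disappears while the database stays unavailable, the SQL Server error log usually shows whether the upgrade/full-text step is still running.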

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server, WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server, WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:
    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers.

    On the other hand, a benefit of the shared-content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale.

    I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard
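
    For what it's worth, the local-copy approach can reuse the robocopy job already in place, just run on a schedule against the live content root; a minimal sketch (paths and share names are assumptions):

      robocopy D:\Content \\WEB2\D$\Content /MIR /COPY:DATS /R:2 /W:5 /LOG:C:\Logs\content-sync.log

    /MIR keeps WEB2 an exact mirror, including deletions, so a bad deploy replicates too - worth pairing with whatever deployment checks already exist.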

    Read the article

  • JBOD with PERC H810

    - by primero
    I'm wondering if anybody has ever used Dell storage products like the MD3220 array in a JBOD configuration. From what I can tell, only the PERC H810 will work for external JBOD, but that is not terribly specific, and for some reason I couldn't find many examples on the web of people configuring Dell storage products as JBOD. My question is: is it possible to connect to an MD3220 array, or other Dell arrays, using a PERC H810 controller and use it as JBOD, and if so, do I have to configure every disk in the array as a RAID 0 volume?

    Read the article

  • Filesystems for webserver with SATA and Solid State disk

    - by Jorisslob
    We have just ordered a new webserver with a 120 GB solid state disk and a SATA disk. I am trying to plan ahead what sort of filesystem to use. This system will be running Linux and Apache/Tomcat to host Java services. The main service is a system where people can upload reasonably large files (on the order of 100 MB: images, image stacks and video), which people will be able to annotate and which will be sent to a database server when annotation is complete. Thus far, I plan to put most of the utility programs of the operating system on the SSD and put the large media files there too. The SATA disk will hold the less volatile data like Apache, Tomcat and the servlets. For filesystems I have considered going for the stable ext3 because I hear that it is best supported. The downside seems to be that it is not the ideal choice for large files. That is why I am leaning towards using XFS for the SSD and ext3 for the SATA disk. My questions are: 1) Does this sound like a reasonable setup? 2) What filesystems would you recommend for the SSD and for the SATA disk? Thanks
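
    If the XFS-on-SSD / ext3-on-SATA split is the final choice, creating the filesystems is straightforward; a sketch, assuming the SSD is /dev/sda and the SATA disk /dev/sdb (device names and partition layout will differ):

      # XFS for the large-file workload on the SSD partition
      mkfs.xfs /dev/sda2

      # ext3 for the application partitions on the SATA disk
      mkfs.ext3 /dev/sdb1

      # mounting with noatime cuts unnecessary writes on the SSD
      mount -o noatime /dev/sda2 /srv/media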

    Read the article

  • conversion of a VMDK image with qemu-img failed with "error while reading sector 131072: Invalid argument"

    - by Erik Sjölund
    I tried to convert a VMDK image found in an OVA file to the QCOW2 format with the qemu-img command, but it failed with the error message: qemu-img: error while reading sector 131072: Invalid argument

      user@ubuntu:/tmp$ wget ftp://ftp.sanger.ac.uk/pub/databases/Pfam/vm/PfamWeb_20120124.ova
      user@ubuntu:/tmp$ tar xfv PfamWeb_20120124.ova
      PfamWeb_20120124_2.ovf
      PfamWeb_20120124_2.mf
      PfamWeb_20120124_2-disk1.vmdk
      user@ubuntu:/tmp$ qemu-img convert -O qcow2 PfamWeb_20120124_2-disk1.vmdk PfamWeb_20120124_2.qcow2
      qemu-img: error while reading sector 131072: Invalid argument
      user@ubuntu:/tmp$ qemu-img --version | grep "qemu-img version"
      qemu-img version 1.0, Copyright (c) 2004-2008 Fabrice Bellard
      user@ubuntu:/tmp$ dpkg-query -f='${Version}\n' --show qemu-utils
      1.0+noroms-0ubuntu14.1
      user@ubuntu:/tmp$ cat /etc/issue
      Ubuntu 12.04.1 LTS \n \l

    How do I avoid the error?
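
    Older qemu-img releases are known to choke on the streamOptimized VMDK subformat that OVA packages typically contain (worth verifying for this particular image). One workaround, assuming VirtualBox is available, is to rewrite the disk as raw first and let qemu-img work from that; a sketch:

      # Rewrite the streamOptimized VMDK as a raw image
      VBoxManage clonehd PfamWeb_20120124_2-disk1.vmdk disk1.raw --format RAW

      # Then produce the qcow2 from the raw image
      qemu-img convert -O qcow2 disk1.raw PfamWeb_20120124_2.qcow2

    A newer qemu-img (1.1 or later) may also read the VMDK directly.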

    Read the article

  • Crashing HVM domain, what do I do?

    - by rassie
    My DomUs on a Xen 3.4 on RHEL5 are crashing when too much memory is needed:

      (XEN) p2m_pod_demand_populate: Out of populate-on-demand memory!
      (XEN) domain_crash called from p2m.c:1091
      (XEN) Domain 15 (vcpu#3) crashed on cpu#2:
      (XEN) ----[ Xen-3.4.0 x86_64 debug=n Not tainted ]----
      (XEN) CPU: 2
      (XEN) RIP: 0010:[<ffffffff80062c02>]
      (XEN) RFLAGS: 0000000000010216 CONTEXT: hvm guest
      (XEN) rax: 0000000000000000 rbx: 0000000000000001 rcx: 000000000000003f
      (XEN) rdx: 0000000004812000 rsi: ffff810001000000 rdi: ffff810004812000
      (XEN) rbp: 0000000000000282 rsp: ffff810007635cf0 r8: ffff810037c0288e
      (XEN) r9: 00000000000023e1 r10: 0000000000000000 r11: 0000000000000001
      (XEN) r12: ffff81000000cb00 r13: ffff8100007e43f0 r14: ffff81000000fc10
      (XEN) r15: 00000000000280d2 cr0: 0000000080050033 cr4: 00000000000006e0
      (XEN) cr3: 0000000006760000 cr2: 0000000003d47078
      (XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: 0000 cs: 0010

    Can I disable populate-on-demand for HVM somehow? Xen 3.3 didn't exhibit such behaviour...
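
    Populate-on-demand is only engaged when a guest boots with less memory than its maximum, so one workaround (an assumption worth testing, not a confirmed fix for this crash) is to boot the HVM guests fully populated by keeping the two values equal in the domU config file:

      # Boot allocation equal to the maximum: the hypervisor never needs the PoD pool
      memory = 4096
      maxmem = 4096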

    Read the article

  • Remove Dell Openmanage from Windows 2008

    - by Erwin Blonk
    I have installed Dell OpenManage Server Agent 4.2.2 on Windows Server 2008. I need the newer version, so I need to uninstall this version first. However, outside some registry references pointing to sources that aren't there, there is little or no trace of it being installed: for example, no trace of the program files, and no entry in Programs and Features. Still, installing a newer version keeps coming up with an older version that needs to be removed first. When I try to install version 4.2.2 to repair and eventually remove it, it gives an error:

      Dell Openmanage Server Agent - Error
      An error was encountered while testing machine type. Failure opening required handle to .DLL. Dell Openmanage Server Agent cannot continue the installation. Setup will exit now.

    I haven't found anything using different parts of the error messages as search terms.
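
    When an installer leaves only registry traces behind, the MSI product code can often still be used to force the uninstall; a sketch (the GUID shown is a placeholder - look up the real one under the Uninstall key):

      :: Find product entries mentioning OpenManage
      reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall /s /f "OpenManage"

      :: Then remove by product code (placeholder GUID - substitute the value found above)
      msiexec /x {XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX} /qn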

    Read the article

  • apache renew ssl not working [on hold]

    - by Varun S
    I downloaded a new SSL cert from GoDaddy and installed it on an Apache2 server: the cert went into the /etc/ssl/certs/ folder, the gd_bundle.crt into the /etc/ssl/ folder, and the private key is in /etc/ssl/private/private.key. I just replaced the original files with the new files; I did not replace the private key. I restarted the server, but the website is still showing the old certificate date. What am I doing wrong and how do I resolve it? My httpd.conf file is empty; the certificate config is in the sites-enabled/default-ssl file. The server is Apache2 running on Ubuntu 14.04.

      SSLEngine on
      # A self-signed (snakeoil) certificate can be created by installing
      # the ssl-cert package. See
      # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
      # If both key and certificate are stored in the same file, only the
      # SSLCertificateFile directive is needed.
      SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
      SSLCertificateKeyFile /etc/ssl/private/private.key
      # Server Certificate Chain:
      # Point SSLCertificateChainFile at a file containing the
      # concatenation of PEM encoded CA certificates which form the
      # certificate chain for the server certificate. Alternatively
      # the referenced file can be the same as SSLCertificateFile
      # when the CA certificates are directly appended to the server
      # certificate for convinience.
      SSLCertificateChainFile /etc/ssl/gd_bundle.crt

      -rwxr-xr-x 1 root root 1944 Aug 16 06:34 /etc/ssl/certs/2b1f6d308c2f9b.crt
      -rwxr-xr-x 1 root root 3197 Aug 16 06:10 /etc/ssl/gd_bundle.crt
      -rw-r--r-- 1 root root 1679 Oct 3 2013 /etc/ssl/private/private.key

      /etc/apache2/sites-available/default-ssl: # SSLCertificateFile directive is needed.
      /etc/apache2/sites-available/default-ssl: SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
      /etc/apache2/sites-available/default-ssl: SSLCertificateKeyFile /etc/ssl/private/private.key
      /etc/apache2/sites-available/default-ssl: # Point SSLCertificateChainFile at a file containing the
      /etc/apache2/sites-available/default-ssl: # the referenced file can be the same as SSLCertificateFile
      /etc/apache2/sites-available/default-ssl: SSLCertificateChainFile /etc/ssl/gd_bundle.crt
      /etc/apache2/sites-enabled/default-ssl: # SSLCertificateFile directive is needed.
      /etc/apache2/sites-enabled/default-ssl: SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
      /etc/apache2/sites-enabled/default-ssl: SSLCertificateKeyFile /etc/ssl/private/private.key
      /etc/apache2/sites-enabled/default-ssl: # Point SSLCertificateChainFile at a file containing the
      /etc/apache2/sites-enabled/default-ssl: # the referenced file can be the same as SSLCertificateFile
      /etc/apache2/sites-enabled/default-ssl: SSLCertificateChainFile /etc/ssl/gd_bundle.crt
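
    Two quick checks usually settle whether the file on disk is wrong or the old certificate is still being served (a sketch; adjust paths and hostname):

      # What does the file on disk actually contain?
      openssl x509 -in /etc/ssl/certs/2b1f6d308c2f9b.crt -noout -subject -dates

      # What is Apache actually serving? Run after a full stop/start, not just a reload
      echo | openssl s_client -connect localhost:443 2>/dev/null | openssl x509 -noout -subject -dates

    If the two disagree, another vhost or certificate file is being picked up; if the file itself shows the old dates, the new cert never landed at that path.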

    Read the article

  • tracking 301 redirects with awstats

    - by ceejayoz
    I'd like to use awstats to track usage of a URL shortener I'm using. Unfortunately, awstats treats log records with a non-200 status as an error and excludes them from statistics. Is there a way to get awstats to treat 301s as 200s for stats tracking?
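
    awstats decides which HTTP status codes count as successful hits via the ValidHTTPcodes directive in its per-site config; adding 301 there should make the redirects show up as normal traffic (worth confirming the directive name against your awstats version). A sketch:

      # In awstats.yoursite.conf - treat permanent redirects as valid hits
      ValidHTTPcodes="200 304 301"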

    Read the article

  • Office365 DirSync Active Directory Integration

    - by dean
    I am preparing to deploy Office365 for my organization. We have an on-premises Active Directory domain controller (Windows Server 2012 R2). We would like to leverage our Active Directory for automatic user provisioning in Office365 and for password synchronization, using the DirSync tool. Our Active Directory domain is example.pvt. Email is currently Rackspace Exchange, and email addresses follow the form [email protected]. The Active Directory user logon name follows the form firstinitiallastname. My questions are:
    1) Which Active Directory attribute(s) can be used to provision the email address in Office365?
    2) Is it possible to use the E-mail field in Active Directory to provision the email address in Office365?
    3) Will the fact that our Active Directory domain has a different extension (.pvt vs. .com) cause a problem with our planned provisioning method?
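
    For background on how the provisioning usually works: Office365 takes the sign-in address from the userPrincipalName attribute, so a common preparation step is to register the public domain as an alternative UPN suffix and point users' UPNs at it, so the sign-in name matches the email address. A hedged PowerShell sketch (the OU is an assumption; verify attribute behaviour against current DirSync documentation):

      # Register example.com as an additional UPN suffix for the forest
      Set-ADForest -Identity example.pvt -UPNSuffixes @{Add="example.com"}

      # Point each user's UPN at the public domain so it matches the email address
      Get-ADUser -Filter * -SearchBase "OU=Staff,DC=example,DC=pvt" |
          ForEach-Object { Set-ADUser $_ -UserPrincipalName ("{0}@example.com" -f $_.SamAccountName) }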

    Read the article

  • Remote Desktop Connection - Connection Failed

    - by NLV
    Let me explain the problem. My system is connected to a network and 'was' running XP. Recently I formatted the system, installed Windows Server 2003, and added the machine to the network. Everything is working fine - mapping the network drives, pinging the machines, etc. But I have the following problems:
    1) I'm not able to make a remote desktop connection to another system in the network.
    2) Some systems in the network are able to remote desktop to my machine, but not all.
    3) If I host any web service on my system, I'm not able to connect to it from any other machine in the network.
    I've already configured Remote Desktop to accept connections. Any ideas? NLV
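
    Since the machine was just rebuilt, the built-in Windows Firewall is a reasonable first suspect for the inbound symptoms (others reaching your RDP and web service); a sketch using the Server 2003-era netsh syntax, with the port an assumption:

      :: Show whether the built-in firewall is filtering anything
      netsh firewall show state

      :: Allow inbound HTTP to a hosted web service (adjust port to match the service)
      netsh firewall add portopening TCP 80 "HTTP"

    The outbound RDP failure is a different class of problem and worth testing separately, e.g. with telnet targethost 3389.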

    Read the article

  • How to create an NFS proxy by using kernel server & client?

    - by Martin C. Martin
    I have a file server that exports as NFS. On an Ubuntu machine I mount that, then try to export it as an NFS volume. When I go to export it, I get the message:

      exportfs: /test/nfs-mount-point does not support NFS export

    How can I get this to work, or at least get more information as to what the problem is? Exact steps, on Ubuntu 12.04:

      mount -t nfs myfileserver.com:/server-dir /test/nfs-mount-point
      [Works fine, I can read & write files]

    /etc/exports contains:

      /test/nfs-mount-point *(rw,no_subtree_check)

    Then:

      sudo /etc/init.d/nfs-kernel-server restart
      Stopping NFS kernel daemon                       [ OK ]
      Unexporting directories for NFS kernel daemon... [ OK ]
      Exporting directories for NFS kernel daemon...
      exportfs: /test/nfs-mount-point does not support NFS export
                                                       [ OK ]
      Starting NFS kernel daemon                       [ OK ]
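
    The kernel NFS server refuses to export filesystems it cannot derive a filesystem ID for, and NFS mounts fall into that category; giving the export an explicit fsid sometimes gets past the check. Re-exporting NFS over NFS was only semi-supported in kernels of this era, so treat this as an experiment rather than a fix:

      # /etc/exports - assign a filesystem id manually so exportfs can identify the mount
      /test/nfs-mount-point *(rw,no_subtree_check,fsid=1)

    Then re-read the exports table with sudo exportfs -ra and check the output of exportfs -v.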

    Read the article

  • Shorten Emacs timeout of ~/.emacs read

    - by user35042
    My ~/.emacs start-up file is stored in my AFS home directory. Often when I log in to a Linux machine I will forget to renew my AFS credentials before attempting to edit a local (non-AFS) file with Emacs. When this happens Emacs will attempt to load ~/.emacs but cannot, because it is in AFS space where I do not have access. Eventually (after a minute or so) Emacs will give up trying to load ~/.emacs and continue. But waiting for Emacs to time out is annoying (typing Ctrl-Z does not seem to interrupt this timeout). I want to shorten the amount of time that Emacs waits before giving up. I have tried the suggestion at this site, which says to put the following code in the site-start.el file:

      (with-timeout (4) (load remote-.emacs))

    However, when I do that I get the error

      Error in init file: Symbol's value as variable is void: remote-\.emacs

    whenever starting Emacs. How can I shorten this timeout?
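
    The error itself comes from passing load a bare symbol instead of a string; quoting the filename fixes that part, and the timeout value can be shortened at the same time. A sketch for site-start.el (the 2-second value is arbitrary, and whether the timer can fire while Emacs is blocked inside an AFS read is worth testing):

      ;; Give up on a slow (e.g. unauthenticated AFS) home directory after 2 seconds
      (with-timeout (2 (message "Skipping ~/.emacs: home directory not reachable"))
        (load "~/.emacs"))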

    Read the article

  • Apache is not interpreting .PHP files

    - by Ala ABUDEEB
    I recently downloaded openSUSE 11.4 from the site to use it as a server. For that I downloaded the server edition, which has Apache/2.2.17 and PHP5 installed by default. OK, till now it is fine. Now I started Apache successfully and put a test.php file in the DocumentRoot directory. test.php contains only <?php phpinfo() ?>. Then using my browser I typed http://localhost/test.php, and here was the problem: the browser didn't display what phpinfo() should display; instead it asked me whether I want to open or save test.php, which is driving me crazy. I googled a lot but found no solution. This is /etc/apache2/conf.d/php5.conf:

      <IfModule mod_php5.c>
        AddHandler application/x-httpd-php .php4
        AddHandler application/x-httpd-php .php5
        AddHandler application/x-httpd-php .php
        AddHandler application/x-httpd-php-source .php4s
        AddHandler application/x-httpd-php-source .php5s
        AddHandler application/x-httpd-php-source .phps
        DirectoryIndex index.php4
        DirectoryIndex index.php5
        DirectoryIndex index.php
      </IfModule>
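
    A conf.d file wrapped in <IfModule mod_php5.c> does nothing if the module itself is never loaded, which matches the open-or-save behaviour exactly. On openSUSE the module list lives in /etc/sysconfig/apache2, and a2enmod edits it; a sketch:

      # Is mod_php actually loaded?
      apache2ctl -M | grep -i php

      # If not, enable it (adds php5 to APACHE_MODULES in /etc/sysconfig/apache2)
      a2enmod php5
      rcapache2 restart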

    Read the article

  • Problem Activating “Windows Communication Foundation Non-HTTP Activation” feature in Windows 7

    - by Escobar5
    I'm having the following problem. I'm installing SharePoint 2010 Beta, so I need to activate the Windows feature "Windows Communication Foundation Non-HTTP Activation". The problem is I cannot activate it; I get the message "An error has occurred. Not all features were successfully changed". When I look at the log (C:\Windows\Logs\CBS\CBS.log) I find this error:

      Process output: [l:186 [186]"SMConfigInstaller[Error]: Failed in calling 'StartService' for service 'NetTcpActivator'. Error code: 0x8007042c

    Can anyone give me a clue about what is happening here?
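
    Error 0x8007042c is the generic "dependency service or group failed to start", and NetTcpActivator depends on the Net.Tcp Port Sharing Service, which is disabled by default. Checking and enabling it before retrying the feature install is a reasonable first step; a sketch:

      :: Inspect the activation service and its dependencies
      sc qc NetTcpActivator
      sc query NetTcpPortSharing

      :: Allow the port sharing service to start on demand, then start it
      sc config NetTcpPortSharing start= demand
      sc start NetTcpPortSharing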

    Read the article

  • Oops, no RSA or DSA server certificate found for 'server.host.name:0'?

    - by Scott Warren
    I'm setting up a new web server that hosts a dozen virtual hosts on Ubuntu 12.04 using Apache 2.2.22, with one config file per site. I created all the configuration files at once and ran a2ensite * to enable them all at once. When I reloaded the configuration it failed, and after restarting Apache I found the following error message in my error.log:

      Oops, no RSA or DSA server certificate found for 'server.host.name:0'?!

    Most of the results for this error message are years old, don't fix the problem, or concern bugs that have since been fixed: https://issues.apache.org/bugzilla/show_bug.cgi?id=31709
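
    The ':0' port in the message usually points at a vhost Apache could not match to a proper address:port - for example a <VirtualHost> that ends up with SSLEngine on but no certificate directives, or a NameVirtualHost/port mismatch introduced by enabling everything at once. Listing the parsed vhosts narrows it down (a sketch; paths are assumptions):

      # Show how Apache actually parsed the vhosts after a2ensite *
      apache2ctl -S

      # Every *:443 vhost needs its own certificate directives, e.g.:
      #   SSLEngine on
      #   SSLCertificateFile    /etc/ssl/certs/example.crt
      #   SSLCertificateKeyFile /etc/ssl/private/example.key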

    Read the article

  • How do I install an rpm that complains about rpmlib(FileDigests) <= 4.6.0-1?

    - by Jake
    I'm trying to install an rpm file on CentOS 5 and I'm not sure how to resolve the issues it brings up:

      $ rpm --install epel-release-6-5.noarch.rpm
      warning: epel-release-6-5.noarch.rpm: Header V3 RSA/SHA256 signature: NOKEY, key ID 0608b895
      error: Failed dependencies:
        rpmlib(FileDigests) <= 4.6.0-1 is needed by epel-release-6-5.noarch
        rpmlib(PayloadIsXz) <= 5.2-1 is needed by epel-release-6-5.noarch

    What do the lines rpmlib(FileDigests) <= 4.6.0-1 mean? Is rpmlib out of date, or FileDigests? What's with the syntax of something followed by parentheses? I've tried to use yum so that it can resolve dependencies automatically, but it is unable to:

      $ sudo yum --nogpgcheck install epel-release-6-5.noarch.rpm
      ...
      Running rpm_check_debug
      ERROR with rpm_check_debug vs depsolve:
      rpmlib(FileDigests) is needed by epel-release-6-5.noarch
      rpmlib(PayloadIsXz) is needed by epel-release-6-5.noarch
      Complete!
      (1, [u'Please report this error in https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%205&component=yum'])

    On this page https://bugzilla.redhat.com/show_bug.cgi?id=665073, they say my rpm is out of date and that I should request an rpm file that works with my version of rpm (which is 4.4.2.3), but I don't want to do that. How do I make my system compatible with this rpm file? Bonus points if you tell me how I can fix the public key error.
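
    For background: rpmlib(FileDigests) and rpmlib(PayloadIsXz) are capabilities of the rpm program itself, not packages - the file was built by a newer rpm (the one shipped with RHEL/CentOS 6) than the 4.4.x on CentOS 5 can unpack. The usual resolution is to install the EPEL release package built for the 5.x branch instead; a sketch (URLs were current at the time of writing, and the exact release number may have moved on):

      # epel-release-6-* targets RHEL/CentOS 6; the 5.x build works with rpm 4.4 on CentOS 5
      rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm

      # For the NOKEY warning, import the matching EPEL GPG key
      rpm --import https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-5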

    Read the article

  • Using wildcard SSL certs (chain certificate) with mod_gnutls

    - by QWade
    I have a wildcard SSL certificate from GoDaddy that has three files:

      wildcard.crt
      gd_bundle.crt
      wildcard.key

    In setting up mod_gnutls to be used with Apache, I can get the site to come up, but it throws a warning that the SSL certificate has not been validated by a CA. When I use mod_ssl, I can specify an SSLCertificateChainFile directive and point it at the gd_bundle.crt file. I do not see how to do this with mod_gnutls. Any help is appreciated. I also know that mod_ssl supports SNI, so if there is no easy answer, I will just try that. Thanks, QWade
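
    mod_gnutls has no separate chain-file directive; its GnuTLSCertificateFile is expected to contain the full chain, server certificate first, so concatenating the GoDaddy bundle onto the wildcard cert is the usual route. A sketch, with paths assumed:

      # Server cert first, then the intermediate bundle
      cat wildcard.crt gd_bundle.crt > /etc/ssl/wildcard-chained.crt

      # In the vhost:
      #   GnuTLSEnable on
      #   GnuTLSCertificateFile /etc/ssl/wildcard-chained.crt
      #   GnuTLSKeyFile         /etc/ssl/wildcard.key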

    Read the article
