Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • Issue with SSH on Ubuntu - Local connection OK, remote connection fails - Is it me or my ISP?

    - by Benjamin
    I have an issue with a server running Ubuntu 12.04. I am trying to set up remote access so I can reach the server at my work from out of town. I have installed the SSH server and all that stuff, and I have reassigned the default port from 22 to 3399. A local connection from any OS can connect on the 192.168... address, but I cannot get a connection on the actual public IP address in any way. I believe my configuration is correct, and I have attached it below; if I have done something wrong in the config, please tell me and I will change it. I honestly think that the router my ISP provided is horrible, and although the port for SSH is forwarded, it might be stopping any inbound traffic. Is there anything I can try to verify this? /var/log/auth.log does not show any error when I connect via our static IP. All values not commented out in sshd_config:

        Port 3399
        ListenAddress 0.0.0.0
        Protocol 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        UsePrivilegeSeparation yes
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        SyslogFacility AUTH
        LogLevel INFO
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes
        UseDNS no
        RSAAuthentication yes
        IgnoreRhosts yes
        RhostsRSAAuthentication no
        HostbasedAuthentication no
        PermitEmptyPasswords no
        ChallengeResponseAuthentication no
        PasswordAuthentication yes
        GSSAPIAuthentication no
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        UsePAM yes

    Am I doing this wrong? (port forwarding screenshot from the original post not included)
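
    One way to tell whether the router is actually forwarding the port is to watch for the traffic on the server while testing from outside the network. A rough sketch (the interface name and public IP below are placeholders, not values from the question):

        # on the server: show only inbound connection attempts on the forwarded port
        sudo tcpdump -ni eth0 'tcp port 3399 and tcp[tcpflags] & tcp-syn != 0'

        # from a host outside the network (e.g. a phone hotspot), try the public IP
        nc -vz -w 5 203.0.113.10 3399

        # confirm sshd really listens on 3399 on all addresses
        sudo ss -tlnp | grep 3399

    If no SYNs show up in tcpdump during an outside test, the router (or the ISP) is dropping the traffic before it ever reaches the server.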

    Read the article

  • Separate Certificate by Subdomain (With multiple IPs)

    - by Brian
    Note: Yes, I realize this problem is easier to solve by just using 1 multi-domain or wildcard certificate. I wish to have an ASP.NET site running on IIS with 2 SSL domains sharing 1 web application but using separate certificates. Assuming I have 2 certificates, this can be solved on IIS7 as follows: Web Application1: Binding 1: http, 80, IP Address *, Host Name * Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1) Binding 3: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2) That is to say, 2 certificates and 2 ip addresses, but both mapped to the same web application. In IIS6, the closest I have been able to come to this configuration is: Web Application1: Binding 1: http, 80, IPADDRESS1 Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1) Web Application2: Binding 1: http, 80, IPADDRESS2 Binding 2: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2) That is to say, 2 certificates and 2 IP addresses, 2 web applications, both mapped to the same file location. The IIS6 solution is not optimal. Even if sharing an application pool, there are still costs associated with running the same site as two applications. Is upgrading from IIS6 to IIS7 a legitimate way to resolve this problem? Is there an IIS6 way to map 2 IP addresses within the same web application to different certificates?

    Read the article

  • Setup shared internet connection on virtualbox with fixed IP

    - by Tom
    I am a web developer and until recently I have been using Ubuntu as my OS. For many reasons, I have switched back to Windows. I still want to keep my server on a Linux platform, so I set up my local server as a virtual machine. Everything works great, but I have a little struggle with the networking. Since I am working in different places and moving between clients, I connect to all sorts of networks with different settings. That means the possible IP range is very dynamic, which causes issues when I work on my local server. At the moment I have a dynamic IP on my host and a static IP on my guest. That way I can access the server from my host (by adding a record to the hosts file). I also have an internet connection on the guest. But once I change networks, it does not work (assuming the network has a different configuration). My question is: how do I set up host-guest networking so that no matter what network I connect to, I can keep the static IP on the guest, which is registered in the hosts file on my host, so I can access the webserver and also have an internet connection on the guest? Hope it makes sense. Thank you
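
    A setup that usually survives changing networks is to give the guest two adapters: NAT for internet access, and a host-only adapter with a static address that the host always carries. A sketch using VBoxManage (the VM and interface names are examples, not taken from the question):

        VBoxManage hostonlyif create                       # typically creates vboxnet0
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1
        VBoxManage modifyvm "dev-server" --nic1 nat
        VBoxManage modifyvm "dev-server" --nic2 hostonly --hostonlyadapter2 vboxnet0
        # inside the guest: give the second interface a static 192.168.56.x address,
        # then point the hosts-file entry on the host at that address

    The NAT adapter follows whatever network the host is on, while the host-only subnet never changes, so the hosts-file entry keeps working everywhere.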

    Read the article

  • Load balancing with nginx and Tomcat

    - by London
    Hello, this should be fairly easy to answer for any system admin. The problem is that I'm not a server admin, but I have to complete this task; I'm very close but still not managing to do it. Here is what I mean: I have two Tomcat instances running on machine1 and machine2. People usually access them by visiting these URLs:

        http://machine1:8080/appName
        http://machine2:9090/appName

    The problem is that when I set up nginx with the domain name, i.e. domain.com, nginx sends requests to http://machine1:8080/ and http://machine2:9090/ instead of http://machine1:8080/appName and http://machine2:9090/appName. Here is my configuration (very basic, as can be seen):

        upstream backend {
            server machine1:8080;
            server machine2:9090;
        }
        server {
            listen 80;
            server_name www.mydomain.com mydomain.com;
            location / {
                # needed to forward user's IP address to rails
                proxy_set_header X-Real-IP $remote_addr;
                # needed for HTTPS
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://backend;
            } #end location
        } #end server

    What changes must I make so that when a user visits mydomain.com, he is transferred to either machine1:8080/appName or machine2:9090/appName? Thank you
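
    One way to get the context path added (a sketch based on standard proxy_pass behaviour, not a tested drop-in for this site): when proxy_pass is given a URI part, nginx replaces the matched location prefix with it, so requests for / are forwarded as /appName/ to whichever backend is picked.

        # in the existing "location /" block, change the last directive from
        #     proxy_pass http://backend;
        # to one that carries the context path:
        #     proxy_pass http://backend/appName/;
        sudo nginx -t && sudo nginx -s reload    # validate, then reload

    A rewrite inside the location block (rewrite ^/(.*)$ /appName/$1 break;) achieves the same thing if you prefer to keep proxy_pass pointing at the bare upstream.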

    Read the article

  • What can an inexperienced admin expect after a server setup that completed seemingly fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is the very first time he is the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, ProFTPD, and MTA & MDA software, configured the VirtualHosts properly (facts, because he calls himself admin), done user management and various configuration settings with respect to security recommendations, and... everything is fine for now... For now. If you were directing a horror movie about the server admin described above, what boogeyman would you make up to show up and start pursuing him? Omitting hardware-disaster cases, about which one cannot do anything 'from remote', what are the most common causes of significant server (or part-of-server, or server-related) failure when the machine is managed by an inexperienced admin? I have in mind something that newbie admins very often miss and that leads to the later intervention of someone with experience. Might that be some uncontrolled CPU-eating leftover process, a memory-related glitch, a widely-used feature that messes up something unexpected, or anything like that? The newbie admin for now only monitors disk space, RAM usage, and the number of running processes. He would appreciate any tips regarding what's probably going to happen to his server over time.

    Read the article

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with one [CLC+CC] machine and two [NC] machines. I have another Cloud-B with the same configuration, using the Ubuntu Enterprise Cloud. Both of them work fine individually, on the same LAN. Now, if I want to add an NC of Cloud-A to the CC of Cloud-B (in case the resources of Cloud-B are exhausted), how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what exactly happens when we ask for an instance: does the direct interaction happen between the client and the NC, or does it go through the CLC and CC? What I want to say is: suppose there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which could provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, keeping the client fully unaware of what's going on in the background?

    Read the article

  • Backtrack, Wi-Fi not working

    - by hradecek
    I've installed Backtrack 5R3 KDE, and I realized that my wireless is not working, but wired is working fine. Here's the lshw output: *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 05 serial: 04:7d:7b:b7:46:f8 size: 100MB/s capacity: 100MB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8105e-1.fw ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100MB/s resources: irq:42 ioport:2000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff lspci output: 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:14.0 USB Controller: Intel Corporation Panther Point USB xHCI Host Controller (rev 04) 00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04) 00:1a.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) 00:1c.1 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 2 (rev c4) 00:1d.0 USB Controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA AHCI Controller (rev 04) 00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04) 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 05)
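
    The lshw output above only shows the wired Realtek NIC, so the first step is usually to find out whether the wireless chipset is detected at all and whether a driver or firmware is missing. A rough checklist, not specific to this laptop:

        lspci -nn | grep -iE 'network|wireless'   # internal PCI wireless chipset and its vendor:device IDs
        lsusb                                      # in case the radio hangs off USB
        iwconfig                                   # any wlanX interfaces created?
        rfkill list all                            # hardware/software kill switch state
        dmesg | grep -iE 'firmware|wlan|wifi'      # missing-firmware or driver errors

    The vendor:device ID from lspci -nn (or lsusb) is what identifies which driver and firmware package the chipset needs.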

    Read the article

  • Have a set of CGI scripts shared by multiple domains

    - by rpat
    Goal: have multiple domains share a set of CGI (Perl) scripts. Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel). I have dozens of domains on the dedicated server; the domains are set up by cPanel under the VirtualHost section. I have almost no knowledge of Apache - most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link to this common directory inside the individual cgi-bin. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to make changes to those manually; besides, I could mess things up. I see that cPanel recommends using include files rather than changing httpd.conf. Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory, and to allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are; I would be glad to use any other means to achieve my goal. Thanks in advance for your help. (I asked this on Stack Overflow and someone suggested that I ask it on Server Fault.)
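
    A sketch of the symlink approach described above (paths are examples, not cPanel defaults). The usual catches are that Apache must be allowed to follow the links, and that suexec, which cPanel uses, refuses to run scripts not owned by the account user:

        sudo mkdir -p /opt/shared-cgi
        sudo cp /home/someaccount/public_html/cgi-bin/*.pl /opt/shared-cgi/
        sudo chmod 755 /opt/shared-cgi/*.pl

        # per account: link the shared directory into that account's cgi-bin
        ln -s /opt/shared-cgi /home/account1/public_html/cgi-bin/shared

        # per cPanel's advice, put the Apache override in an include file rather than
        # httpd.conf, e.g. Options +FollowSymLinks (or +SymLinksIfOwnerMatch) for the
        # cgi-bin directories

    Ownership and suexec rules are the part most likely to bite, so testing with one low-traffic domain first is the safer route.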

    Read the article

  • FreeBSD jail IMAP/MTA config recommendations

    - by kobame
    I've got access to my "own" FreeBSD jail. The jail has only a basic, unconfigured system, but I have full access to FreeBSD ports, and (jail) root too. Now I need to set up my jail as an IMAP/MTA host. The question: which packages are EASIEST to configure and later administer (the simplest possible setup, with the minimum needed configuration), given that:
        I have no preferences yet (I don't know any of them)
        my (one) domain is managed by my ISP, so I don't need DNS
        I need only IMAP for a few users (up to 20 mailboxes)
        I need a secure transport layer (IMAPS/993) and password auth; no LDAP, no Kerberos, no databases, nothing fancy
        I need an easy-to-set-up, easy-to-admin MTA with the simplest possible password SMTP auth (again no LDAP, no DB) and a secure transport layer, but virus scanning and some anti-spam protection would be nice
    So, what ports should I install for the MTA and IMAP? MTA (Sendmail, Postfix, Exim)? Antivirus (ClamAV)? Antispam? IMAP(S) (Dovecot, Courier)? The main criteria are easy setup and easy administration. When I googled I found only complicated setups for thousands of users with LDAP, databases and so on - too big a caliber for my small (easy?) needs. Any pointer to an easy howto is very welcome.
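
    For "easiest to configure", most answers converge on Postfix plus Dovecot, both of which are in ports and have short, well-documented minimal configs. A sketch of the installation side only (package and rc.conf names are assumptions from the ports tree of that era and may differ on your release):

        pkg install postfix dovecot2 clamav        # or build from /usr/ports/mail/...
        sysrc postfix_enable=YES dovecot_enable=YES clamav_clamd_enable=YES
        # Dovecot: imaps on 993 with a passwd-file for ~20 users
        # Postfix: smtpd with SASL handed off to Dovecot, TLS on the submission port

    SpamAssassin or a milter can be bolted on later without touching the mailbox layout, so it does not have to be decided up front.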

    Read the article

  • Getting dwl-g122 to work on ubuntu

    - by User1
    I have a USB WiFi adapter, a D-Link DWL-G122. I'm running Ubuntu 10.04. My laptop has a built-in wireless card that connects fine to the router. I plug in the USB adapter and it never really connects. Here are some details:

        iwconfig
        wlan1  IEEE 802.11bg  ESSID:"\x0B\xE1..."
               Mode:Managed  Frequency:2.457 GHz  Access Point: Not-Associated
               Tx-Power=19 dBm  Retry long limit:7  RTS thr:off  Fragment thr:off
               Power Management:on

        lshw -c network
        *-network:1
               description: Wireless interface
               physical id: 3
               logical name: wlan1
               serial: 00:13:46:8b:xx:xx
               capabilities: ethernet physical wireless
               configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bg

        dmesg
        [ 1096.814176] wlan1: direct probe to AP xxx (try 1)
        [ 1096.820960] wlan1: direct probe responded
        [ 1096.820969] wlan1: authenticate with AP xxx (try 1)
        [ 1096.823790] wlan1: authenticated
        [ 1096.823869] wlan1: associate with AP xxx (try 1)
        [ 1096.827667] wlan1: RX AssocResp from xxx (capab=0x411 status=0 aid=1)
        [ 1096.827674] wlan1: associated
        [ 1142.590912] wlan1: deauthenticating from xxx by local choice (reason=3)

        lsmod | grep rt2
        rt2500usb   19643  0
        rt2x00usb   11260  1 rt2500usb
        rt2x00lib   32133  2 rt2500usb,rt2x00usb
        mac80211   238896  3 ath5k,rt2x00usb,rt2x00lib
        cfg80211   148725  4 ath5k,ath,rt2x00lib,mac80211
        led_class    3764  3 ath5k,rt2x00lib,sdhci

    It looks like the driver loads, but the adapter never stays connected. The behavior is identical even if I blacklist the other WiFi card (which uses the ath5k driver). It's almost like it is using the wrong password or something. Does anyone know what is happening? Has anyone gotten this adapter working on Ubuntu?
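
    The dmesg trace above shows the adapter associating and then deauthenticating "by local choice", which with rt2500usb devices is often either power management or the network manager giving up on the key exchange. A few things worth trying (a sketch; the interface name is assumed to stay wlan1):

        sudo iwconfig wlan1 power off                 # rule out power-save dropouts
        sudo rmmod rt2500usb rt2x00usb rt2x00lib && sudo modprobe rt2500usb

        # take NetworkManager out of the picture and drive the adapter directly:
        wpa_passphrase "MySSID" "my-passphrase" | sudo tee /tmp/wpa.conf >/dev/null
        sudo wpa_supplicant -i wlan1 -c /tmp/wpa.conf -D wext -d
        sudo dhclient wlan1

    If wpa_supplicant alone holds the association, the problem is in the connection manager rather than the driver.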

    Read the article

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3Com switch which is connected to the router that handles the internet connection. The main purpose of the server is to host a PHP application. What's happening is that users 1 to 15 on the private network have no problems connecting to the server, but when user 16 tries to connect, a timeout occurs and they are unable to connect. It's not just the PHP application; it is any service on the server. When the 15 users are using the application, the server doesn't even answer pings. I haven't set any special limits in the Apache or MySQL configuration, and the firewall is turned off because the server only serves the internal network. Is there a parameter in any of the network card's configuration files that might be causing this? Or should I suspect the router's or switch's configuration? UPDATE: Tomorrow I'm going to do some tests on the server, modifying two kernel parameters in /etc/sysctl.conf: net.core.somaxconn, which limits the backlog of pending connections on a listening socket, and kernel.shmmax, which sets the maximum size of a shared memory segment.
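
    The two sysctls mentioned in the update can be checked and raised like this (values below are illustrative). If exactly ~15 clients always work, though, a per-device limit on the switch or router - DHCP pool size, wireless client limit, connections-per-IP rules - is at least as likely a culprit as a kernel limit:

        sysctl net.core.somaxconn kernel.shmmax         # current values
        sudo sysctl -w net.core.somaxconn=1024          # change it live
        echo 'net.core.somaxconn = 1024' | sudo tee -a /etc/sysctl.conf
        sudo sysctl -p                                  # make it persistent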

    Read the article

  • Issues connecting to HP ProCurve switches

    - by BriGuy
    We are having a very strange issue trying to connect to our infrastructure switches via SSH. When you first try connecting to them, the switches will prompt for the password and then just sit there after it is entered. If you create a second SSH session to the switch (while letting the first one remain open, just sitting there), it will let you log right in. The switches do the same thing with both RADIUS and local authentication. The other strange part to all of this is that about 10 switches started doing it at the same time. As far as the actual configuration of the switches goes, nothing has changed. Occasionally, one switch will start working normally, but then stops again. These are all HP ProCurve managed switches, but all different models/firmware. Some switches that are not working are on the same firmware as others that are working. UPDATE 2013-03-12: I am also seeing this same behavior when trying to use telnet. The first telnet session just hangs there, and the second telnet session will let me log in. Rebooting the switches seems to get them working, but I still have 5 production switches that cannot easily be rebooted because of their production roles. Is anyone aware of anything else that can be switched on/off that may reset the logon for remote management, or something like that?
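
    Capturing where the first session stalls usually narrows this down. A sketch (addresses are placeholders, and the switch-side commands are from memory of the ProCurve CLI, so they may vary by model):

        ssh -vvv admin@10.0.0.2            # does it hang before or after the auth exchange?
        nc -v 10.0.0.2 22                  # is the SSH banner returned on a fresh TCP session?

        # from a session that did get in, look for wedged management sessions:
        #   show ip ssh
        #   kill <session-number>

    If the verbose client output stops right after the password is accepted, a stuck management session or session limit on the switch side is the usual suspect.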

    Read the article

  • How to correctly set GNU Screen to display currently running program in hardstatus

    - by johnny_bgoode
    In bash, displaying the name of the current program in the GNU Screen hardstatus line takes only two configuration lines. First, tell screen what the end of your prompt normally looks like, and supply a default title for a window when you are sitting at the shell prompt: shelltitle "$ |bash" Next, place this escape sequence in the PS1 variable, before the characters that normally terminate the prompt ('$ ' in this case): \033k\033\\ This technique works, to a point. The hardstatus window title is updated to the name of the currently running program, and then switches back to the default title shortly after execution is finished. One major problem, however, is that this escape string is not itself escaped, causing line-wrapping problems with commands longer than the initial line. This was annoying, so I set out looking for a solution. It turns out that simply escaping the previous escape sequence corrects line wrapping: \[\033k\]\[\033\\\] Great! My hardstatus window title still updates to the name of the currently running program, and now my longer commands wrap to the second line correctly. However, with this new escape sequence in my PS1, screen updates the window title to the actual command I am typing, not simply the name of the current program once it is executed. I am wondering, has anyone gotten this working correctly - i.e. line wrapping and proper updating of the hardstatus window title?

    Read the article

  • Apache APC (Windows): Can I optimize these APC settings more?

    - by ar099968
    I would like to optimize APC some more but I am not sure where I could do something. First here is the stats after 1 week of running with the current configuration: General Cache Information APC Version 3.1.9 PHP Version 5.4.4 APC Host XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Server Software Apache Shared Memory 1 Segment(s) with 128.0 MBytes (IPC shared memory, Windows Slim RWLOCK (native) locking) Start Time 2014/06/08 05:00:00 Uptime 6 days, 11 hours and 55 minutes File Upload Support 1 Host Status Diagrams Memory Usage Free: 99.7 MBytes (77.9%) Used: 28.3 MBytes (22.1%) Hits & Misses Hits: 510818 (99.9%) Misses: 608 (0.1%) Detailed Memory Usage and Fragmentation Fragmentation: 0.60% (609.8 KBytes out of 99.7 MBytes in 83 fragments) File Cache Information Cached Files 693 ( 35.4 MBytes) Hits 5143359 Misses 1087 Request Rate (hits, misses) 13.24 cache requests/second Hit Rate 13.24 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.01 cache requests/second Cache full count 0 User Cache Information Cached Variables 0 ( 0.0 Bytes) Hits 0 Misses 0 Request Rate (hits, misses) 0.00 cache requests/second Hit Rate 0.00 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.00 cache requests/second Cache full count 0 Runtime Settings apc.cache_by_default 1 apc.canonicalize 1 apc.coredump_unmap 0 apc.enable_cli 0 apc.enabled 1 apc.file_md5 0 apc.file_update_protection 2 apc.filters -/apc.php$, -/apc_clean.php$, -.tpl.cache.php$, -.tpl.php$, -.string.cache.php$, -.string.php$ apc.gc_ttl 3600 apc.include_once_override 0 apc.lazy_classes 0 apc.lazy_functions 0 apc.max_file_size 2M apc.num_files_hint 7000 apc.preload_path apc.report_autofilter 0 apc.rfc1867 0 apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.serializer default apc.shm_segments 1 apc.shm_size 128M apc.shm_strings_buffer 4M apc.slam_defense 0 apc.stat 1 apc.stat_ctime 0 apc.ttl 7200 apc.use_request_time 1 apc.user_entries_hint 4096 apc.user_ttl 7200 apc.write_lock 1
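
    With only ~35 MB of the 128 MB segment in use, ~700 cached files, and an essentially perfect hit rate, there is not much left to gain. The settings people usually revisit with numbers like these are sketched below (the ini path is a placeholder; measure before and after, and note that apc.stat=0 means PHP stops noticing edited files until the cache is cleared):

        # appended to the ini file that loads APC (path differs per install)
        {
          echo 'apc.shm_size=64M'          # segment sized closer to actual usage
          echo 'apc.stat=0'                # skip per-request stat() calls on deployed code
          echo 'apc.num_files_hint=1024'   # hint near the ~700 files actually cached
        } | sudo tee -a /path/to/conf.d/apc.ini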

    Read the article

  • Windows 2008 R2 DNS can't resolve own SOA

    - by user46742
    We have two domain controllers for our network. They both run DHCP, DNS, and AD DS. They are both VMs sitting on MS Hyper-V Server 2008 on separate physical hosts. Our primary DC went down a week ago. I promoted an already existing VM to primary DC and built a new VM for the secondary. Both DNS servers are running and the SOA is configured correctly for primary DC 1. However, when I run the Best Practices Analyzer it states that the server cannot resolve its own SOA and to check the configuration of the adapter. I checked, and the adapters are configured properly. I also went through the DNS entries thoroughly and made sure there were no records left from the previous DC that went down. NSLOOKUP resolves the domain and the primary DC fine. I also checked the firewalls on the machines, and our physical firewall, for any denied packets. Any suggestions? I appreciate any help!

    Read the article

  • Outlook 2010 Crashing Unpredictably

    - by cbkadel
    Very often when I open up Outlook 2010 and start doing things in it, it will hang and become non-responsive. I have tried letting it finish, but it never comes back (even after letting it try for up to 20 minutes). I generally have to restart Outlook and try again. Usually after about an hour of doing this, Outlook somehow snaps out of it and works for the rest of the day. It's generally in the morning (though I doubt that's the key variable). Generally, the emails that cause problems are HTML/formatted, but not always. What I've done so far to troubleshoot: installed the latest Outlook hotfix (I think Dec 14, 2010) and started Outlook in safe mode. Neither of those steps seems to make a difference. Usually, after about 10-15 restarts of Outlook on any given day, it starts working thereafter. My next step is to uninstall/reinstall Office 2010, but I'm hoping someone has seen this and knows what to do about it. My configuration is like this: Microsoft Online Services (using Microsoft's Sign In App) connecting to Exchange; two other Exchange accounts in this profile (a new feature in 2010) connected through Outlook Anywhere; the Live Meeting conferencing add-in; the People tab/add-in disabled; the "Send to Bluetooth" add-in disabled. Not sure what else to do?

    Read the article

  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, and so we cannot block with the actual client IP address (it would be the IP address of the CDN, not the actual client). So, we have $http_x_forwarded_for which contains the IP which we need to block for a given request. Similarly, we cannot use IP tables, as blocking the IP address of the proxied client will have no effect. We need to use nginx to block the requested based on the value of $http_x_forwarded_for. Initially, we tried multiple, simple if statements: http://pastie.org/5110910 However, this caused our nginx memory usage to jump considerably. We went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up, and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923 Keep in mind that we're trying to block many more than 3 or 4 IP addresses... more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly using multiple if blocks, and also if there are any better ways to achieve our goal.
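
    The usual low-memory alternative to a long chain of if blocks is the geo (or map) module, which builds one lookup table that is evaluated once per request; geo can key on a variable other than $remote_addr, so it works with the forwarded header. A sketch (the addresses are placeholders):

        # conf.d/blocklist.conf (illustrative)
        #   geo $http_x_forwarded_for $blocked {
        #       default            0;
        #       192.0.2.10/32      1;
        #       198.51.100.0/24    1;
        #   }
        # and inside the server/location block:
        #   if ($blocked) { return 403; }
        sudo nginx -t && sudo nginx -s reload

    A single table like this scales to hundreds of entries without the per-block overhead that a pile of separate if statements carries.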

    Read the article

  • Is there a way to log commands that a user runs in Windows 7?

    - by camster342
    I manage a large enterprise environment, and while we try to advise users not to, there are inevitably users that need local admin access to their machines. The problem is that some of these users like to "fiddle" and sometimes screw up their machines in "wonderful" ways. Is there an easy way to log what a user does on a machine, specifically in the command prompt? Maybe there are 3rd-party tools I could use to log this information? With Linux, which I used to use in past ages, you could look at a user's bash history file to see what commands they had run. While I realise that specific log could also be altered by the user if they wanted to cover their tracks, that is the sort of log I'm looking for. If there are ways I can also log other system-configuration-type changes they make (not necessarily command-line based), that's also useful. I know about the event/system logs and so on, but they don't necessarily catch all the information I need to figure out how the user has buggered their machine this time.

    Read the article

  • Glassfish and SSL [closed]

    - by Richard
    I'm struggling to get SSL working on GlassFish 3.1.1. I've been following tutorials like http://javadude.wordpress.com/2010/04/06/getting-started-with-glassfish-v3-and-ssl/ and SO posts like this: Issues with setting up SSL on Glassfish v3. The above links are for information only; I've summarised what I've done below. As far as I can tell I'm doing everything correctly, but I'm getting this error: "SSL configuration is invalid due to No available certificate or key corresponds to the SSL cipher suites which are enabled". Some background on what I have done: My cert is from GoDaddy. I generated the CSR from a new keystore (keystore.jks), then imported the resulting certs back into the same keystore and set the keystore password to the same password as the GF master password. Then I created a new SSL listener in GF and pointed it at my keystore file (which I copied into domains/domain1/config). I set the Nickname to the alias of my cert (which is something like 'mydomain.org', i.e. the name that I get when I run keytool -list). In the ciphers section of the network listeners page, I leave the defaults in place (empty, which means all ciphers are available, I think). In domain.xml I've replaced all instances of s1as with 'mydomain.org'. This is the question: what exactly is causing the error highlighted? I'm guessing it's a mismatch between my listener config and the aliases in my keystore, or something similar, but I'm not really sure what. Thanks
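
    That particular error usually means the listener's certificate nickname does not resolve to a usable private key: either the alias in keystore.jks is a trustedCertEntry rather than a PrivateKeyEntry (which happens when the CA reply is imported under a new alias instead of onto the original key pair), or the key password differs from the keystore/master password. Two quick checks (the alias and paths are the ones described above):

        keytool -list -v -keystore domains/domain1/config/keystore.jks | grep -E 'Alias name|Entry type'
        # the alias used as the listener nickname must show "PrivateKeyEntry"

        keytool -keypasswd -alias mydomain.org -keystore domains/domain1/config/keystore.jks
        # set the key password to match the keystore / GlassFish master password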

    Read the article

  • How would I change the DocumentRoot on the version of Apache that came pre-installed on my Mac OS X system?

    - by racl101
    OK, so I want to take advantage of the Apache server that comes installed on my Mac OS X system (which means I would rather not install my own version of Apache, since I might as well try to use what comes bundled), and as such, I went to change some settings in the configuration file /etc/apache2/httpd.conf. Namely, I changed these two lines:

        DocumentRoot "/Users/myusername/Sites"
        <Directory "/Users/myusername/Sites">

    so that they pointed to a folder inside my Dropbox folder (so I could have my docs sync to my Dropbox):

        DocumentRoot "/Users/myusername/Dropbox/public_html"
        <Directory "/Users/myusername/Dropbox/public_html">

    That didn't work. So then I figured, OK, maybe it was too much to ask to make a folder in my Dropbox the document root. So then I thought, what if I make the document root another folder of my choosing, like so:

        DocumentRoot "/Users/myusername/dev-sites/public_html"
        <Directory "/Users/myusername/dev-sites/public_html">

    and that didn't work either. After looking through the httpd.conf file for clues, it seems that only two directories work as DocumentRoot paths for the Apache that comes bundled with Mac OS X: /Users/myusername/Sites (or ~/Sites) and /Library/WebServer/Documents/. Trying to use any other directories didn't seem to work; I would get 403 errors in my browser. I was wondering if there is some other setting to change in the httpd.conf file, or permissions to set, to make this work. Any help would be appreciated, and many thanks in advance.
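
    On OS X a 403 after changing DocumentRoot is usually filesystem permissions rather than the config itself: Apache's _www user must be able to traverse every parent directory of the new root (and ~/Dropbox is typically not world-executable), and the matching <Directory> block needs an allow rule. A sketch of the checks, following the example paths above:

        sudo -u _www ls /Users/myusername/Dropbox/public_html   # can Apache's user even read it?
        chmod o+x ~/ ~/Dropbox ~/Dropbox/public_html            # traverse bit on each parent directory
        sudo apachectl configtest && sudo apachectl restart
        tail -f /var/log/apache2/error_log                      # the exact reason for the 403 is logged here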

    Read the article

  • Hybrid Exchange Online setup with on premise public folders, certificate issues?

    - by exxoid
    We have a Hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on premise. The hybrid configuration for the most part is working, what I am having an issue with is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works. When I am accessing Outlook on a public wifi I am able to bring up the cloud mailboxes and on premise public folders show up in Outlook. When I am accessing email via Outlook as a cloud user on the same LAN as the on premise exchange, the cloud user makes the outlook.com connection for live/ad/archive mailbox but fails to create a proxy connection for the on premise public folders. The error I get is a certificate mismatch, it seems that when a user on the LAN accesses Outlook/Exchange it is using a different certificate vs. when Outlook is launched on a WiFi network. When I look at the Outlook connection information, I see the connection to outlook.com for ad/live/archive mailbox but no entry for public folder connection. Our on premise Exchange is 2010 SP3 with latest CUs. The client is a domain joined laptop with Windows 7 and Office 2010 SP2, latest windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365. My question then is, what do I need to do to make sure that the Cloud user launching Outlook on the LAN uses the proper certificate (the wildcard 3rd party cert.. vs. the self signed certificate which it looks like it may be using during the connection attempt).

    Read the article

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical, it sends a mail, but when a server is down no mail is sent. Even if all the services go to a critical state, no mail is sent. I have the following configuration:

        define command {
            command_name notify-host-by-email
            command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
        }

        define command {
            command_name notify-service-by-email
            command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
        }

    The Python script just sends a mail. It works if I execute it from the command line, but no email is sent from Nagios. What am I doing wrong? UPDATE: The contact data is:

        define contact {
            contact_name                    root
            alias                           Root
            service_notification_period     24x7
            host_notification_period        24x7
            service_notification_options    w,u,c,r
            host_notification_options       d,r
            service_notification_commands   notify-service-by-email
            host_notification_commands      notify-host-by-email
            email                           [email protected]
        }

        define contactgroup {
            contactgroup_name   admins
            alias               Nagios Administrators
            members             root
        }
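
    Since the service notifications work and the two command definitions look equivalent, two things worth testing from the shell are whether the script also works as the user Nagios runs under, and whether the host objects themselves carry notification settings and a contact. A sketch (paths assume a Debian/Ubuntu nagios3 layout):

        sudo -u nagios python /etc/nagios3/send_mail.py "test subject" "test body"
        grep -RniE 'notifications_enabled|contact_groups|notification_period' /etc/nagios3/conf.d/
        tail -f /var/log/nagios3/nagios.log    # a real outage should log a "HOST NOTIFICATION" line

    If no HOST NOTIFICATION line ever appears in the log when a host goes DOWN, the problem is in the host/contact definitions rather than in the mail command.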

    Read the article

  • Postfix: How to configure Postfix with virtual Dovecot mailboxes?

    - by user75247
    I have configured a Postfix mail server for two domains: domain1.com and domain2.com. In my configuration, domain1 has both virtual users with Maildirs and aliases that forward mail to local users (e.g. root, webmaster) and some small mailing lists. It also has some virtual mappings to non-local domains. Domain2, on the other hand, has only virtual alias mappings, mainly to corresponding 'users' at domain1 (e.g. mails to [email protected] should be forwarded to [email protected]). My problem is that currently Postfix accepts mail even for those users that don't exist in the system. Mail to existing users and /etc/aliases works fine. The Postfix documentation states that the same domain should never be specified in both mydestination and virtual_mailbox_maps, but if I leave mydestination blank then Postfix validates recipients against virtual_mailbox_maps but rejects mail for the local aliases of domain1.com.

        /etc/postfix/main.cf:
            myhostname = domain1.com
            mydomain = domain1.com
            mydestinations = $myhostname, localhost.$mydomain, localhost
            virtual_mailbox_domains = domain1.com
            virtual_mailbox_maps = hash:/etc/postfix/vmailbox
            virtual_mailbox_base = /home/vmail/domains
            virtual_alias_domains = domain2.com
            virtual_alias_maps = hash:/etc/postfix/virtual
            alias_maps = hash:/etc/aliases
            alias_database = hash:/etc/aliases
            virtual_transport = dovecot

        /etc/postfix/virtual:
            domain1.com right-hand-content-does-not-matter
            firstname.lastname user1
            [more aliases..]
            domain2.com right-hand-content-does-not-matter
            @domain2.com @domain1.com

        /etc/postfix/vmailbox:
            [email protected] user1/Maildir
            [email protected] user2/Maildir

        /etc/aliases:
            root: :include:/etc/postfix/aliases/root
            webmaster: :include:/etc/postfix/aliases/webmaster
            [etc..]

    Is this approach correct, or is there some other way to configure Postfix with Dovecot (virtual) Maildirs and Postfix aliases?
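
    The pattern that usually resolves this, sketched below with example addresses, is to keep domain1.com out of local delivery entirely and express the root/webmaster forwards as virtual aliases rather than /etc/aliases entries, so that Postfix can reject anything not listed in virtual_mailbox_maps or virtual_alias_maps. Note also that the parameter is spelled mydestination, not mydestinations:

        sudo postconf -e 'mydestination = localhost'
        sudo postconf -e 'virtual_mailbox_domains = domain1.com'
        sudo postconf -e 'virtual_alias_domains = domain2.com'

        # forwards for root, webmaster, etc. become virtual aliases on domain1.com
        # (illustrative entries; real right-hand sides point at actual mailboxes):
        #   root@domain1.com        user1@domain1.com
        #   webmaster@domain1.com   user1@domain1.com
        sudo postmap /etc/postfix/virtual && sudo postfix reload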

    Read the article

  • Problem with connecting two different networks

    - by tanascius
    I have two networks: 192.168.13.0/24 (blue) and 192.168.15.0/24 (green). Computer A is connected to the 13-net only. Computer B has two interfaces, one in each network. A third computer acts as a router and connects the 13-net to the 15-net (only in this direction). Now, I'd like to ping 192.168.15.100 (B) from computer A. Unfortunately there is never a reply, but when I use a hub instead of a switch, it works. In my opinion the ping packet travels through the switch to the router (which is the default route/gateway for A). The router sends the packet back through the switch to B. Presumably B receives it on its 15-net interface but answers out of its other (13-net) interface? Is this possible? The problem is that B may only have a gateway of 192.168.13.50 - but I am not really sure of it (B is an embedded system with limited configuration possibilities). Can anyone explain what happens here? Thank you!
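
    Watching both of B's interfaces while A pings makes the asymmetry visible. A sketch (the interface names are assumptions):

        # on computer B
        sudo tcpdump -ni eth0 icmp      # 13-net interface: does the reply leave here?
        sudo tcpdump -ni eth1 icmp      # 15-net interface: does the request arrive here?
        ip route                        # which interface/gateway B would use to answer 192.168.13.x

    That shows whether the request actually reaches B and which path the reply takes back towards A.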

    Read the article
