Search Results

Search found 12484 results on 500 pages for 'host'.

  • Prevent SSL certificate being returned for a specific domain

    - by jezmck
    Apologies for a long question: we've taken on a new client whose web hosting was previously on their in-house server, which still hosts their Exchange/Outlook email. We now host their domain (and many others) on our server, and they're complaining that they're getting errors in Outlook. I don't understand the AutoDiscover mechanism at the root of the problem, but I believe I just need to stop the SSL certificate on our server being returned when it is requested for one particular domain. The explanation I was given was: "Yes it is, the issue lies with {newclient}.com being pointed to your server IP and that server has port 443 open with an SSL certificate associated to it. So when Outlook/ActiveSync use Autodiscover to find the mailbox settings, it finds your SSL (because 443 is open) and flags it as an error. The solution is to close 443 so it's not discovered; Autodiscover will then proceed to mail.{newclient}.com via the MX / service records and discover the correct SSL." I'm new here and there was no hand-over, so I don't know whether other currently hosted sites need to accept SSL connections, though I suspect some will, or may in future. This is a live server, so I can't risk trying lots of options in case I take the server offline! I feel like I should be adding something like the following to vhosts.conf:

        <VirtualHost *:443>
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com
            SSLEngine Off
            SSLCertificateFile {NONE}
            SSLCertificateKeyFile {NONE}
        </VirtualHost>

    Apologies for the fact that I don't know enough about this subject to be able to ask the question more clearly!
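
    A minimal sketch of one way to follow the quoted advice without touching the other SSL sites, assuming Apache with mod_ssl and mod_alias; the hostnames and certificate paths below are placeholders, not the real config. The idea is to keep answering on 443 for that name but refuse the Autodiscover probe so Outlook falls back to the MX-based lookup. Note that SSLEngine Off is not useful in a *:443 vhost, because the TLS handshake (and therefore the certificate-mismatch warning) happens before the requested hostname is known to older clients, so some warning may remain until the client stops probing this IP at all.

        <VirtualHost *:443>
            # hostnames stand in for the client's real domain
            ServerName {newclient}.com
            ServerAlias www.{newclient}.com autodiscover.{newclient}.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/server.crt    # existing cert, path assumed
            SSLCertificateKeyFile /etc/ssl/private/server.key
            # answer Autodiscover probes with 404 so Outlook tries the next lookup method
            RedirectMatch 404 (?i)^/autodiscover/
        </VirtualHost>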

  • correct file permissions for trac and git user to access gitolite server repos

    - by klemens
    Hi, this sounds like a stupid question (to me), but I couldn't find any info. On my server I host some git repositories via gitolite, and have a Trac instance for every repository. I have a user called git to push/pull from the server (git clone git@server:repo), and Trac is an Apache vhost with mod_wsgi, which runs as the www-data user. So what puzzles me (maybe because I don't have much of a clue about file permissions at all) is: what is the best permissions setup (chown, chmod) for the git repositories (/home/git/repositories/...)? www-data (or Trac) needs at least read permission (I think), and git (or gitolite) obviously needs read/write permission to push changesets. I tried a few things (i.e. adding www-data and/or git to the www-data/git group), but didn't get it right; at least one of the two (git or Trac) stops working. Any suggestions are highly appreciated. Regards, klemens
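
    A sketch of one common setup, assuming gitolite keeps its repositories under /home/git/repositories and Trac only needs read access; the group name and paths are assumptions, and gitolite's own umask setting (e.g. UMASK => 0027 in the gitolite rc file, depending on the gitolite version) should be loosened to match so that newly pushed objects stay group-readable:

        # let the Trac/Apache user read through the git group
        sudo usermod -a -G git www-data
        # make the existing repositories group-readable (and directories traversable)
        sudo chmod -R g+rX /home/git/repositories
        # keep the git group ownership on anything created later
        sudo find /home/git/repositories -type d -exec chmod g+s {} +

    After changing group membership, Apache has to be restarted so the www-data processes pick up the new group.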

  • Configure Domino to use SMTP routing and hMailServer

    - by Sébastien Lachance
    I have been trying for a couple of days to set up a Domino 8.5 server. Basically, I want everything to run inside a local network. Right now I can send email to other users in the Domino directory without any mail address. I am pretty new to all this, so maybe the answer will be really obvious. What I need is to be able to send a mail from somewhere else to a Domino user and have it redirected to his account. On the Domino server I also have hMailServer installed on port 25, so I configured Domino to use port 26. I followed these steps to get where I am now: I set the fully qualified Internet host name to "preview.notes"; I changed the SMTP Listener task to Enabled, to turn on the listener so that the server can receive messages routed via SMTP routing; I set up SMTP routing within the local Internet domain (http://www.h2l.com/help/help85%5Fadmin.nsf/f4b82fbb75e942a6852566ac0037f284/7f9738a49efc4f58852574d500097b01?OpenDocument); I modified the person document to use the [email protected] address; and I'm using hMailServer (which has the local "preview.local" domain name) to send mail to [email protected]. When sending mail I get an error telling me that the DNS is not set up correctly. Would using the Domino SMTP server instead of hMailServer solve the problem? I can telnet to the Domino SMTP server.
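
    A small sketch of the kind of DNS check that error usually points at: for SMTP routing to work, the sending server has to be able to resolve the recipient domain to a mail host. The names and port are the ones from the question; run this from the machine doing the sending:

        # does the recipient domain have an MX (or at least an A) record the sender can see?
        nslookup -type=mx preview.notes
        nslookup preview.notes
        # and can the sender reach the Domino SMTP listener on its non-standard port?
        telnet preview.notes 26

    If those lookups fail, the fix is in the local DNS (or the sending server's hosts/smart-host configuration) rather than in Domino itself.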

  • Hyper-V: Dedicated NIC for Guests VMs

    - by TheLizardKing
    I have two NICs and created a private virtual network for my virtual machines, and unchecked "Allow management operating system to share this network adapter", which basically turns the guest NIC into a sort of shell of a NIC on the host machine: the only thing checked in its properties is "Microsoft Virtual Network Switch Protocol", which I am fine with. Everything works and everything is connected. My issue is that for some reason my guest (Ubuntu Server with the legacy network driver) is not talking properly to my DHCP server. Specifically, my DHCP server reserves the guest's IP address using its MAC address, but the guest isn't picking it up. It picks up any old IP it can get, and I can't even ping the hostname from another PC on the network, though it pings fine if I use the IP. I see the guest showing up in my DHCP table, but I can't get the reservation to stick. Is there some reason it's only partially communicating with my DHCP server? Pinging its hostname on itself reveals it's using 127.0.0.1 instead of its network IP. Is this an issue with the legacy drivers used in Hyper-V?
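
    Two quick guest-side checks, sketched below; the file paths match a roughly 2010-era Ubuntu and are assumptions. The hostname resolving to 127.0.0.1 is usually an /etc/hosts entry rather than a driver problem, and a DHCP reservation only sticks if the guest keeps the same MAC (so pinning a static MAC on the legacy adapter in the VM's Hyper-V settings, instead of a dynamic one, is also worth trying) and actually sends its hostname with the request:

        # is the hostname pinned to a loopback address locally?
        grep -n "$(hostname)" /etc/hosts
        # does the DHCP client send the hostname with its requests?
        grep -n "send host-name" /etc/dhcp3/dhclient.conf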

  • Dell OMCI: Wacky values for Temperature and etc? (Win7x64)

    - by Yablargo
    Hey All. I am running a Dell Precision R5400 Workstation with dell OMCI installed. I am using it to test polling various data over WMI for our monitoring across the enterprise. I'm getting some weird results. perhaps someone can help point me in the direction of some clarification? Posted is the results of my DCIM\SYSMAN\DCIM_NumericSensor probe for sensor type 2(temp sensor) Microsoft (R) Windows Script Host Version 5.8 Copyright (C) Microsoft Corporation. All rights reserved. ----------------------------------- DCIM_NumericSensor instance ----------------------------------- Accuracy: AccuracyUnits: AdditionalAvailability: Availability: AvailableRequestedStates: BaseUnits: 2 Caption: CommunicationStatus: CreationClassName: DCIM_NumericSensor CurrentReading: -214748365 CurrentState: Unknown Description: DetailedStatus: DeviceID: Root/MainSystemChassis/TemperatureObj ElementName: Temperature Sensor:CPU0 EnabledDefault: 2 EnabledState: 2 EnabledThresholds: ErrorCleared: ErrorDescription: HealthState: 5 Hysteresis: IdentifyingDescriptions: InstallDate: IsLinear: LastErrorCode: LocationIndicator: LowerThresholdCritical: LowerThresholdFatal: LowerThresholdNonCritical: MaxQuiesceTime: MaxReadable: MinReadable: Name: NominalReading: NormalMax: NormalMin: OperatingStatus: OperationalStatus: 2 OtherEnabledState: OtherIdentifyingInfo: OtherSensorTypeDescription: PollingInterval: PossibleStates: Unknown,Normal,Fatal,Lower Non-Critical,Upper Non-Critical,Lower Critical,Upper Critical PowerManagementCapabilities: PowerManagementSupported: PowerOnHours: PrimaryStatus: ProgrammaticAccuracy: RateUnits: 0 RequestedState: 12 Resolution: SensorType: 2 SettableThresholds: Status: StatusDescriptions: StatusInfo: SupportedThresholds: SystemCreationClassName: DCIM_ComputerSystem SystemName: dt:5Q7BKK1 TimeOfLastStateChange: Tolerance: TotalPowerOnHours: TransitioningToState: 12 UnitModifier: 0 UpperThresholdCritical: UpperThresholdFatal: UpperThresholdNonCritical: ValueFormulation: 2 I'm not really sure whats going on, but note the CurrentReading: -214748365. I have reinstalled OMCI a few times, installed the OMCI 7x compatability and same thing I consistently get that error. It almost looks like its a issue between 32/64 bit value or something? Do I have to convert it to a float ? :)
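
    For what it's worth, a quick bit of arithmetic on that reading (my interpretation, not something from Dell's documentation): -214748365 is almost exactly -2147483648 / 10, i.e. the 32-bit signed minimum scaled by one decimal place, which looks like a "value not available" sentinel rather than a real temperature. CIM numeric sensors are also meant to be interpreted as CurrentReading x 10^UnitModifier (UnitModifier is 0 here), so a genuine CPU temperature would be a small number; anything sitting at the bottom of the 32-bit range is probably best treated as "sensor not populated" in the polling code rather than converted to a float.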

  • Apache Name-based Virtual Hosts - configuring httpd.conf file

    - by Dave
    Hi there. I am running a web app on Tomcat at the following location on my server: /var/tomcat/webapps/SoccerApp. I am looking to update the httpd.conf file with the following virtual host:

        <VirtualHost *:80>
            DocumentRoot /var/tomcat/webapps/SoccerApp/MyTeam
            ServerName www.mysoccerapp.com
        </VirtualHost>

    This gives me a 404 error, as the directory MyTeam does not exist. However, my application behaves in a way that it uses this URL directory as the name of the soccer team for which to display data, so it will never be a physical folder on the server. Nonetheless, I would like www.mysoccerapp.com to resolve to webapps/SoccerApp/MyTeam, even though the directory isn't there. Does this make any sense? Any ideas on how to get this working? At the end of the day, I want to do the following:

        www.teamone.com -> runs /webapps/SoccerApp/TeamOne
        www.teamtwo.com -> runs /webapps/SoccerApp/TeamTwo

    ...where TeamOne and TeamTwo are not physical directories, but merely processed by my SoccerApp application as the current soccer team to display data for. Many many thanks! Dave.
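
    A sketch of one way to do this with Apache in front of Tomcat, assuming mod_proxy and mod_proxy_ajp are loaded and Tomcat's AJP connector is on the default port 8009; DocumentRoot never comes into it, so no physical MyTeam/TeamOne directory is needed. Repeat (or generate) one block per team domain:

        <VirtualHost *:80>
            ServerName www.teamone.com
            # hand every request to the SoccerApp context, prefixed with the team name
            ProxyPass        / ajp://localhost:8009/SoccerApp/TeamOne/
            ProxyPassReverse / ajp://localhost:8009/SoccerApp/TeamOne/
        </VirtualHost>

    If Tomcat is serving port 80 itself with no Apache in front, the equivalent trick would be a Host-based mapping inside Tomcat instead, which is a different configuration.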

  • xauth, ssh and missing home directory

    - by flolo
    We have several servers, and normally everything works fine, except now... we are getting new air conditioning installed. This takes 36 hours, and for that time almost all servers are shut down; only 2 servers remain up for the most important tasks (i.e. accepting incoming email, delivering some important websites, the login server). Everybody was informed that if they needed data from their home directories they should fetch it before the shutdown. Long story short: someone realised that he has to run a certain program on one of the servers. No problem, he can log in remotely to our login server and run the program there without a home directory (the binaries are local and the necessary information can be copied to /tmp). That works like a charm until... the user needs to run a GUI program. I can find no easy way to make it run: usually ssh -Y honk@loginserver is enough, but now the home directory is missing and ssh is not able to copy the cookies into ~/.Xauthority (as the file server with the home directories is down). Paranoid as all sysadmins are, the X servers only listen locally, not on TCP ports, so no remote X connection is possible, and the SSH config is waterproof, i.e. there is no way to set environment variables. My problem is that the proxy MIT cookie generated by ssh gets lost because .Xauthority doesn't exist. If I could retrieve it somehow, I could re-enter it into an .Xauthority in /tmp. The only other option (besides changing the config) which came to my mind is making a tunnel (netcat, or better ssh) from the remote host to the login server and copying the cookie manually (not sure if the TCP-to-Unix-domain-socket stuff works as expected). Any good suggestions (for the future - now our servers are already up)?
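
    For next time, a minimal sketch of the lowest-effort workaround I can think of: give the affected account a temporary home on the login server's local disk, so sshd's xauth step has somewhere to write the forwarding cookie. The username is the one from the question and the path is an assumption; run as root on the login server:

        mkdir -p /tmp/honk-home && chown honk: /tmp/honk-home
        usermod -d /tmp/honk-home honk     # point the passwd entry at the temporary home
        # an ordinary `ssh -Y honk@loginserver` can now create /tmp/honk-home/.Xauthority
        # (remember to switch the home directory back once the file server returns)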

  • cannot reach munin port on other AWS instance

    - by Amedee Van Gasse
    2 AWS instances, in the same region but different availability zones; one is in regular EC2 and the other is in a VPC, both have an Elastic IP, and both are 64-bit Amazon Linux AMI 2014.03.1. Both are running munin-node, and the instance in the VPC is running munin-cron. I have added incoming TCP and UDP port 4949 to the security groups of both instances. On the munin node, I added an allow line with the IP address (as a regular expression) of the munin server to /etc/munin/munin-node.conf, and I bind munin-node to any interface using host *. Then I did sudo service munin-node restart and ran netstat:

        $ sudo netstat -at | grep munin
        tcp        0      0 *:munin                     *:*                         LISTEN

    So the port is open there. On the munin server AND on the munin node:

        $ nmap AMAZON-IP -p 80,4949 | grep tcp
        80/tcp   open   http
        4949/tcp closed munin

    On the munin node:

        $ nmap localhost -p 80,4949 | grep tcp
        80/tcp   open http
        4949/tcp open munin

    So from the outside the http port is open (Apache is running) but the munin port is closed. The node can't even reach the munin port on its own public IP address, but it can on localhost. I added port 80 as a sanity check, to be sure that there is network connectivity at all. So what am I overlooking here?
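
    A short diagnostic sketch (commands only, nothing here changes state): "closed" in nmap means something answered with a reset, so the usual suspects are a host firewall on the node or munin-node not bound where expected. Note also that traffic between an EC2-Classic instance and a VPC instance via Elastic IPs goes over the public side, so the node's security-group rule has to allow the munin server's public (elastic) IP as the source, not its private address:

        # any local firewall rule mentioning 4949?
        sudo iptables -L -n | grep 4949
        # what is munin-node really bound to, and which process owns it?
        sudo netstat -ltnp | grep 4949
        # from the munin server, try the node's private and public addresses explicitly
        nc -vz <node-private-ip> 4949
        nc -vz <node-elastic-ip> 4949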

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cpanel 1.0 (RC1). I am using the WordPress plugin Backup Buddy, which requires HTTP loopback connections to monitor / complete backups. This works fine on memelab.com.au, but doesn't work at any subdomain (e.g.: staging.memelab.com.au). Is it possible to setup an A record or some such to remedy this? I'm aware of a workaround, (setting WP_ALTERNATE_CRON) but I find this unsatisfactory due to the messy URLs. BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled Here is the reply from my host: …as main domain have it's own separate DNS entry it have localhost entry which helps for looback connections where as subdomains don't have separate DNS zone, so it is not possible to create looback connections for it. I have cpanel access to the 'advanced zone editor' - is there anything tricky I can do there? maybe 127.0.0.2? (I remember reading that there were at least 8 available local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au which points to 127.0.0.1. I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in Backup Buddy that loopback is not active, and Cpanel won't let me enter 127.0.0.1 (guess it doesn't work like that!) nslookup itours.memelab.com.au Server: 203.88.112.33 Address: 203.88.112.33#53 Non-authoritative answer: Name: itours.memelab.com.au Address: 117.55.224.177
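
    A loopback connection here is nothing more exotic than the web server fetching one of its own URLs, so it can be reproduced from a shell on the host; a sketch using the staging subdomain from the question (the second command forces the request onto 127.0.0.1 to mimic what the host claims only the main domain can do):

        curl -sI http://staging.memelab.com.au/ | head -1
        wget -qO- --header "Host: staging.memelab.com.au" http://127.0.0.1/ | head -5

    If the second form works but the first does not, the block is somewhere on the way back in via the public IP (firewall/NAT hairpinning) rather than anything a 127.0.0.x A record in the zone editor can fix, since such records only help clients resolving that name on the server itself.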

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules: # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example: http://www.borngeek.com/nothere/ GET /nothere/ HTTP/1.1 Host: www.borngeek.com {...} HTTP/1.1 404 Not Found However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity): {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...} This makes some sense to me, seeing as the original request was redirected to page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc). I'd love to have them show up on the Apache side as 404s so that they get filtered out from the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules): ErrorDocument 404 /index.php?error=404 Does anyone have a clever way to fix this annoyance? Additional Info: OS is Debian 6.0.4, and Apache version looks to be 2.2.22-3 (hosted on DreamHost) The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere)
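
    One hedged way to see which status Apache itself believes it sent (as opposed to what the stats packages infer from whichever log they read): record both the original and the final status code. This assumes you can add log directives at the vhost level, which may not be possible on shared hosting; %s is the status of the original request and %>s the final one after internal redirects, so a line reading "200 404" or "404 200" shows exactly where the mismatch is introduced.

        LogFormat "%h %l %u %t \"%r\" %s %>s %b \"%{Referer}i\"" statusdebug
        CustomLog /home/username/logs/status_debug.log statusdebug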

  • ssh from 1 ubuntu box to another ubuntu box

    - by michael
    Hi, I have 2 ubuntu boxes in a WiFi network. Below is the 'ifconfig' of my destination machine. But in my source machine, I tried 'ssh 192.168.1.2' I get connection refused. $ ifconfig eth0 Link encap:Ethernet HWaddr c8:0a:a9:4d:d6:6a UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Interrupt:35 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:4508 errors:0 dropped:0 overruns:0 frame:0 TX packets:4508 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:330441 (330.4 KB) TX bytes:330441 (330.4 KB) wlan0 Link encap:Ethernet HWaddr 00:23:14:32:e8:dc inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::223:14ff:fe32:e8dc/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:319828 errors:0 dropped:0 overruns:0 frame:0 TX packets:618371 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:30642011 (30.6 MB) TX bytes:921522542 (921.5 MB) How to set up so that I can ssh from 1 box to another?
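
    "Connection refused" on a machine you can otherwise reach usually just means nothing is listening on port 22, and desktop Ubuntu does not ship an SSH server by default; a sketch of the check and the likely fix, run on the destination machine (192.168.1.2):

        # is anything listening on port 22?
        sudo netstat -tlnp | grep :22
        # if not, install and start the server
        sudo apt-get install openssh-server
        sudo service ssh start
        # then, from the source machine
        ssh 192.168.1.2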

  • Selectively allow unsafe html tags in Plone

    - by dhill
    I'm searching for a way to put widgets from several services (PicasaWeb, Yahoo Pipes, Delicious bookmarks, etc.) on the community site I host on Plone (currently 3.2.1); in other words, a way to allow a group of users to use dangerous HTML tags. There are a few approaches I can see, but I don't know how to implement them. One would be changing safe_html only for the pages those editors own (1). Another would be to allow those tags on some subtree (2). And yet another would be finding an equivalent of the "static text portlet" that displays in the middle panel (3); we could then use one of the composite products (I stumbled upon Collage and CMFContentPanels) to include the unsafe content on other sites. My site has been plagued by advert bots, so I don't want to remove the filtering altogether, and I don't have an easy (no false positives) way of checking which users are bots, so deploying a captcha wouldn't help either. The question is: how do I implement any of those solutions? (I already asked this on the Plone mailing list without an answer, so I thought I would give it another try here.)

  • Connect to WEP Wireless Network by command line on Ubuntu

    - by Tim
    Hi, I am a newbie to both networking and Linux. I am trying to connect to a WEP wireless network from the command line on my Ubuntu 8.10, because the Network Manager does not support 64-bit WEP. (1) I first bring down the Network Manager and then try to connect to a wireless network whose ESSID is candy and whose password is 5673212741, but it fails as shown in the following. I wonder why, and how to do it correctly?

        $ sudo /etc/init.d/NetworkManager stop
         * Stopping network connection manager NetworkManager        [ OK ]
        $ sudo iwconfig wlan0 essid candy open
        $ sudo iwconfig wlan0 key 18018ce78e open
        $ sudo iwconfig wlan0 key 5673212741 open
        $ sudo dhclient wlan0
        There is already a pid file /var/run/dhclient.pid with pid 9971
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/
        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   Socket/fallback
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 20
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 13
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 9
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
        $ ping www.bbc.co.uk
        ping: unknown host www.bbc.co.uk

    (2) A less important question: why does scanning for wireless networks not work after I bring down the Network Manager?

        $ sudo /etc/init.d/NetworkManager stop
         * Stopping network connection manager NetworkManager        [ OK ]
        $ sudo iwlist wlan0 scan
        wlan0     Interface doesn't support scanning : Network is down

    Thanks and regards!
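
    A sketch of the usual missing step: once NetworkManager is stopped, the interface is left administratively down, which is exactly what the "Network is down" scan error says, and association/DHCP won't work on a down interface either. The key used below is the 10-hex-digit one from the question (a 64-bit WEP key is 40 bits, i.e. 10 hex digits), treated as a hex key rather than a passphrase:

        sudo /etc/init.d/NetworkManager stop
        sudo ifconfig wlan0 up               # bring the radio up first
        sudo iwconfig wlan0 essid candy
        sudo iwconfig wlan0 key 5673212741
        sudo iwconfig wlan0 key open
        sudo dhclient wlan0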

  • Postfix cannot deliver mail to Cyrus mailbox on Ubuntu 11.10 server

    - by user105804
    I have installed and configured Postfix and Cyrus IMAP server with webcyradm according to this document - http://www.delouw.ch/linux/Postfix-Cyrus-Web-cyradm-HOWTO/html/index.html . I can access webcyradm interface, I can create new domains and new users, and I can login via IMAP after creating the user account. However, Postfix fails to deliver mail to cyrus mailboxes. Mail log contains errors shown below. Installing any IMAP server other than cyrus is not an option because it is needed by the web application. Please advise me how to make Postfix deliver email to cyrus mailboxes. The solution should not necessary include web-cyradm, but there should be a web interface for managing mail domains and mailboxes as user-friendly as possible. Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: accepted connection Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: lmtp connection preauth'd as postman Dec 30 22:46:17 acer-tower postfix/cleanup[4868]: 065D5240035: message-id=<[email protected]> Dec 30 22:46:17 acer-tower cyrus/lmtpunix[4865]: verify_user(user.imap0001) failed: Mailbox does not exist Dec 30 22:46:17 acer-tower postfix/bounce[4867]: 6C6CA24185C: sender non-delivery notification: 065D5240035 Dec 30 22:46:17 acer-tower postfix/qmgr[4833]: 065D5240035: from=<>, size=3372, nrcpt=1 (queue active) Dec 30 22:46:17 acer-tower postfix/qmgr[4833]: 6C6CA24185C: removed Dec 30 22:46:17 acer-tower postfix/lmtp[4866]: 53421240372: to=<[email protected]>, orig_to=<[email protected]>, relay=home.webshop-software.ch[/tmp/lmtp], delay=165, delays=165/0.02/0.17/0.09, dsn=5.1.1, status=bounced (host home.webshop-software.ch[/tmp/lmtp] said: 550-Mailbox unknown. Either there is no mailbox associated with this 550-name or you do not have authorization to see it. 550 5.1.1 User unknown (in reply to RCPT TO command))
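
    A hedged sketch of the first thing worth checking from that log: Postfix hands the message to Cyrus for the local user imap0001, and Cyrus answers that mailbox user.imap0001 simply does not exist, so the failure may be on the Cyrus side rather than in Postfix. With cyradm (run as the Cyrus admin user configured in imapd.conf; mailbox naming depends on whether virtual domains and unixhierarchysep are enabled):

        cyradm -u cyrus localhost
        localhost> lm user.*            # list mailboxes; does user.imap0001 exist at all?
        localhost> cm user.imap0001     # create it by hand if web-cyradm failed to

    If the mailbox only exists under a virtual-domain name (e.g. user/info@yourdomain), then the mismatch is in how Postfix rewrites the recipient before LMTP, which in that howto is driven by the MySQL lookup tables.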

  • Is this "cache administrator" error my server's problem?

    - by Eoin
    Hey, I have a CentOS VPS running Apache with a phpBB installation. One specific user has received errors when posting a message or logging in to the forum. The following issue has arisen in parallel to installing nginx, which serves only the static files of my site. Not sure if this is only coincidence. Furthermore, my setup uses redirects (in some cases, double-redirects) to point the user to a different virtual folder. So, the forum is seen to be at /translation/ but the actual files are found in /phpbb/. I'm at a loss as to what may be the underlying issue. My server? The person's ISP? She has tested both at home and at work, with similar issues. While trying to process the request: GET /phpbb/index.php?sid=f62c927e7eb8f1d60a92dcc6fd918112 HTTP/1.1 Host: www.irishgaelictranslator.com User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-za Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://www.irishgaelictranslator.com/phpbb/ucp.php?mode=login Cookie: phpbb3_cipi4_u=96645; phpbb3_cipi4_k=; phpbb3_cipi4_sid=f62c927e7eb8f1d60a92dcc6fd918112; __utma=153470688.1232378553.1294664234.1294664234.1294664234.1; __utmb=153470688.9.10.1294664234; __utmc=153470688; __utmz=153470688.1294664235.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); style_cookie=null The following error was encountered: Invalid Response The HTTP Response message received from the contacted server could not be understood or was otherwise malformed. Please contact the site operator. Your cache administrator may be able to provide you with more details about the exact nature of the problem if needed.

  • Adding custom service to nagiosgraph

    - by ravloony
    I have successfully added nagiosgraph to our nagios installation. I also added a memory checker plugin, from here : http://blog.vergiss-blackjack.de/2010/04/nagios-plugin-to-check-memory-consumption/. However I can't seem to get the graph of this service to be output by nagiosgraph. The plugin returns a single line like this: 31% (3785 of 11903 MB) used so i added a rule like this to the map file: /output:(\d+)% \((\d+) of (\d+) MB\) used/ and push @s, ['Mem', ['Percentage', 'GUAGE', $1], ['Used', 'GUAGE', $2], ['Total', 'GUAGE', $3] ]; I have also read this : http://www.mail-archive.com/[email protected]/msg36835.html and made sure that process_performance_data=1 in the nagios conf file. So far I have no graph for the Mem service on any host, and no rrd file either. I am unsure how to proceed to get this working. The documentation is rather difficult to follow and I haven't managed yet to understand it enough to do this. Can anyone point me to a tutorial, or some documentation which explains the steps needed to get a service noticed and graphed by nagiosgraph?
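
    One thing worth ruling out, sketched below (I can't confirm it is the only problem): the data-source type in a nagiosgraph map entry has to be one of rrdtool's known DS types, and the correct spelling is GAUGE; with an unknown type the RRD creation fails quietly, which would match "no graph and no rrd file". A corrected version of the same rule:

        # map-file entry; the service output looks like "31% (3785 of 11903 MB) used"
        /output:(\d+)% \((\d+) of (\d+) MB\) used/
        and push @s, ['Mem',
                      ['Percentage', 'GAUGE', $1],
                      ['Used',       'GAUGE', $2],
                      ['Total',      'GAUGE', $3] ];

    It is also worth turning up nagiosgraph's insert-script debugging to confirm the rule is being matched against the service output at all.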

  • Configuring SASL support in libmemcached

    - by John Keyes
    I'm trying to build libmemcached with SASL support on OS X Mountain Lion. I have built memcached (1.4.15) with SASL support: $ memcached -S -vv Initialized SASL. slab class 1: chunk size 96 perslab 10922 ... slab class 42: chunk size 1048576 perslab 1 <17 server listening (binary) <18 server listening (binary) <19 send buffer was 9216, now 3728270 <20 send buffer was 9216, now 3728270 <19 server listening (udp) <20 server listening (udp) ... I am trying to build libmemcached with SASL support too. I have tried the following: $ ./configure --prefix=/usr/local \ --with-memcached-sasl=/usr/local/bin/memcached ... $ ./configure --prefix=/usr/local \ --with-memcached-sasl="/usr/local/bin/memcached -S" ... But the resulting configuration summary is the same for both: Configuration summary for libmemcached version 1.0.11 * Installation prefix: /usr/local * System type: apple-darwin12.2.0 * Host CPU: x86_64 * C Compiler: i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) * C Flags: -O2 -Werror -Wall -Wextra -std=c99 -Wbad-function-cast -Wmissing-prototypes -Wnested-externs -Woverride-init * C++ Compiler: i686-apple-darwin11-llvm-g++-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) * C++ Flags: -O2 -Werror -Wall -Wextra -Wpragmas -D_FORTIFY_SOURCE=2 -Waddress -Wchar-subscripts -Wcomment -Wctor-dtor-privacy -Wfloat-equal -Wformat=2 -Wmissing-field-initializers -Wmissing-noreturn -Wnon-virtual-dtor -Wnormalized=id -Woverloaded-virtual -Wpointer-arith -Wredundant-decls -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wstrict-overflow=1 -Wswitch-enum -Wundef -Wunused-variable -Wwrite-strings -fwrapv -ggdb * CPP Flags: -I/usr/local/include * Assertions enabled: no * Debug enabled: no * Warnings as failure: no * SASL support: Am I doing something incorrectly? Thanks.
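
    A sketch of what I would try instead, hedged because I can't verify this exact libmemcached release: the --with-memcached(-sasl) option only tells the build where a memcached binary lives for the test suite, while SASL support itself depends on the Cyrus SASL headers/library being detected (and, in versions that have it, on an --enable-sasl switch), and the summary line ends up empty when detection fails. With Homebrew on OS X that would look roughly like:

        brew install cyrus-sasl
        ./configure --prefix=/usr/local --enable-sasl \
            CPPFLAGS="-I/usr/local/opt/cyrus-sasl/include" \
            LDFLAGS="-L/usr/local/opt/cyrus-sasl/lib"
        grep -i sasl config.log | tail    # shows why detection failed, if it still does

    The configure summary should report SASL support before it is worth running make.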

  • Linux/hostapd: AP can ping clients, clients can access internet, can't access www@wlan1 with more than 5-6 packets at once

    - by mhambra
    Please edit the title, I can't make it sound better. -- OP. Hi all, I have a WiFi USB dongle in a PC that serves as an AP for a laptop. wlan1 is 192.168.2.1, netmask 255.255.255.0, routed with route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.2.1. Pinging 192.168.2.2 (the laptop) was fine for lots of packets. Now I try to access 192.168.2.1:80/myindex.html (Apache) from the laptop, and I can see that 1 kB test page. But trying to access 192.168.2.1:80/my.jpg, I see the following:

        GET /my.jpg HTTP/1.1
        200 OK
        <jpg header, about a kilobyte>
        <TCP packet retransmission>
        <TCP packet retransmission>
        <end of stream>

    It seems to be a hostapd problem (networking worked fine in ad-hoc mode), but it may also be a forwarding/routing problem. What should I google for? Even more strangely, SSH to that host works fine.
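
    A small diagnostic sketch for the "small transfers work, anything bigger stalls" pattern, which often comes down to MTU/fragmentation or a broken checksum/segmentation offload on the wireless interface rather than hostapd itself. Run the pings from the laptop against the AP (1472 = 1500 minus 28 bytes of IP+ICMP header); the ethtool line is worth trying on the AP if the dongle's driver supports it:

        ping -c 3 -M do -s 1472 192.168.2.1   # does a full-size, don't-fragment packet get through?
        ping -c 3 -M do -s 1400 192.168.2.1   # if only the smaller one works, it's an MTU problem
        # on the AP, try turning offloads off on the wifi interface and retest
        sudo ethtool -K wlan1 tx off rx off tso off gso off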

  • SSL connection hangs as client hello (curl, openssl client, apt-get, wget, everything)

    - by Niklas B
    Hi, I've run into a problem on my Debian VPS (a Xen domU) regarding SSL: almost all SSL connections hang at the client hello. For example:

        # curl -vI https://graph.facebook.com
        About to connect() to graph.facebook.com port 443 (#0)
        Trying 66.220.146.48... connected
        Connected to graph.facebook.com (66.220.146.48) port 443 (#0)
        successfully set certificate verify locations:
        CAfile: none
        CApath: /etc/ssl/certs
        SSLv3, TLS handshake, Client hello (1):

    It's the same when using the openssl client. However, some SSL traffic works (for example https://www.nordea.se). Server:

        # uname -a
        Linux server.com 2.6.26-1-xen-amd64 #1 SMP Fri Mar 13 21:39:38 UTC 2009 x86_64 GNU/Linux

    It does however work on my dom0 (the main Xen host). Apt-get: I can't even run apt-get update with the Debian security sources (it hangs on reading headers). OpenSSL: at the beginning I thought I had an old openssl client (0.9.8o-4), since I appeared to have a newer one on the dom0 (0.9.8g-15+lenny8), but doing a manual update of the openssl deb didn't help. OpenSSL client: this is the full output of when the openssl client hangs: http://pastebin.com/PAjwMap9 Closing thoughts: I've Googled the crap out of this and I'm not getting any further. I've seen problems reported with curl, apt-get etc., but they are all specific to the particular application, not general for the system. Any thoughts?
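
    A sketch of the first thing I would try on a Xen domU showing exactly this "small replies work, big handshakes hang" behaviour: segmentation/checksum offload on the virtual NIC mangling larger packets. The interface name is assumed to be eth0, and these are runtime-only settings, so they are easy to undo:

        sudo ethtool -K eth0 tx off tso off gso off
        curl -vI https://graph.facebook.com     # retest
        # if that helps, also check for an MTU blackhole, which gives the same symptom
        ping -c 3 -M do -s 1472 graph.facebook.com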

  • PHP displays blank white page even with all error reporting enabled

    - by Andy Shinn
    I am trying to debug a broken page in a Drupal application and am having a hard time getting PHP to spit anything useful out. I have the following set: error_reporting = E_ALL display_errors = On display_startup_errors = On log_errors = On error_log = /var/log/php/php_error.log I have a file showing me phpinfo() which confirms these variables are set correctly for the environment. I have increased memory_limit to 256M (which should be more than enough). Yet, the only indication I get is a status 500 code in the apache access log and a blank white page from PHP. The Apache virtual host has LogLevel set to debug and the error log only outputs: [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 0 to 2 : URL /index.php, referer: http://ec2-174-129-192-237.compute-1.amazonaws.com/admin/reports/updates [Sat Jun 16 20:03:03 2012] [error] [client 173.8.175.217] File does not exist: /var/www/favicon.ico [Sat Jun 16 20:03:03 2012] [debug] mod_deflate.c(615): [client 173.8.175.217] Zlib: Compressed 42 to 44 : URL /favicon.ico The PHP error log outputs nothing at all. kernel and syslog show nothing related to Apache or PHP. I have also tried installing suphp and checking its log just confirms the user is executing correctly: [Sat Jun 16 20:02:59 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000 [Sat Jun 16 20:05:03 2012] [info] Executing "/var/www/index.php" as UID 1000, GID 1000 This is on Ubuntu 12.04 x86_64 with the following PHP modules: ii php5 5.3.10-1ubuntu3.1 server-side, HTML-embedded scripting language (metapackage) ii php5-cgi 5.3.10-1ubuntu3.1 server-side, HTML-embedded scripting language (CGI binary) ii php5-cli 5.3.10-1ubuntu3.1 command-line interpreter for the php5 scripting language ii php5-common 5.3.10-1ubuntu3.1 Common files for packages built from the php5 source ii php5-curl 5.3.10-1ubuntu3.1 CURL module for php5 ii php5-gd 5.3.10-1ubuntu3.1 GD module for php5 ii php5-mysql 5.3.10-1ubuntu3.1 MySQL module for php5 So, what am I missing here? Why no error reporting?
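
    A hedged way to get the fatal error out of PHP when the browser only shows a blank 500 page: run the same entry point from a shell, where the web SAPI can't swallow the output. The path matches the DocumentRoot shown in the logs (/var/www) and assumes the CLI interpreter is close enough to the CGI one to reproduce the failure:

        cd /var/www
        php -d display_errors=1 -d error_reporting=E_ALL index.php > /dev/null
        # or through the CGI binary that Apache/suphp actually uses:
        REDIRECT_STATUS=1 php5-cgi index.php 2>&1 | head -40
        # and confirm which php.ini the CGI SAPI is really loading
        php5-cgi -i | grep "Loaded Configuration"

    Blank pages with an empty error log also commonly mean a different php.ini governs the SAPI that serves the request, or that the failure happens before the ini settings you set take effect.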

  • IPv6 working fine, IPv4 throws OpenSSL error

    - by jippie
    I am building a webserver (http://blog.linformatronics.nl/), which functions just fine on both IPv4 and IPv6 and when using a non-SSL connection. However when I connect to it through https, IPv6 works as expected, but an IPv4 connection throws a client side error. Server side logs are empty for the IPv4/https connection. Summarized in a table:

             |  http | https
        -----+-------+-------------------------------------------------------
        IPv4 | works | OpenSSL error, failed. No server side logging.
        -----+-------+-------------------------------------------------------
        IPv6 | works | self signed certificate warning, but works as expected

    Apparently the SSL tunnel isn't even set up, which accounts for the Apache logs being empty. But why does it work fine for IPv6 and fail for IPv4? My question is why is this OpenSSL error being thrown and how can I solve it? Below is some extra information about the setup.

    IPv6 https - command used to reproduce the IPv6/https behaviour:

        $ wget --no-check-certificate -O /dev/null -6 https://blog.linformatronics.nl
        --2012-11-03 15:46:48--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 2001:980:1b7f:1:a00:27ff:fea6:a2e7
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|2001:980:1b7f:1:a00:27ff:fea6:a2e7|:443... connected.
        WARNING: cannot verify blog.linformatronics.nl's certificate, issued by `/CN=localhost':
        Self-signed certificate encountered.
        WARNING: certificate common name `localhost' doesn't match requested host name `blog.linformatronics.nl'.
        HTTP request sent, awaiting response... 200 OK
        Length: 4556 (4.4K) [text/html]
        Saving to: `/dev/null'
        100%[=======================================================================>] 4,556       --.-K/s   in 0s
        2012-11-03 15:46:49 (62.5 MB/s) - `/dev/null' saved [4556/4556]

    IPv4 https - command used to reproduce the IPv4/https behaviour:

        $ wget --no-check-certificate -O /dev/null -4 https://blog.linformatronics.nl
        --2012-11-03 15:47:28--  https://blog.linformatronics.nl/
        Resolving blog.linformatronics.nl (blog.linformatronics.nl)... 82.95.251.247
        Connecting to blog.linformatronics.nl (blog.linformatronics.nl)|82.95.251.247|:443... connected.
        OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
        Unable to establish SSL connection.

    Notes: I am on Ubuntu Server 12.04.1 LTS.
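
    A sketch for pinning down what actually answers on 443 over IPv4: "unknown protocol" at that stage means the client received something that was not TLS at all, which typically points at a NAT/port-forward or a vhost serving plain HTTP on 443 for the IPv4 path only (the IPv6 address presumably reaches the box directly, while 82.95.251.247 may pass through a router). The commands are read-only:

        # what does the raw IPv4 connection return?
        echo | openssl s_client -connect 82.95.251.247:443 2>&1 | head -15
        # if it is plain HTTP, this comes back with HTML instead of a handshake failure
        curl -s http://82.95.251.247:443/ | head -15
        # on the server itself, confirm Apache really listens on 443 for IPv4
        sudo netstat -ltn | grep ':443'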

  • Node js server not responding outside localhost centos

    - by David Martinez
    I'm running a basic Express server on CentOS, but for some reason it is not responding outside of localhost. I have tried everything I have found on Google, but nothing works so far. This is my Express server:

        app.listen(3000, "0.0.0.0");

    If I do curl http://localhost:3000/ on the server it works fine. If I curl to the IP of the server it doesn't work. I already changed my iptables:

        num  target     prot opt source               destination
        1    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:80
        2    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:80
        3    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:3000

    There is currently an Apache server running on port 80 with no problems. I also tried setting up a VirtualHost in Apache, but it didn't work either:

        <VirtualHost *:80>
            ServerName SubDOmain.MyDomain.com
            ProxyRequests off
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:3000/
            ProxyPassReverse / http://localhost:3000/
            ProxyPreserveHost on
        </VirtualHost>

    There is another virtual host working fine that redirects to another DocumentRoot. I'm running Node as root for testing purposes, but the Node application's owner is another user. All folders have 705 and files 664. Edit: I stopped Apache and ran my Node app on port 80, and it worked fine; I could access the Node app from my IP and domain.
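
    A sketch of the firewall detail that usually bites here, and that matches "it works on port 80 but not on 3000": on a stock CentOS iptables setup the INPUT chain ends with a catch-all REJECT, so a rule appended with -A lands after that REJECT and never matches, while port 80 was already allowed above it. Inserting the rule instead of appending it is the usual fix (the line number below is an example):

        sudo iptables -L INPUT -n --line-numbers                  # find the REJECT ... line
        sudo iptables -I INPUT 4 -p tcp --dport 3000 -j ACCEPT    # insert *above* it
        sudo service iptables save                                # persist across reboots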

  • postfix not sending domain mail to mx

    - by orlandoresorts
    I'm trying to get Postfix to forward email for my domain, which is hosted by Gmail, as I don't have any users on my server nor do I want any. Here's how I have things set up. Let's say you and I have a domain called mcdonalds.com; the registrar has mcdonalds.com MX records pointing to Gmail (everything has worked for about a year). Now we set up a server to host a website. Then we create a mail account called [email protected] and send mail locally from the server using Roundcube. This works: we can send mail to cnn.com, we can send mail to serverfault.com, we can email any and everyone. BUT we cannot send mail to our own domain, mcdonalds.com. So I cannot email [email protected], I cannot email [email protected], I cannot email [email protected]. It gives the error:

        SMTP Error (450): Failed to add recipient "[email protected]"
        (4.1.1 : Recipient address rejected: User unknown in virtual mailbox table).

    I'm guessing this is because it is looking at the local server to find the mailbox, and the mailbox doesn't exist. So how do I tell the server, for any mail going to mcdonalds.com (e.g. [email protected]), to send it to my external mail server and NOT look it up on the local www box we set up with zpanel? Any ideas?
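
    A sketch of where I would look first: that error comes from the local Postfix treating mcdonalds.com as a domain it is the final destination for (a virtual mailbox domain, which zpanel typically backs with MySQL maps), so it consults the virtual mailbox table instead of following the public MX records to Google. The commands below only inspect the configuration; the fix is to remove the domain from whichever list it appears in and reload:

        postconf -n | egrep 'mydestination|virtual_mailbox_domains|virtual_alias_domains|relay_domains'
        # the map file name below is an assumption about zpanel's layout
        postmap -q mcdonalds.com mysql:/etc/postfix/mysql-virtual_domains.cf
        # after removing mcdonalds.com from the virtual domain list (or zpanel's domain table):
        sudo postfix reload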

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (ie School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain: Host the .crx on a server and click "Specify a Custom App" in the management console. Create a Domain App by uploading a zip to the Chrome Web Store Upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester," or make it unlisted Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the crx file, which has the right ID and MIME type, still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a domain app, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school, and have them publish it. All updates to the extension will also require a similar process, so this doesn't seem ideal. I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the Specify a Custom App feature? Any other suggestions for getting a Custom Extension pushed out by the management console? Thanks so much!
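
    On option (1), one detail that is easy to miss, sketched below as an assumption worth checking against the console documentation rather than a confirmed fix: the hosted-.crx route generally wants the extension ID plus the URL of an update manifest XML (served over HTTPS), with the .crx fetched from the codebase URL inside that manifest, rather than a direct link to the .crx itself. Something like:

        <?xml version='1.0' encoding='UTF-8'?>
        <gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
          <app appid='aaaabbbbccccddddeeeeffffgggghhhh'>   <!-- the extension's real ID -->
            <updatecheck codebase='https://example.com/extensions/school-a.crx' version='1.0.0' />
          </app>
        </gupdate>

    If the console entry points straight at the .crx, the install tends to fail silently, which would match the "simply not pushing out" behaviour you describe.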

  • setup lowcost image storage server with 24x SSD array to get high IOPS?

    - by Nenad
    I want to build, let's call it a low-cost Ra*SAN, which would host the images for our social site (many millions); we keep 5 sizes of every photo, at roughly 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. To get HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I think I can go with consumer MLC SSDs; I have read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think of my idea? I'm not sure between RAID 6 and RAID 10 (more IOPS, SSD cost); is ext4 OK for the filesystem; and would you use 1 or 2 RAID controllers, with an extender backplane? If anyone has built something similar I would be happy to get real-world numbers. UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives; they will be placed in a 12-bay DAS and attached to a PERC H800 (1 GB NV cache, manufactured by LSI with FastPath) controller. I plan to set up RAID 50 with ext4. If someone is wondering about benchmarks, let me know what you would like to see.
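
    The capacity arithmetic, just to sanity-check the "some 5 TB" figure (nominal gigabytes, before filesystem overhead): RAID 6 on 24 x 240 GB loses two drives to parity, so (24 - 2) x 240 = 5280 GB, about 5.3 TB, while RAID 10 on the same drives halves it to 12 x 240 = 2880 GB, about 2.9 TB. The updated 12 x 480 GB RAID 50 layout, assuming two 6-drive RAID 5 spans, works out to (12 - 2) x 480 = 4800 GB, about 4.8 TB.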
