Search Results

Search found 13866 results on 555 pages for 'apache commons math'.


  • Jersey non-blocking client

    - by Pavel Bucek
    Although Jersey already has support for making asynchronous requests, it is implemented in the standard blocking way: every asynchronous request is handled by one thread, and that thread is released only after the request is completely processed. That is fine for lots of cases, but imagine how it will behave when you need to make lots of parallel requests. You can of course limit the number of threads used for asynchronous requests (and that is really a wise thing to do - you do want to control your resources), but then you get another, less pleasant consequence: processing time will obviously increase.

    There are a few projects trying to deal with this problem, commonly referred to as async HTTP clients. I didn't want to reinvent the wheel, so I decided to use AHC - the Async Http Client made by Jeanfrancois Arcand. There is also an interesting implementation from Apache - HttpAsyncClient - but it is still in "very early stages of development", and the others weren't in similar or better shape than AHC.

    How does this work? Non-blocking clients let users make the same asynchronous requests as the standard approach, but the implementation is different: threads are utilized better and don't spend most of their time idle. Simply described - when you make a request (send it over the network), you are waiting for a reply from the other side. Here comes the main advantage of the non-blocking approach: it uses those threads for further work, such as making other requests or processing responses. Idle time is minimized, and your resources (threads) are used far more effectively.

    Who should consider using this? Everyone who is making lots of asynchronous requests. I haven't done a proper benchmark yet, but some simple tests are showing a huge improvement in cases where lots of concurrent asynchronous requests are made in a short period.

    Last but not least - this module is still experimental, so if you don't like something, have ideas for improvements, or any other feedback, feel free to comment on this blog post, send mail to [email protected], or contact me personally. All feedback is greatly appreciated!

    Maven dependency (will be present in the java.net maven 2 repo by the end of the day); link: http://download.java.net/maven/2/com/sun/jersey/experimental/jersey-non-blocking-client

        <dependency>
            <groupId>com.sun.jersey.experimental</groupId>
            <artifactId>jersey-non-blocking-client</artifactId>
            <version>1.9-SNAPSHOT</version>
        </dependency>

    Code snippet:

        ClientConfig cc = new DefaultNonBlockingClientConfig();
        cc.getProperties().put(NonBlockingClientConfig.PROPERTY_THREADPOOL_SIZE, 10); // default value, feel free to change
        Client c = NonBlockingClient.create(cc);
        AsyncWebResource awr = c.asyncResource("http://oracle.com");
        Future<ClientResponse> responseFuture = awr.get(ClientResponse.class);
        // or
        awr.get(new TypeListener<ClientResponse>(ClientResponse.class) {
            @Override
            public void onComplete(Future<ClientResponse> f) throws InterruptedException {
                ...
            }
        });

    Javadoc (temporary location, won't be updated): http://anise.cz/~paja/jersey-non-blocking-client/
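
    [Editor's note] As a rough illustration of the fan-out this enables (not from the original post), the fragment below continues from the author's snippet, so `awr` and the experimental module's imports are assumed to be in scope, and it would sit inside a method that declares `throws Exception`. `List`, `ArrayList` and `Future` come from java.util and java.util.concurrent.

        // Issue many requests up front; with the non-blocking client none of them
        // ties up a dedicated thread while it is in flight.
        List<Future<ClientResponse>> pending = new ArrayList<Future<ClientResponse>>();
        for (int i = 0; i < 100; i++) {
            pending.add(awr.get(ClientResponse.class));
        }
        // Collect the results afterwards; get() blocks only the collecting thread,
        // while the pool threads stay free to send and receive other requests.
        for (Future<ClientResponse> f : pending) {
            System.out.println(f.get().getStatus());
        }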

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs, and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about it and never read the documentation and/or warning messages.

    For example, when dealing with files, some developers are tempted to trace the names of the files. If we trace everything on error, then before appending a file name to a directory it will be easy to notice, for example, that the appended name is too long, and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but this is sensitive data and must never appear in logs. In the same way, the following must never appear in the trace:
    • Passwords,
    • IP addresses and network information (MAC address, host name, etc.)¹,
    • Database accesses,
    • Direct input from the user and stored business data.

    So what other types of information must be banished from the logs? Are there any guidelines already written which I can use?

    ¹ Obviously, I'm not talking about things like IIS or Apache logs. What I'm talking about is the sort of information which is collected with the only intent of debugging the application itself, not to trace the activity of untrusted entities.

    Edit: Thank you for your answers and your comments. Since my question is not too precise, I'll try to answer the questions asked in the comments:

    What am I doing with the logs? The logs of the application may be stored locally, which means either in plain text on the hard disk of the local machine, in a database (again in plain text), or in Windows Events. In every case, the concern is that those stores may not be safe enough. For example, when a customer runs an application and this application stores its logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the Internet. For example, if a customer has an issue with an application, we can ask her to run this application in full-trace mode and to send us the log file. Also, some applications may send crash reports to us automatically (and even if there are warnings about sensitive data, in most cases customers don't read them).

    Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for bringing that up; I probably should take a look at those fields for some clues about what I can include in the guidelines.

    Isn't it easier to encrypt the data? No. It would make every application much more complicated, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about the logs submitted to us by a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in the logs at all, and to never have to care about how and where those logs are stored.
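
    [Editor's note] As an illustration of the "never include it in the first place" approach (not something from the question), here is a minimal Java sketch of a logging helper that masks values flagged as sensitive before they reach the trace; the class, method and logger names are made up for the example.

        import java.util.logging.Logger;

        // Hypothetical helper: callers pass values through mask() when the value is
        // sensitive (file paths, user input, credentials), so the raw value never
        // reaches the log, only enough context to debug with.
        final class SafeTrace {
            private static final Logger LOG = Logger.getLogger("app.trace");

            // Keep only coarse, non-identifying properties of the value.
            static String mask(String sensitive) {
                if (sensitive == null) {
                    return "<null>";
                }
                return "<redacted, length=" + sensitive.length() + ">";
            }

            static void traceFileAppend(String directory, String fileName) {
                // The lengths are enough to diagnose a "concatenated path too long"
                // bug without storing the actual directory or file name.
                LOG.fine("appending file to directory: dirLen=" + directory.length()
                        + " fileLen=" + fileName.length());
            }
        }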

    Read the article

  • SJS AS 9.1 U2 (GF v2 U2) - Patch 25 // GF v2.1 - Patch 19 // Sun GlassFish Enterprise Server v2.1.1 Patch 13

    - by arungupta
    SJS AS 9.1 U2 (GF v2 U2) patch 25 is a commercial (Restricted) patch (see Overview of GFv2) available as part of Oracle's Commercial Support for GlassFish. This release is also patch 19 of GlassFish 2.1 and patch 13 of GlassFish 2.1.1. The file-based patches were released on Sep 1, 2011; package-based patches were released on Sep 13, 2011.

    Release Overview

    Description:
    SJS AS 9.1 U2 (GFv2 U2) - Patch 25 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
    GlassFish 2.1 - Patch 19 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
    GlassFish 2.1.1 - Patch 13 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.

    Patch Ids: This release comes in 3 different variants:
    Package-based patches with HADB
    • Solaris SPARC - [128640-27]
    • Solaris i586 - [128641-27]
    • Linux RPM - [128642-27]
    File-based patches with HADB
    • Solaris SPARC - [128643-27]
    • Solaris i586 - [128644-27]
    • Linux - [128645-27]
    • Windows - [128646-27]
    File-based patches without HADB
    • Solaris SPARC - [128647-27]
    • Solaris i586 - [128648-27]
    • Linux - [128649-27]
    • Windows - [128650-27]
    • AIX - [137916-27]

    Update Date: Nov 23, 2011

    Comment: Commercial (for-fee) release with regular bug fixes. This is patch 25 for SJS AS 9.1 U2; it is also patch 19 for GlassFish v2.1 and patch 13 for GlassFish v2.1.1. It contains the fixes from the previous patches plus fixes for 18 unique defects.

    Status: CURRENT

    Bugs Fixed in this Patch:
    • [12823919]: RESPONSE BYTECHUNK FLUSH WILL GENERATE A MIMEHEADER WHEN SESSION REPLICATION ON
    • [12818767]: INTEGRATE NEW GRIZZLY 1.0.40
    • [12807660]: BUILD, STAGE AND INTEGRATING HADB
    • [12807643]: INTEGRATE MQ 4.4 U2 P4
    • [12802648]: GLASSFISH BUILD FAILED DUE TO METRO INTEGRATION
    • [12799002]: JNDI RESOURCE NOT ENABLED IF TARGETTING USING ADMIN GUI ON GF 2.1.1 PATCH 11
    • [12794672]: ORG.APACHE.JASPER.RUNTIME.BODYCONTENTIMPL DOES NOT COMPACT CB BUFFER
    • [12772029]: BUG 12308270 - NEED HOTFIX FROM GF RUNNING OPENSSO
    • [12749346]: VERSION CHANGES FOR GLASSFISH V2.1.1 PATCH 13
    • [12749151]: INTEGRATING METRO 1.6.1-B01 INTO GF 2.1.1 P13
    • [12719221]: PORTUNIFICATION WSTCPPROTOCOLFINDER.FIND NULLPOINTEREXCEPTION THROWN
    • [12695620]: HADB: LOGBUFFERSIZE CALCULATED INCORRECTLY FOR VALUES 120 MB AND THE MEMORY FO
    • [12687345]: ENVIRONMENT VARIABLE PARSING FOR SUN_APPSVR_NOBACKUP CAN FAIL DEPENDING ENV VARS
    • [12547651]: GLASSFISH DISPLAY BUG
    • [12359965]: GEREQUESTURI RETURNS URI WITH NULL PREPENDED INTERMITTENT AFTER UPGRADE
    • [12308270]: SUNBT7020210 ENHANCE JAXRPC SOAP RESPONSE USE PREVIOUS CONFIGURED NAMESPACE PREF
    • [12308003]: SUNBT7018895 FAILURE TO DEPLOY OR RUN WEBSERVICE AFTER UPDATING TO GF 2.1.1 P07
    • [12246256]: SUNBT6739013 [RN]GLASSFISH/SUN APPLICATION INSTALLER CRASHES ON LINUX

    Additional Notes: More details about these bugs can be found at My Oracle Support.

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD webserver on TCP port 80, or, if secure connections are enabled (SSL/TLS), on TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are "multiplexed" onto a single "well-known" TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. The technique takes advantage of the fact that, as a "well-known service", port 443 is usually "open", allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers, and the information they contain, directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, and then re-encrypting the connections and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in the network; it connects only to the gateway, and the gateway takes care of those internal network details.

    The Secure Gateway supports the same "single-port firewall" capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • Type of computer for a developer on the road

    - by nabucosound
    Hi developers: I am planning to be traveling through Eurasia and Asia (Russia, China, Korea, Japan, South East Asia...) for a while and, although there are plenty of marvelous things to see and to do, I must keep on working :(. I am a python developer, dedicated mainly to web projects. I use django, sqlite3, browsers, and occasionally (only if I have no choice) I install postgres, mysql, apache or any other servers commonly used on the internet. I do my coding in vim, use ssh to connect, lftp to transfer files, IRC, grep/ack... so I spend most of my time in terminal shells. But I also use IM or Skype to communicate with my clients and peers, as well as some other software (which, after all, is not mandatory for my day-to-day work).

    I currently work with a Macbook Pro (3 years old now) and so far I am very happy with the performance. But I don't want to carry it if I am going to be "in transit" for a long time; it is simply too big and heavy for what I am planning to load into my rather small backpack (while traveling, less is more, you know). So here I am, reading all kinds of opinions about netbooks, because at first sight this is the kind of computer I thought I had to choose. I am going to use Linux on it; Microsoft is not my cup of tea, and Mac is not available for them, unless I were to buy a Macbook Air, something I won't do because if I am robbed, or rain/dust/truck loaders break it, I would burst into tears.

    I am concerned about wifi performance and connectivity. I am going to use one of those linux distros/tools to hack/test on "open" networks (if you know what I mean) in case I am not in a place with real free wifi access and I find myself in an emergency. CPU speed should be acceptable, but since I don't plan to run Photoshop or expensive IDEs, I guess most of the time I won't be overloading the machine. Apart from this, maybe (surely) I am missing other features to consider.

    With that said (sorry about the length), here comes my question, raised from a deep ignorance regarding the war between netbooks and notebooks (I assume tablet PCs are not for programming yet): If I buy a netbook, will I have to throw it away after 1 month on the road and buy a notebook? Or will I be OK? Thanks! Hector

    Update: I have received great feedback so far! I would like to insist on the fact that I will be traveling through many different countries and scenarios. I am sure that while in Japan I will be more than fine with anything related to technology, connectivity, etc. But consider that I will be, for example, on a train through Russia (the Trans-Siberian) and will cross Mongolia as well. I will stay at friends' places sometimes, but most of the time I will have to work from hostel rooms, trains, buses, beaches (hey, this last one doesn't sound too bad, hehe!). I think some of your answers, guys, seem to focus on the geek part but lose the point of this "on the road" fact. I am very aware and agree that netbooks suck compared to notebooks, but what I am trying to do here is to find a balance and learn from your experiences with netbooks, to see first hand if a netbook would be a failure in the mid-to-long term of the trip for my purposes.

    So I have summarized the main concepts expressed here in this small list, in no particular order:
    • keyboard/touchpad feel: I use vim, so there is no need to move mouse pointers that much, unless I am browsing the web, but there is intensive use of the keyboard
    • screen real estate: again, terminal work most of the time
    • battery life: I think something very important
    • weight/size: also very important
    • looks not worth stealing, don't give a shit if it is lost/stolen/broken: this may depend on the kind of person, your economy, etc. Also, to prevent losing work, I will upload EVERYTHING to the cloud whenever I have a chance.
    • wifi: I don't want to discover that my wifi is one of those that cannot deal with half the routers on this planet or has poor connectivity.

    Thanks again for your answers and comments!

    Read the article

  • How should I define the pom.xml in each module so that the web module can communicate with the other two EJB modules?

    - by Kayser
    Maven, maven, maven. It's supposed to be very nice, and it is nice for a small application. Now I want to build an EAR project: two EJB modules, a web module, and an EAR module that builds the ear file. The web module is dependent on the other EJB modules. How should I define the pom.xml in each module so that the web module can communicate with the other two EJB modules in the ear, and the EAR module builds the right ear file?

    What I have done before:

    Module 1 -- Basic Module. All other modules are dependent on this module. Basic functionality like login etc.

        <packaging>ejb</packaging>

    Module 2 -- Data Module. All entities are here. Type EJB.

        <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_Basic</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>ejb</type>
        </dependency>

    Module 3 -- Business Module. Business facades are here. Type EJB.

        <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_Basic</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>ejb</type>
        </dependency>

    Web Module - Type is WAR.

        <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_Basic</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>ejb</type>
        </dependency>

    EAR Module -- In this module I try to build the ear.

        <packaging>ear</packaging>
        <dependencies>
            <dependency>
                <groupId>com.myCompnay</groupId>
                <artifactId>Modul_Basic</artifactId>
                <version>0.0.1-SNAPSHOT</version>
                <type>ejb</type>
            </dependency>
            <dependency>
                <groupId>com.myCompnay</groupId>
                <artifactId>Modul_Business</artifactId>
                <version>0.0.1-SNAPSHOT</version>
                <type>ejb</type>
            </dependency>
            <dependency>
                <groupId>com.myCompnay</groupId>
                <artifactId>Modul_WEB</artifactId>
                <version>0.0.1-SNAPSHOT</version>
                <type>war</type>
            </dependency>
        </dependencies>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-ear-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
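
    [Editor's note] Not part of the question, but for context: once the EAR assembles correctly, code in the web module can call beans from the EJB modules through plain injection. A minimal Java sketch, assuming a hypothetical CustomerFacade business interface; all names below are made up and only illustrate where each piece would live relative to the modules above.

        import java.io.IOException;
        import javax.ejb.EJB;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical local interface, placed in a module both the WAR and the
        // business EJB module depend on (for example Modul_Basic):
        public interface CustomerFacade {
            long countCustomers();
        }

        // Hypothetical servlet in Modul_WEB; the container injects the bean that
        // Modul_Business deploys, because both end up in the same EAR.
        @WebServlet("/customers")
        public class CustomerServlet extends HttpServlet {

            @EJB
            private CustomerFacade customerFacade;

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.getWriter().println(customerFacade.countCustomers());
            }
        }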

    Read the article

  • SSI: Failed String Comparison with CGI Environment Variable [migrated]

    - by Calyo Delphi
    I am currently working on developing a personal website. It's not my first time doing this, but this is my first major foray into implementing SSI. I've run myself into a wall, however, with an if-else directive that uses one of the CGI environment variables as part of its comparison. Even after some limited attempts at debugging, all of the output and documentation that I have means that the comparisons being made should fail outright. This is not the case, and the wrong evaluation is being made by the if-else directive.

    Here's the code in the file index.shtml:

        <head>
          <!--#set var="page" value="Home" -->
          <!--#include file="headlinks.shtml" -->
          <style>
            img#ref { float: right; margin-left: 8px; border-width: 0px; }
          </style>
        </head>

    Here's the code in the file headlinks.shtml:

        <title><!--#echo var="page" --> &ndash; <!--#echo var="HTTP_HOST" --></title>
        <!--#set var="docroot" value="${DOCUMENT_ROOT}" -->
        <!--#echo var="docroot" -->
        <!--#if expr="( $docroot != '/Applications/MAMP/htdocs' ) || ( $docroot != '/home/dragarch/public_html' )" -->
          <link rel="stylesheet" type="text/css" href="../style.css">
          <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" />
        <!--#else -->
          <link rel="stylesheet" type="text/css" href="style.css">
          <link rel="shortcut icon" type="image/svg+xml" href="favicon.svg" />
        <!--#endif -->

    And here's the output for the file index.shtml:

        <title>Home &ndash; dragarch</title>
        /Applications/MAMP/htdocs
        <link rel="stylesheet" type="text/css" href="../style.css">
        <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" />

    Both style.css and favicon.svg are in the document root with index.shtml, so the if directive should fail and default to the output of the else directive. As you can see, while the document root (which is currently the MAMP htdocs folder on my own notebook) is correct according to the output of the echo directive, the comparison in the if-else directive fails to compare the strings properly.

    I'm using this page for my documentation: http://httpd.apache.org/docs/2.2/mod/mod_include.html

    I'm at a complete loss as to why this is the case, and need a bit of help here.

    EDIT: I should note that dragarch is a hostname that I configured in /etc/hosts to point to 127.0.0.1 so I could test the site without having to use localhost. It has no real effect on the functionality of anything, other than to just act as a prettier hostname to use.

    Read the article

  • Apache2 configuration error: "<VirtualHost> was not closed" error

    - by Chris
    So I've already checked through my config file and I really can't see an instance where any tag hasn't been properly closed... but I keep getting this configuration error... Would you mind taking a look through the error and the config file below? Any assistance would be greatly appreciated. FYI, I've already googled the life out of the error and looked through the log extensively; I really can't find anything.

    Error:

        apache2: Syntax error on line 236 of /etc/apache2/apache2.conf: syntax error on line 1 of /etc/apache2/sites-enabled/000-default: /etc/apache2/sites-enabled/000-default:1: <VirtualHost> was not closed.

    Line 236 of apache2.conf:

        # Include the virtual host configurations:
        Include /etc/apache2/sites-enabled/

    Contents of 000-default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:443>
            SetEnvIf Request_URI "^/u" dontlog
            ErrorLog /var/log/apache2/error.log
            Loglevel warn
            SSLEngine On
            SSLCertificateFile /etc/apache2/ssl/apache.pem
            ProxyRequests Off
            <Proxy *>
                AuthUserFile /srv/ajaxterm/.htpasswd
                AuthName EnterPassword
                AuthType Basic
                require valid-user
                Order Deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:8022/
            ProxyPassReverse / http://localhost:8022/
        </VirtualHost>

    UPDATE: I had a load of other issues with my install so I wound up just wiping it and reinstalling. If I run into the same problem, I'll repost. Everyone, thanks for your help/suggestions.

    Read the article

  • ModSecurity compile error on nginx

    - by user146481
    I'm trying to install ModSecurity on nginx with the following instructions:

        wget https://github.com/SpiderLabs/ModSecurity/archive/master.zip
        unzip master
        cd ModSecurity-master
        ./autogen.sh
        ./configure --enable-standalone-module

    And I got the following error:

        Checking plataform... Identified as Linux
        configure: looking for Apache module support via DSO through APXS
        configure: error: couldn't find APXS

    After installing httpd-devel and running

        ./configure --enable-standalone-module --with-apxs=/usr/sbin/apxs ; make

    the ModSecurity compile works, but I still have another error when compiling nginx with

        ./configure --add-module=/usr/local/src/john/ModSecurity-master/nginx/modsecurity

    and I got this error:

        gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include/apache2 -I /usr/include/apr-1.0 -I /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone -I /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2 -I /usr/include/libxml2 -I objs -I src/http -I src/http/modules -I src/mail \
            -o objs/addon/modsecurity/ngx_http_modsecurity.o \
            /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c
        In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:20:23: error: http_core.h: No such file or directory
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:21:26: error: http_request.h: No such file or directory
        In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:37, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_logging.h:41:23: error: apr_pools.h: No such file or directory
        In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:38, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:26:25: error: apr_general.h: No such file or directory
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:27:24: error: apr_tables.h: No such file or directory
        In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:38, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28:
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:44: error: expected specifier-qualifier-list before ‘apr_array_header_t’
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:65: error: expected specifier-qualifier-list before ‘apr_array_header_t’
        cc1: warnings being treated as errors
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: data definition has no type or storage class
        /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: type defaults to ‘int’ in
declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:135: error: expected ‘,’ or ‘;’ before ‘multipart_cleanup’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_multipart.h:137: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:39, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_pcre.h:41: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_pcre.h:45: error: expected ‘)’ before ‘*’ token In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:19:27: error: apr_file_info.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:41, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: type defaults to ‘int’ in declaration of ‘apr_table_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:21: error: expected ‘,’ or ‘;’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/persist_dbm.h:24: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:20:19: error: httpd.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:21:24: error: ap_release.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:24:26: error: apr_optional.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: expected declaration specifiers or ‘...’ before ‘modsec_register_tfn’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: expected declaration specifiers or ‘...’ before ‘(’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:30: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: expected declaration specifiers or ‘...’ before ‘modsec_register_operator’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: expected declaration specifiers or ‘...’ before ‘(’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:31: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:32: error: expected declaration specifiers or ‘...’ before ‘modsec_register_variable’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:33: error: expected declaration specifiers or ‘...’ before ‘(’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:32: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:36: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: expected declaration specifiers or ‘...’ before ‘modsec_register_reqbody_processor’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: expected declaration specifiers or ‘...’ before ‘(’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:37: error: type defaults to ‘int’ in declaration of ‘APR_DECLARE_OPTIONAL_FN’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:56: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:58: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: data definition has no type or storage class 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:65: error: expected ‘,’ or ‘;’ before ‘input_filter’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:68: error: expected ‘,’ or ‘;’ before ‘output_filter’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:70: error: expected ‘,’ or ‘;’ before ‘read_request_body’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:77: error: expected ‘,’ or ‘;’ before ‘send_error_bucket’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:83: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:85: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:93: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/apache2.h:95: error: expected ‘)’ before ‘*’ token In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:43:25: error: http_config.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:29, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:59: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:61: error: expected ‘,’ or ‘;’ before ‘collection_original_setvar’ 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:63: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:67: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:70: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:75: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:76: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:86: error: expected specifier-qualifier-list before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:94: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:101: error: expected specifier-qualifier-list before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:111: error: expected ‘,’ or ‘;’ before ‘msre_ruleset_process_phase’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: error: expected ‘,’ or ‘;’ before ‘msre_ruleset_process_phase_internal’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:115: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:143: error: expected specifier-qualifier-list before ‘apr_ipsubnet_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:149: error: expected specifier-qualifier-list before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:189: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:219: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:235: error: expected specifier-qualifier-list before ‘fn_tfn_execute_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:239: error: expected declaration specifiers or ‘...’ before ‘fn_tfn_execute_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:258: error: expected declaration specifiers or ‘...’ before ‘apr_table_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:258: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:285: error: expected specifier-qualifier-list before ‘apr_table_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: expected declaration specifiers or ‘...’ before ‘*’ token 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: type defaults to ‘int’ in declaration of ‘apr_status_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: ‘apr_status_t’ declared as function returning a function /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:341: error: ‘apr_status_t’ redeclared as different kind of symbol /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:113: note: previous declaration of ‘apr_status_t’ was here /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:342: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:342: error: ‘fn_action_execute_t’ declared as function returning a function /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:369: error: expected specifier-qualifier-list before ‘fn_action_init_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:399: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:403: error: expected declaration specifiers or ‘...’ before ‘apr_array_header_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:403: error: ‘msre_parse_vars’ declared as function returning a function /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/re.h:415: error: expected specifier-qualifier-list before ‘apr_size_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:40, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:54: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:62: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:66: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:68: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:70: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:74: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:76: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:82: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:88: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:90: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:92: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:100: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:102: error: expected ‘)’ before ‘*’ token 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:104: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:106: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:108: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:110: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:112: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:114: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:128: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:132: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:136: error: expected ‘)’ before ‘*’ token /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: data definition has no type or storage class /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: type defaults to ‘int’ in declaration of ‘apr_fileperms_t’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:140: error: expected ‘,’ or ‘;’ before ‘mode2fileperms’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_util.h:144: error: expected declaration specifiers or ‘...’ before ‘apr_pool_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:41, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_xml.h:43: error: ‘xml_cleanup’ declared as function returning a function In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_geo.h:38:25: error: apr_file_io.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:42, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_geo.h:58: error: expected specifier-qualifier-list before ‘apr_file_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:43, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_gsb.h:22:22: error: apr_hash.h: No such file or directory In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:43, from 
/usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_gsb.h:25: error: expected specifier-qualifier-list before ‘apr_file_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:44, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_unicode.h:25: error: expected specifier-qualifier-list before ‘apr_file_t’ In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:46, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/msc_crypt.h:34: error: expected ‘)’ before ‘*’ token In file included from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../standalone/api.h:23, from /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:28: /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:48:23: error: ap_config.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:49:21: error: apr_md5.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:50:25: error: apr_strings.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/../../apache2/modsecurity.h:54:22: error: http_log.h: No such file or directory /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:938: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:938: error: too many arguments to function ‘ConvertNgxStringToUTF8’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:942: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:944: error: ‘ngx_http_modsecurity_ctx_t’ has no member named ‘req’ /usr/local/src/john/ModSecurity-master/nginx/modsecurity/ngx_http_modsecurity.c:952: error: ‘modsecurity_read_body_cb’ undeclared (first use in this function) make[1]: *** [objs/addon/modsecurity/ngx_http_modsecurity.o] Error 1 make[1]: Leaving directory `/usr/local/src/john/nginx-1.2.5' make: *** [build] Error 2 Note : I'm using nginx as the only webserver and i do not have apache installed. OS : Centos 6 64bit How can i solve this problem And do you have another easy way to install modsecurity with nginx ?

    Read the article

  • Globe SSL with NGINX SSL certificate problem, please help

    - by PartySoft
    I have a big problem with installing a certificat for nginx (same happends with apache though) I have 3 files __domain_com.crt __domain_com.ca-bundle and ssl.key. I tried to append cat __domain_com.crt __leechpack_com.ca-bundle bundle.crt but if I do it like this i get an error: [emerg]: SSL_CTX_use_certificate_chain_file("/etc/nginx/__leechpack_com.crt") failed (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib) And that's because the delimiters of the certificates arren't separated. ZqTjb+WBJQ== -----END CERTIFICATE----------BEGIN CERTIFICATE----- MIIE6DCCA9CgAwIBAgIQdIYhlpUQySkmKUvMi/gpLDANBgkqhkiG9w0BAQUFADBv If i separate them with an enter between certificated it will at least start but i will get the same warning from Firefox: This Connection is Untrusted You have asked Firefox to connect securely to domain.com, but we can't confirm that your connection is secure. The concatenate solution it is given by Globe SSL and the NGINX site but it doesn't work. I think the bundle is ignored though. http://customer.globessl.com/knowledgebase/55/Certificate-Installation--Nginx.html http://nginx.org/en/docs/http/configuring_https_servers.html#chains%20http://wiki.nginx.org/NginxHttpSslModule if i do openssl s_client -connect down.leechpack.com:443 CONNECTED(00000003) depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=27:certificate not trusted verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com i:/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA 1 s:/C=US/O=Globe Hosting, Inc./OU=GlobeSSL DV Certification Authority/CN=GlobeSSL CA i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root --- Server certificate -----BEGIN CERTIFICATE----- MIIFQzCCBCugAwIBAgIQRnpCmtwX7z7GTla0QktE6DANBgkqhkiG9w0BAQUFADBl MQswCQYDVQQGEwJSTzEuMCwGA1UEChMlR0xPQkUgSE9TVElORyBDRVJUSUZJQ0FU SU9OIEFVVEhPUklUWTEmMCQGA1UEAxMdR0xPQkUgU1NMIERvbWFpbiBWYWxpZGF0 ZWQgQ0EwHhcNMTAwMjExMDAwMDAwWhcNMTEwMjExMjM1OTU5WjCBjTEhMB8GA1UE CxMYRG9tYWluIENvbnRyb2wgVmFsaWRhdGVkMSgwJgYDVQQLEx9Qcm92aWRlZCBi eSBHbG9iZSBIb3N0aW5nLCBJbmMuMSQwIgYDVQQLExtHbG9iZSBTdGFuZGFyZCBX aWxkY2FyZCBTU0wxGDAWBgNVBAMUDyoubGVlY2hwYWNrLmNvbTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAKX7jECMlYEtcvqVWQVUpXNxO/VaHELghqy/ Ml8dOfOXG29ZMZsKUMqS0jXEwd+Bdpm31lBxOALkj8o79hX0tspLMjgtCnreaker 49y62BcjfguXRFAaiseXTNbMer5lDWiHlf1E7uCoTTiczGqBNfl6qSJlpe4rYBtq XxBAiygaNba6Owghuh19+Uj8EICb2pxbJNFfNzU1D9InFdZSVqKHYBem4Cdrtxua W4+YONsfLnnfkRQ6LOLeYExHziTQhSavSv9XaCl9Zqzm5/eWbQqLGRpSJoEPY/0T GqnmeMIq5M35SWZgOVV10j3pOCS8o0zpp7hMJd2R/HwVaPCLjukCAwEAAaOCAcQw ggHAMB8GA1UdIwQYMBaAFB9UlnKtPUDnlln3STFTCWb5DWtyMB0GA1UdDgQWBBT0 8rPIMr7JDa2Xs5he5VXAvMWArjAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIw ADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwVQYDVR0gBE4wTDBKBgsr BgEEAbIxAQICGzA7MDkGCCsGAQUFBwIBFi1odHRwOi8vd3d3Lmdsb2Jlc3NsLmNv bS9kb2NzL0dsb2JlU1NMX0NQUy5wZGYwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDov 
L2NybC5nbG9iZXNzbC5jb20vR0xPQkVTU0xEb21haW5WYWxpZGF0ZWRDQS5jcmww dwYIKwYBBQUHAQEEazBpMEEGCCsGAQUFBzAChjVodHRwOi8vY3J0Lmdsb2Jlc3Ns LmNvbS9HTE9CRVNTTERvbWFpblZhbGlkYXRlZENBLmNydDAkBggrBgEFBQcwAYYY aHR0cDovL29jc3AuZ2xvYmVzc2wuY29tMCkGA1UdEQQiMCCCDyoubGVlY2hwYWNr LmNvbYINbGVlY2hwYWNrLmNvbTANBgkqhkiG9w0BAQUFAAOCAQEAB2Y7vQsq065K s+/n6nJ8ZjOKbRSPEiSuFO+P7ovlfq9OLaWRHUtJX0sLntnWY1T9hVPvS5xz/Ffl w9B8g/EVvvfMyOw/5vIyvHq722fAAC1lWU1rV3ww0ng5bgvD20AgOlIaYBvRq8EI 5Dxo2og2T1UjDN44GOSWsw5jetvVQ+SPeNPQLWZJS9pNCzFQ/3QDWNPOvHqEeRcz WkOTCqbOSZYvoSPvZ3APh+1W6nqiyoku/FCv9otSCtXPKtyVa23hBQ+iuxqIM4/R gncnUKASi6KQrWMQiAI5UDCtq1c09uzjw+JaEzAznxEgqftTOmXAJSQGqZGd6HpD ZqTjb+WBJQ== -----END CERTIFICATE----- subject=/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com issuer=/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA --- No client certificate CA names sent --- SSL handshake has read 3313 bytes and written 343 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES256-SHA Session-ID: 5F9C8DC277A372E28A4684BAE5B311533AD30E251369D144A13DECA3078E067F Session-ID-ctx: Master-Key: 9B531A75347E6E7D19D95365C1208F2ED37E4004AA8F71FC614A18937BEE2ED9F82D58925E0B3931492AD3D2AA6EFD3B Key-Arg : None Start Time: 1288618211 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) ---
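
    [Editor's note] As an aside (not from the post): before pointing nginx at the concatenated bundle, it can help to confirm that the file really parses as a sequence of certificates and that they appear in the expected order (server certificate first, then the intermediates). A small Java sketch using the standard CertificateFactory API; the file name is a placeholder.

        import java.io.FileInputStream;
        import java.security.cert.Certificate;
        import java.security.cert.CertificateFactory;
        import java.security.cert.X509Certificate;
        import java.util.Collection;

        public class CheckBundle {
            public static void main(String[] args) throws Exception {
                // Reads a file containing one or more PEM certificates back to back
                // and prints the subject and issuer of each one, in order.
                CertificateFactory cf = CertificateFactory.getInstance("X.509");
                try (FileInputStream in = new FileInputStream("bundle.crt")) {
                    Collection<? extends Certificate> certs = cf.generateCertificates(in);
                    for (Certificate c : certs) {
                        X509Certificate x = (X509Certificate) c;
                        System.out.println("subject: " + x.getSubjectX500Principal());
                        System.out.println("issuer:  " + x.getIssuerX500Principal());
                    }
                }
            }
        }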

    Read the article

  • Php pdo_dblib - cannot find/unable to load freetds

    - by MaxPowers
    Self-hosted box, RHEL 6 PHP 5.3.3 PDO installed freetds installed pdo_dblib - so far no luck installing My goal is to use PDO with sybase. Attempting to install pdo_dblib from the appropriate version php source code. I have tried a variety of methods and searched quite a bit for help on this topic, but have yet to be successful. Method 1 Install freetds $ ./configure $ make $ su root Password: $ make install This is successful Install pdo_dblib inside the /ext/pdo_dblib folder: $ phpize $ ./configure $ make $ make test Error output: PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0 That doesn't look good...I researched this and found an interesting hack for this here. But changing pdo.ini to pdo_0.ini was not the solution, as I still got the same errors on make test. $ su $ make install Output: Installing shared extensions: /usr/lib64/php/modules/ That seems strange...and no, it doesn't actually install (not showing up on phpinfo after apache restart). Method 2 Install freetds following the instructions exactly, i add the prefix $ ./configure --prefix=/usr/local/freetds $ make $ su root Password: $ make install This is successful Install pdo_dblib inside the /ext/pdo_dblib folder: $ phpize $ ./configure --with-sybase=/usr/local/freetds This produces the following error at the bottom of the output ... checking for PDO_DBLIB support via FreeTDS... yes, shared configure: error: Cannot find FreeTDS in known installation directories Method 3 freetds ./configure variation (including or not include the --prefix...) did not change the result of this so I'll skip it. Install pdo_dblib pecl extension following the method specified here. pecl download pdo_dblib tar -xzvf PDO_DBLIB-1.0.tgz Removed the line, <dep type=”ext” rel=”ge” version=”1.0?>pdo</dep> Saved the package.xml file, and moved it in to the PDO_DBLIB directory. mv package.xml ./PDO_DBLIB-1.0 Navigated to the PDO_DBLIB directory, then installed the package from the directory. cd ./PDO_DBLIB-1.0 pecl install package.xml But, this command gives me the following error output, same as Method 2. checking for PDO_DBLIB support via FreeTDS... yes, shared configure: error: Cannot find FreeTDS in known installation directories ERROR: `/home/sybase/Install_items/pecl_pdo_dblib/PDO_DBLIB-1.0/configure' failed
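    For what it is worth, the "undefined symbol: php_pdo_register_driver" warnings usually mean the standalone build does not match the PDO core of the running PHP, and "Cannot find FreeTDS" usually means configure was pointed at the wrong switch or prefix; for the phpize'd ext/pdo_dblib the relevant option is --with-pdo-dblib, not --with-sybase. A sketch under those assumptions, reusing the paths from the post (phpize must come from the php-devel package matching the installed PHP):

        # FreeTDS into a known prefix
        cd freetds-*                        # unpacked FreeTDS source tree
        ./configure --prefix=/usr/local/freetds
        make && sudo make install

        # pdo_dblib from the source tree that matches the installed PHP 5.3.3
        cd php-5.3.3/ext/pdo_dblib
        phpize
        ./configure --with-pdo-dblib=/usr/local/freetds
        make && sudo make install

        # load it and restart Apache
        echo 'extension=pdo_dblib.so' | sudo tee /etc/php.d/pdo_dblib.ini
        sudo service httpd restart
        php -m | grep -i pdo_dblib          # should now list the driver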

    Read the article

  • Installing OpenLDAP on Fedora 12: ldap_bind: Invalid credentials (49)

    - by Arcturus
    Hello. I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is: ldap_bind: Invalid credentials (49) I've been following these tutorials, but none of them helped me: http://www.howtoforge.com/openldap_fedora7 http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html http://www.howtoforge.com/linux_ldap_authentication http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html http://www.openldap.org/doc/admin24/quickstart.html First, some components were already installed, and I installed these with yum: yum install openldap-servers openldap-devel Then, I created a basic slapd.conf file in /etc/openldap: database bdb suffix "dc=sniejana-sandbox,dc=com" rootdn "cn=root,dc=sniejana-sandbox,dc=com" rootpw {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom directory /var/lib/ldap/sniejana-sandbox.com I obtained the rootpw with this command: slappasswd -s changeme I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user. I found two ldap.conf files, one in /etc and one in /etc/openldap. I don't know which is the right one. If I understood correctly, this file is to configure the client. I put this in both: HOST localhost BASE dc=sniejana-sandbox,dc=com I then ran the server with: service slapd start It said OK. Most of the tutorials above say to use the command ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error. ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W Enter LDAP password: ldap_bind: Invalid credentials (49) The same thing happens when trying to use ldapadd. I tried with an encrypted and unencrypted password in slapd.conf, it doesn't change anything. Adding a -x for simple authentication doesn't change anything either. netstat -ap confirms the server is listening: tcp 0 0 *:ldap *:* LISTEN 4148/slapd tcp 0 0 *:ldap *:* LISTEN 4148/slapd ps -ef|grep slapd confirms the process is running: ldap 4148 1 0 15:22 ? 00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap Running slaptest procudes config file testing succeeded. I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work: # extended LDIF # # LDAPv3 # base <> with scope baseObject # filter: (objectclass=*) # requesting: namingContext # # dn: # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 I'm running out of ideas. Am I missing something obvious?
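    For what it is worth, err=49 on the rootdn almost always means the rootpw slapd actually loaded is not the one being typed; on this OpenLDAP version, if /etc/openldap/slapd.d exists it is read instead of slapd.conf, so edits to slapd.conf can be silently ignored. A quick check, sketched with the DN from the post:

        ls -d /etc/openldap/slapd.d 2>/dev/null   # if this exists, slapd.conf may be ignored
        slappasswd -s changeme                    # regenerate the hash, paste it into the config actually in use
        service slapd restart

        # test the bind explicitly with simple authentication
        ldapwhoami -x -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        ldapsearch -x -D "cn=root,dc=sniejana-sandbox,dc=com" -W \
            -b "dc=sniejana-sandbox,dc=com" "(objectclass=*)"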

    Read the article

  • Tomcat SPNEGO authentication against Active Directory not working.

    - by Michael
    I'm trying to authenticate against AD using the http://spnego.sourceforge.net component with tomcat. I've created my SPN's "setspn.exe -A HTTP/servername SVCTomcat" & "setspn.exe -A HTTP/servername.fqdn.net SVCTomcat" I've created my krb5.conf & login.conf file and setup the filter in the web.xml ie. <filter-name>SpnegoHttpFilter</filter-name> <filter-class>net.sourceforge.spnego.SpnegoHttpFilter</filter-class> <param-name>spnego.allow.unsecure.basic</param-name> <param-value>false</param-value> <param-name>spnego.login.client.module</param-name> <param-value>spnego-client</param-value> <param-name>spnego.krb5.conf</param-name> <param-value>krb5.conf</param-value> <param-name>spnego.login.conf</param-name> <param-value>login.conf</param-value> <param-name>spnego.preauth.username</param-name> <param-value>SVCTomcat</param-value> <param-name>spnego.preauth.password</param-name> <param-value>Pasword</param-value> <param-name>spnego.login.server.module</param-name> <param-value>spnego-server</param-value> <param-name>spnego.prompt.ntlm</param-name> <param-value>false</param-value> <param-name>spnego.logger.level</param-name> <param-value>2</param-value> Note i've stripped extraneous tags from this, so it's not the actual XML. When i go to a page protected by this filter i get this in the catalina logfile. 25-Mar-2010 12:41:26 org.apache.catalina.startup.Catalina start INFO: Server startup in 4615 ms 25-Mar-2010 12:41:47 net.sourceforge.spnego.SpnegoHttpFilter doFilter FINE: principal=SYSTEM@TESTDOMAIN And in the hello_spnego.jsp example on the website it just reports the name of the user tomcat is running as (SYSTEM), not the user i'm connecting with. It seems the author stopped halfway through his debugging page, so i've no areas to look in other than to triple check my config. Any ideas?
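    One thing worth double-checking here is the JAAS side: "principal=SYSTEM@TESTDOMAIN" looks like the filter fell back to the pre-auth account instead of negotiating for the browser user, which happens when the browser never sends a Kerberos ticket (site not in the intranet/trusted zone, or accessed by IP) or when login.conf does not define the modules named in web.xml. A sketch of the login.conf the filter's documentation describes; the file location is an assumption:

        cat > $CATALINA_BASE/conf/login.conf <<'EOF'
        spnego-client {
            com.sun.security.auth.module.Krb5LoginModule required;
        };
        spnego-server {
            com.sun.security.auth.module.Krb5LoginModule required
                storeKey=true
                isInitiator=false;
        };
        EOF

        # client side: the site must be eligible for Kerberos, e.g. in Firefox
        #   about:config -> network.negotiate-auth.trusted-uris = http://servername.fqdn.net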

    Read the article

  • svn 503 error when committing new files

    - by philipp
    I am struggling with a strange error when I try to commit to my repository. I have a V-server with Webmin installed on it. Via Webmin I installed an svn module and created repositories, and everything worked fine until three days ago. Trying to commit brings the following error: Commit failed (details follow): Server sent unexpected return value (503 Service unavailable) in response to PROPFIND request for '/svn/rle/!svn/wrk/a1f963a7-0a33-fa48-bfde-183ea06ab958/RLE/.htaccess' Server sent unexpected return value (503 Service unavailable) in response to PROPFIND request for '/svn/rle/RLE/.htaccess' I googled everywhere and found only very few solutions. One indicated that a wrong error document is set, another dealt with the problem that file names might cause this error, and last but not least a wrong proxy configuration in the local svn config could be the reason. After trying all of the suggested solutions I got nowhere. Only after a server reboot was there a small difference in the error message, telling me that the server was not able to move a temp file because the operation was not permitted. So I also checked the permissions of the svn directory, but again with no success. An svn update then restored the "normal" error from above and nothing has changed since then. The only change I made on the server, and I guess this could be the reason why svn no longer works, was to install the php5_mysql module for Apache via apt-get install php5_mysql. At the moment I have no idea where to look. I don't know if the problem is on my server or in my repository and I would be glad to get any hint to solve this. Thanks in advance, greetings, philipp
    error log: [Tue Oct 25 19:23:02 2011] [error] [client 217.50.254.18] Could not create activity /svn/rle/!svn/act/d8dd436f-d014-f047-8e87-01baac46a593. [500, #0] [Tue Oct 25 19:23:02 2011] [error] [client 217.50.254.18] could not begin a transaction [500, #1] [Tue Oct 25 19:24:21 2011] [error] [client 217.50.254.18] Could not create activity /svn/rle/!svn/act/adac52c2-6f46-f540-b218-2f2ff03b51a4. [500, #0]
    http.conf: <Location /svn> DAV svn SVNParentPath /home/xxx/svn AuthType Basic AuthName xxx.de AuthUserFile /home/xxx/etc/svn.basic.passwd Require valid-user AuthzSVNAccessFile /home/xxx/etc/svn-access.conf Satisfy Any ErrorDocument 404 default RewriteEngine off </Location>
    The permissions for the repository directory are rwxrwxrwx (0777). The directory /svn/rle/!svn/act/adac52c2-6f46-f540-b218-2f2ff03b51a4 does not exist on the server; I think this is part of the repository. I just want to add that I tried to reach the repository via browser and it worked, I could see everything, so the error only occurs when I try to commit new files. I also created a second repository and tried to commit files there, which gave me the same error.
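    A 503 on PROPFIND together with "could not begin a transaction" in the error log usually means mod_dav_svn cannot write inside the repository (the db/ area and the activities store), which also fits "reading works, committing fails". A hedged sketch, assuming Apache runs as www-data and using the parent path from the config:

        # hand the repository to the user Apache/mod_dav_svn runs as
        sudo chown -R www-data:www-data /home/xxx/svn/rle
        sudo chmod -R u+rwX /home/xxx/svn/rle

        # check the repository itself and watch the log while retrying the commit
        sudo -u www-data svnadmin verify /home/xxx/svn/rle
        sudo tail -f /var/log/apache2/error.log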

    Read the article

  • IIS not listening on port 80

    - by user57467
    We have Server 2003 and ISA 2004 with IIS 6 on the same machine. Everything worked well until yesterday, when we tried to make a new rule in ISA... but that is a long story... Unfortunately something happened to our intranet site. Our site is on port 80, but if we try to open it on the client machines we get an error page (the error page comes from our provider): 403 Forbidden; Remote host not listening, the remote host is not prepared to accept the connection request. On the server I can open the site on port 80. If I change the port number in IIS and try to open the site with that port, it works well. I tried shutting down IIS and starting Apache with a simple page. On the server it works, but on the clients the problem is the same, so I think this is not an IIS-related problem. In ISA we have a web publishing rule, with port 80, no auth. I'm pulling out my hair, please help. After uninstalling and reinstalling ISA, the sites worked well, until I configured the upstream proxy in the conf/network/web chaining menu, and then everything went wrong again... So something is wrong with the web-proxy / upstream function... (all my HTTP requests are forwarded to my upstream proxy). That was set up a long time ago... but a few days ago something went wrong... I think maybe our ISP changed something; tomorrow I will try to figure it out... But one more thing: I made a new rule before the default rule in the conf/network/web chaining menu: every request that goes to our server is not redirected, everything else is redirected to the upstream server... So if the request goes to our server (our site) it is handled locally, and if not it goes to the upstream proxy, and voilà... I thought... But unfortunately: our website works well, but the internet is extremely slow :( Maybe with a single adapter I can't do this? Do I have to handle all requests locally, or send all of them upstream? Can't I filter it?
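    Before changing more ISA rules it may be worth confirming, on the server itself, what is actually bound to port 80 and which addresses http.sys listens on. httpcfg is part of the Server 2003 Support Tools, and the PID below is a placeholder, so treat this as a sketch rather than a recipe:

        rem which process owns port 80
        netstat -ano | findstr :80
        rem map that PID (replace 1234 with the value from netstat) to inetinfo/w3wp (IIS) or wspsrv (ISA)
        tasklist /fi "PID eq 1234"
        rem which IP addresses http.sys is restricted to, if any
        httpcfg query iplisten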

    Read the article

  • Installing multiple php versions plus extensions on freebsd

    - by jgtumusiime
    I'm a currently learning how to work with freebsd. Lately I have been trying to run multiple php versions along with their respective packages. However, I seem to be running into issues while making installations. The default location for my php installation is /usr/local/etc/, however I want to be able to install php5.2, php5.3 and php5.4 in /usr/local/etc/php52, /usr/local/etc/php53 and /usr/local/etc/php54 respectively. Using ports I simply achieved this by doing cd /usr/ports/lang/php5x && make PREFIX="/usr/local/etc/php5x" install clean. The problem now is: How do I do the same for extensions of all my PHP versions? When I try installing php-extensions like so: cd /usr/ports/lang/php5x-extension && make PREFIX="/usr/local/etc/php5x/lib/php" install clean, I get this error ... ===> PHPizing for php53-bcmath-5.3.17 env: /usr/local/bin/phpize: No such file or directory *** Error code 127 Stop in /usr/ports/math/php53-bcmath. *** Error code 1 Stop in /usr/ports/lang/php53-extensions. My PHPize is located in /usr/local/etc/php5x/bin/phpize So how do I get make or whatever to look for phpize in the right path? Is there a cleaner, may be simpler way of maintaining multiple php installations? I need to achieve this because of compatibility issues from some legacy code that runs on 5.2 and breaks on 5.3. Thank you. ================= So I successfully installed an configured freebsd jail and I would like to install software within my jail but I cannot connect to the network. Here is my rc.conf jail_enable="YES" # Set to NO to disable starting of any jails jail_list="mambo2" # Space separated list of names of jails jail_mambo2_rootdir="/usr/jails/j01" # jail's root directory jail_mambo2_hostname="mambo2.ug" # jail's hostname jail_mambo2_ip="192.168.100.174" # jail's IP address jail_mambo2_devfs_enable="YES" # mount devfs in the jail jail_mambo2_devfs_ruleset="mambo2_ruleset" # devfs ruleset to apply to jail here is my jail ifconfig output mambo2# ifconfig rl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500 options=8<VLAN_MTU> ether 00:c1:28:00:48:db media: Ethernet autoselect (100baseTX <full-duplex>) status: active plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> metric 0 mtu 1500 lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384 mambo2# I created a /etc/resolv.conf for nameservers mambo2# cat /etc/resolv.conf nameserver 192.168.100.251 nameserver 8.8.8.8 mambo2# Here is a list of jails running [root@mambo /usr/home/jtumusiime]# jls JID IP Address Hostname Path 5 192.168.100.174 mambo2.ug /usr/jails/j01 my host has 4 ip addresses, 3 public and one private: 192.168.100.173 I tried creating a jail using ezjail and this does not work out. [root@mambo /usr/home/jtumusiime]# ezjail-admin update -p -i Error: Cannot find your copy of the FreeBSD source tree in . Consider using 'ezjail-admin install' to create the base jail from an ftp server. [root@mambo /usr/home/jtumusiime]# I have an updated copy of freebsd 7.1 source in /usr/src/ and I did #make buildworld while building the first jail mambo2 Here is an excerpt of ouput of ezjail-admin install ... 221 Goodbye. Trying 193.162.146.4... Connected to ftp.freebsd.org. 220 ftp.beastie.tdk.net FTP server (Version 6.00LS) ready. 331 Guest login ok, send your email address as password. 230 Guest login ok, access restrictions apply. Remote system type is UNIX. Using binary mode to transfer files. 200 Type set to I. 550 pub/FreeBSD-Archive/old-releases/i386/7.1-RELEASE/base: No such file or directory. 
221 Goodbye. Could not fetch base from ftp.freebsd.org. Maybe your release (7.1-RELEASE) is specified incorrectly or the host ftp.freebsd.org does not provide that release build. Use the -r option to specify an existing release or the -h option to specify an alternative ftp server. Querying your ftp-server... The ftp server you specified (ftp.freebsd.org) seems to provide the following builds: Trying 193.162.146.4... total 10 drwxrwxr-x 13 1006 1006 512 Feb 20 2011 8.2-RELEASE drwxrwxr-x 13 1006 1006 512 Apr 10 2012 8.3-RELEASE lrwxr-xr-x 1 1006 1006 16 Jan 7 2012 9.0-RELEASE -> i386/9.0-RELEASE drwxrwxr-x 7 1006 1006 1024 Feb 19 2012 ISO-IMAGES -rw-rw-r-- 1 1006 1006 637 Nov 23 2005 README.TXT drwxrwxr-x 5 1006 1006 512 Nov 2 02:59 i386 I do not want to upgrade my FreeBSD installation. I have googled around, but all to no avail.
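    On the phpize problem: the extension ports call ${LOCALBASE}/bin/phpize, which no longer exists once PHP lives under /usr/local/etc/php5x, so the build has to be told where the right phpize/php-config are. The exact knob depends on the ports framework in use, so the following is only a sketch; check /usr/ports/Mk/bsd.php.mk first:

        # see which variables the framework honours (PHPBASE, etc.)
        grep -nE 'phpize|PHPBASE' /usr/ports/Mk/bsd.php.mk

        # if PHPBASE is honoured, point it at the wanted tree
        cd /usr/ports/math/php53-bcmath
        make PHPBASE=/usr/local/etc/php53 PREFIX=/usr/local/etc/php53 install clean

        # blunt fallback: satisfy the hardcoded path temporarily, remove the link afterwards
        ln -s /usr/local/etc/php53/bin/phpize /usr/local/bin/phpize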

    Read the article

  • How to setup Hadoop cluster so that it accepts mapreduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle virtual box). Each of them has Cloudera with Hadoop (distribution c3u4) installed and serves as a node of Hadoop cluster. One of those 4 nodes is master node running namenode and jobtracker, others are slave nodes. Normally I use this cluster from local network for testing. However when I try to access it from another network I cannot send any jobs to it. The computer running Hadoop cluster has public IP and can be reached over internet for another services. For example I am able to get HDFS (namenode) administration site and map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from remote network. Also it is possible to use Hue. Ports 8020 and 8021 are both allowed. What is blocking my map/reduce job submits from reaching the cluster? Is there some setting that I must change first in order to be able to submit map/reduce jobs remotely? Here is my mapred-site.xml file: <configuration> <property> <name>mapred.job.tracker</name> <value>master:8021</value> </property> <!-- Enable Hue plugins --> <property> <name>mapred.jobtracker.plugins</name> <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value> <description>Comma-separated list of jobtracker plug-ins to be activated. </description> </property> <property> <name>jobtracker.thrift.address</name> <value>0.0.0.0:9290</value> </property> </configuration> And this is in /etc/hosts file: 192.168.1.15 master 192.168.1.14 slave1 192.168.1.13 slave2 192.168.1.9 slave3
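    For what it is worth, submitting a job needs more than ports 8020/8021: the remote client resolves the hostnames the cluster advertises ("master", the 192.168.1.x slaves) and later talks to DataNodes and TaskTrackers directly, which a single NATed public IP usually cannot provide. As a sketch, the remote client's configuration would point at a name that is routable from its network (cluster.example.com is an assumed name, with the ports forwarded to the master VM), and a VPN or SOCKS proxy into the 192.168.1.0/24 network is the usual way to make the remaining daemons reachable:

        <!-- client-side core-site.xml -->
        <property>
          <name>fs.default.name</name>
          <value>hdfs://cluster.example.com:8020</value>
        </property>

        <!-- client-side mapred-site.xml -->
        <property>
          <name>mapred.job.tracker</name>
          <value>cluster.example.com:8021</value>
        </property>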

    Read the article

  • Configuring OpenLDAP as a Active Directory Proxy

    - by vadensumbra
    We try to set up an Active Directory server for company-wide authentication. Some of the servers that should authenticate against the AD are placed in a DMZ, so we thought of using a LDAP-server as a proxy, so that only 1 server in the DMZ has to connect to the LAN where the AD-server is placed). With some googling it was no problem to configure the slapd (see slapd.conf below) and it seemed to work when using the ldapsearch tool, so we tried to use it in apache2 htaccess to authenticate the user over the LDAP-proxy. And here comes the problem: We found out the username in the AD is stored in the attribute 'sAMAccountName' so we configured it in .htaccess (see below) but the login didn't work. In the syslog we found out that the filter for the ldapsearch was not (like it should be) '(&(objectClass=*)(sAMAccountName=authtest01))' but '(&(objectClass=*)(?=undefined))' which we found out is slapd's way to show that the attribute do not exists or the value is syntactically wrong for this attribute. We thought of a missing schema and found the microsoft.schema (and the .std / .ext ones of it) and tried to include them in the slapd.conf. Which does not work. We found no working schemata so we just picked out the part about the sAMAccountName and build a microsoft.minimal.schema (see below) that we included. Now we get the more precise log in the syslog: Jun 16 13:32:04 breauthsrv01 slapd[21229]: get_ava: illegal value for attributeType sAMAccountName Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH base="ou=oraise,dc=int,dc=oraise,dc=de" scope=2 deref=3 filter="(&(objectClass=\*)(?sAMAccountName=authtest01))" Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH attr=sAMAccountName Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text= Using our Apache htaccess directly with the AD via LDAP works though. Anyone got a working setup? Thanks for any help in advance: slapd.conf: allow bind_v2 include /etc/ldap/schema/core.schema ... include /etc/ldap/schema/microsoft.minimal.schema ... backend ldap database ldap suffix "ou=xxx,dc=int,dc=xxx,dc=de" uri "ldap://80.156.177.161:389" acl-bind bindmethod=simple binddn="CN=authtest01,ou=GPO-Test,ou=xxx,dc=int,dc=xxx,dc=de" credentials=xxxxx .htaccess: AuthBasicProvider ldap AuthType basic AuthName "AuthTest" AuthLDAPURL "ldap://breauthsrv01.xxx.de:389/OU=xxx,DC=int,DC=xxx,DC=de?sAMAccountName?sub" AuthzLDAPAuthoritative On AuthLDAPGroupAttribute member AuthLDAPBindDN CN=authtest02,OU=GPO-Test,OU=xxx,DC=int,DC=xxx,DC=de AuthLDAPBindPassword test123 Require valid-user microsoft.minimal.schema: attributetype ( 1.2.840.113556.1.4.221 NAME 'sAMAccountName' SYNTAX '1.3.6.1.4.1.1466.115.121.1.15' SINGLE-VALUE )
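    "illegal value for attributeType sAMAccountName" and the mangled (?sAMAccountName=...) filter are what slapd prints when the attribute's schema carries no EQUALITY matching rule, so equality filters on it cannot be evaluated. If the minimal-schema route is kept, adding matching rules is the usual fix; a sketch using the same OID and syntax as in the post:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            EQUALITY caseIgnoreMatch
            SUBSTR caseIgnoreSubstringsMatch
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
            SINGLE-VALUE )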

    Read the article

  • Local, Multiple-Blog (ie Dashboard) Blogging Software as Alternative to Blogger [closed]

    - by Synetech inc.
    FOR RE-OPENING: I don’t see how it is “too localized”. Plenty of people like to run their own web-apps instead of relying on third-party services. If that were not true, then WordPress, phpBB, Apache, PHP, etc. would not be available for general use. As for “Internet audience at large”, I must have missed the part where it was a rule that you are only allowed to ask for help for things that applies to everyone else too; I thought you were allowed to ask for help. Besides, if someone knows of software that fulfills the question, then it is relevant to whomever would download it, and so is not only applicable to an “extraordinarily narrow situation”. (Besides, the reason that I was asking was because Google had announced that it was discontinuing FTP support for Blogger and so many people were affected—read NOT TOO LOCALIZED—and were trying to find alternatives.) Hi, I am trying to find software (for Windows, PHP, MySQL/SQLite/flat, free, open-source) to localize all of my software and service so that I can keep my files and host when needed from my own system instead of some remote computer. I’ve already selected things like web, FTP, and db servers. I’ve chosen forum and wiki software, as well as an RCS system. At this point, all I’m still looking for—actually, I still need to choose bug-tracking software, but besides that—is blogging software. I still use Blogger and am trying to find something that I can use to import my Blogger stuff and store on (and publish to) my home system. I have read of various blogging software including WordPress, MovableType, and TextPattern. The problem is that I am trying to find something that is like Blogger (which from what I can tell is not available on Google Code as open-source). What I specifically need is multiple-blog support. That is, multiple blogs ala the Blogger Dashboard, not multiple user accounts (although that is important as well). The closest thing that I have been able to find is using Wordpress categories to simulate multiple blogs, but that’s not really what I want. I want software that I can run locally that has a multi-blog dashboard like Blogger. Any ideas? Thanks a lot!

    Read the article

  • enabling gzip with htaccess...why is it hit or miss?

    - by adam-asdf
    I have shared hosting through Justhost. I use the HTML5 Boilerplate .htaccess (have tried other methods from here and there without luck) the compression part is as follows: <IfModule mod_deflate.c> # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/ <IfModule mod_setenvif.c> <IfModule mod_headers.c> SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding </IfModule> </IfModule> # Compress all output labeled with one of the following MIME-types <IfModule mod_filter.c> AddOutputFilterByType DEFLATE application/atom+xml \ application/javascript \ application/json \ application/rss+xml \ application/vnd.ms-fontobject \ application/x-font-ttf \ application/xhtml+xml \ application/xml \ font/opentype \ image/svg+xml \ image/x-icon \ text/css \ text/html \ text/plain \ text/x-component \ text/xml </IfModule> </IfModule> However, it isn't working—at least I don't think—My home page (html) isn't compressing, the CSS and some of the JS aren't gzipped. It is failing on HTML, CSS and JS. However, some things are (or were, who knows what it will look like when you check) gzipped. My domain is http://adaminfinitum.com/ What is weird is that the (Google) PageSpeed browser extension for Firefox (whatever the current version is [Nov. 2012]) gives me a 95% speed rating (and no warnings about compression), yet YSlow and Chrome developer tools both flag me about gzip, as does a tool I found on here while researching this. To reduce cookies I set up a subdomain on my site and I thought maybe that was it so I added an .htaccess there also, but no luck. To reduce http requests I embedded some of webfonts and images in CSS (HTML5 BP stipulates not to compress images, and apparently '.woff' files are already compressed) so I thought maybe that was it and I spent all day separating and asynchronously loading those portions (via Modernizr.load) but that hasn't helped either...if anything it made it worse due to increasing http requests (I realize speed scores of async resources may be misleading). Researching this, it seems to be a fairly common issue but I haven't found an explanation/solution. I don't think it is a MIME-type issue, I have quadruple checked (and thrice edited) my .htaccess files. My hosting company said they run Apache 2.2.22 and I have looked at everything I can find. What gives?
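    One plausible explanation for the hit-or-miss behaviour on a shared Apache 2.2 host: the whole AddOutputFilterByType list sits inside <IfModule mod_filter.c>, and if the host did not load mod_filter that block is skipped silently, even though the directive itself is available in 2.2 without that module. A hedged fallback that only depends on mod_deflate, plus a quick way to test from the command line:

        <IfModule mod_deflate.c>
          AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css \
                                        application/javascript application/json \
                                        application/xml image/svg+xml
        </IfModule>

        # should print "Content-Encoding: gzip" for HTML, CSS and JS URLs
        curl -sI -H 'Accept-Encoding: gzip,deflate' http://adaminfinitum.com/ | grep -i content-encoding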

    Read the article

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware: Sun Blade X6270 2* LSISAS1068E SAS controllers 2* Sun J4400 JBODs with 1 TB disks (24 disks per JBOD) Fedora Core 12 2.6.33 release kernel from FC13 (also tried with latest 2.6.31 kernel from FC12, same results) Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4 port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD. multipath { rr_min_io 100 uid 0 path_grouping_policy multibus failback manual path_selector "round-robin 0" rr_weight priorities alias somealias no_path_retry queue mode 0644 gid 0 wwid somewwid } I tried values of 50, 100, 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/O's are getting properly spread out. According to /proc/interrupts, the SAS controllers are using a "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller: echo 2 /proc/irq/24/smp_affinity echo 4 /proc/irq/26/smp_affinity Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so: taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle and all the other cores are waiting on I/O as one would expect. 
Cpu0 : 0.0%us, 1.0%sy, 0.0%ni, 91.2%id, 7.5%wa, 0.0%hi, 0.2%si, 0.0%st Cpu1 : 0.0%us, 0.8%sy, 0.0%ni, 93.0%id, 0.2%wa, 0.0%hi, 6.0%si, 0.0%st Cpu2 : 0.0%us, 0.6%sy, 0.0%ni, 94.4%id, 0.1%wa, 0.0%hi, 4.8%si, 0.0%st Cpu3 : 0.0%us, 7.5%sy, 0.0%ni, 36.3%id, 56.1%wa, 0.0%hi, 0.0%si, 0.0%st Cpu4 : 0.0%us, 1.3%sy, 0.0%ni, 85.7%id, 4.9%wa, 0.0%hi, 8.1%si, 0.0%st Cpu5 : 0.1%us, 5.5%sy, 0.0%ni, 36.2%id, 58.3%wa, 0.0%hi, 0.0%si, 0.0%st Cpu6 : 0.0%us, 5.0%sy, 0.0%ni, 36.3%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st Cpu7 : 0.0%us, 5.1%sy, 0.0%ni, 36.3%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st Cpu8 : 0.1%us, 8.3%sy, 0.0%ni, 27.2%id, 64.4%wa, 0.0%hi, 0.0%si, 0.0%st Cpu9 : 0.1%us, 7.9%sy, 0.0%ni, 36.2%id, 55.8%wa, 0.0%hi, 0.0%si, 0.0%st Cpu10 : 0.0%us, 7.8%sy, 0.0%ni, 36.2%id, 56.0%wa, 0.0%hi, 0.0%si, 0.0%st Cpu11 : 0.0%us, 7.3%sy, 0.0%ni, 36.3%id, 56.4%wa, 0.0%hi, 0.0%si, 0.0%st Cpu12 : 0.0%us, 5.6%sy, 0.0%ni, 33.1%id, 61.2%wa, 0.0%hi, 0.0%si, 0.0%st Cpu13 : 0.1%us, 5.3%sy, 0.0%ni, 36.1%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st Cpu14 : 0.0%us, 4.9%sy, 0.0%ni, 36.4%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st Cpu15 : 0.1%us, 5.4%sy, 0.0%ni, 36.5%id, 58.1%wa, 0.0%hi, 0.0%si, 0.0%st Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above I would expect something in the range of 2*1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!
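    Two cheap checks that often account for "missing" bandwidth in a setup like this: whether each HBA actually trained at x8 (a x4 link halves the 2 GB/s budget per controller) and whether all paths per JBOD are really active and evenly loaded. A sketch; the vendor ID 1000 selects the LSI controllers:

        # negotiated PCIe width per HBA: look for "Width x8" in LnkSta
        lspci -vv -d 1000: | grep -E 'LnkCap|LnkSta'

        # are all paths present and active for every multipath device?
        multipath -ll | less

        # per-disk throughput while the dd's run, to spot a lagging PHY/port
        iostat -xm 5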

    Read the article

  • Ubuntu web server 11.10 ftp/server issue

    - by Nate
    I was wondering if I could get some help with FTP; at least I'm pretty sure it has to do with FTP, although it could be something else, I'm not 100% sure. Now, fair warning, I'm no Ubuntu expert, I'm pretty new at this. Anyway, I've attempted to build a web server to test PHP and whatnot for a site I'm building. Everything works: the PHP, the SQL, etc. By the way, I built this in VMware, so it's virtual, over a network, so I can access stuff from anywhere. I'm in a college right now, so yeah. The one problem I have is this. I go into the terminal and do ifconfig to find my IP. I get it, go to a browser on a different machine and type that IP in. I get the "index of/" page, where I can browse the website I'm making. I can click through folders and whatnot, and I can click on things and they open up. Now let's say I'm working on my desktop, open up an FTP client and drag and drop something into there, then go to the IP in the browser again and try to open it. I either get "Server error The website encountered an error while retrieving http://my_server_ip/phpinfo.php. It may be down for maintenance or configured incorrectly. Here are some suggestions: Reload this webpage later." or "Forbidden You don't have permission to access /html.html on this server." But let's say I create the file on the server itself and try: bam, magic, it works. I'm sure I set the permissions to let everyone open and view the files, but maybe I didn't? I'm not sure, and this is where I was hoping I could get some help. By the way, I followed a tutorial on changing the www folder (Apache) from /var/www to /home/"user"/www. I can't recall how I did that, but it's there and my FTP goes to the /home/"user"/www folder. Any and all help is appreciated. Like I said, I'm really new to this, but I do enjoy building these servers and learning how they work; making this web server isn't a project for a class, it's just assisting me in testing stuff for another class and possibly other websites later on down the road. Anyway, to anyone who decides to help, thanks so much, I'd really appreciate it. Nate. P.S. I'm using Ubuntu 11.10 desktop edition with a LAMP server.
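    The "works when created on the server, Forbidden/500 when uploaded over FTP" pattern usually comes down to ownership and permissions: uploads land owned by the FTP user with a restrictive umask, and Apache (www-data) cannot read them. A sketch, assuming the docroot really is /home/youruser/www and the FTP daemon is vsftpd (both are assumptions):

        # let apache traverse and read everything under the docroot
        sudo chown -R youruser:www-data /home/youruser/www
        sudo find /home/youruser/www -type d -exec chmod 755 {} \;
        sudo find /home/youruser/www -type f -exec chmod 644 {} \;

        # make future uploads readable as well (vsftpd example): in /etc/vsftpd.conf
        #   local_umask=022
        sudo service vsftpd restart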

    Read the article

  • Properly Configured Rsyslog on CentOS

    - by Gaia
    I'm trying to configure Rsyslog 5.8.10 on CentOS 6.4 to send Apache's error and access logs to a remote server. It's working, but I have a couple questions. A) I would like to use as few queues (and resources) as possible. I send error logs to server A, send access logs to server A, send both logs in one stream to server B. Should I specify one queue per external service (2 queues) or one queue per stream (3 queues, as I have now)? This is what I have: $ActionResumeInterval 10 $ActionQueueSize 100000 $ActionQueueDiscardMark 97500 $ActionQueueHighWaterMark 80000 $ActionQueueType LinkedList $ActionQueueFileName logglyaccessqueue $ActionQueueCheckpointInterval 100 $ActionQueueMaxDiskSpace 1g $ActionResumeRetryCount -1 $ActionQueueSaveOnShutdown on $ActionQueueTimeoutEnqueue 10 $ActionQueueDiscardSeverity 0 if $syslogtag startswith 'www-access' then @@logs-01.loggly.com:514;logglyaccess $ActionResumeInterval 10 $ActionQueueSize 100000 $ActionQueueDiscardMark 97500 $ActionQueueHighWaterMark 80000 $ActionQueueType LinkedList $ActionQueueFileName logglyerrorsqueue $ActionQueueCheckpointInterval 100 $ActionQueueMaxDiskSpace 1g $ActionResumeRetryCount -1 $ActionQueueSaveOnShutdown on $ActionQueueTimeoutEnqueue 10 $ActionQueueDiscardSeverity 0 if $syslogtag startswith 'www-errors' then @@logs-01.loggly.com:514;logglyerrors $DefaultNetstreamDriverCAFile /etc/syslog.papertrail.crt # trust these CAs $ActionSendStreamDriver gtls # use gtls netstream driver $ActionSendStreamDriverMode 1 # require TLS $ActionSendStreamDriverAuthMode x509/name # authenticate by hostname $ActionResumeInterval 10 $ActionQueueSize 100000 $ActionQueueDiscardMark 97500 $ActionQueueHighWaterMark 80000 $ActionQueueType LinkedList $ActionQueueFileName papertrailqueue $ActionQueueCheckpointInterval 100 $ActionQueueMaxDiskSpace 1g $ActionResumeRetryCount -1 $ActionQueueSaveOnShutdown on $ActionQueueTimeoutEnqueue 10 $ActionQueueDiscardSeverity 0 *.* @@logs.papertrailapp.com:XXXXX;papertrailstandard & ~ B) Does a queue block get used over and over by every send action that follows it or only by the first one or only until it encounters a send followed by a discard action (~)? C) How do I reset a queue block so that an upcoming send action does not use a queue at all? D) Does a TLS block get used over and over by every send action that follows it or only by the first one or only until it encounters a send followed by a discard action (~)? E) How do I reset a TLS block so that an upcoming send action does not use TLS at all? F) If I run rsyslog -N1 I get: rsyslogd -N1 rsyslogd: version 5.8.10, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: WARNING: rsyslogd is running in compatibility mode. Automatically generated config directives may interfer with your rsyslog.conf settings. We suggest upgrading your config and adding -c5 as the first rsyslogd option. rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad immark rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: MarkMessagePeriod 1200 rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imuxsock rsyslogd: End of config validation run. Bye. Where do I place the -c5 so that it doesnt run in compatibility mode anymore?
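    On question F: with the CentOS 6 packages the daemon flags live in /etc/sysconfig/rsyslog, so that is where -c5 goes. On A-E, in the legacy config language the $ActionQueue... parameters apply only to the next action and are then reset, which is why each send needs its own block, while the stream-driver settings stay in effect until changed; an action can be made queue-less by setting $ActionQueueType Direct before it. Both points are worth verifying against the 5.8 documentation. Sketch:

        # /etc/sysconfig/rsyslog
        SYSLOGD_OPTIONS="-c5"

        service rsyslog restart
        rsyslogd -c5 -N1      # validate again, now without the compatibility-mode warning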

    Read the article

  • Guests can't access KVM host server by name although nslookup and dig return the correct record

    - by user190196
    So I have a KVM host that also runs an apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.12.1). The guests can successfully access the yum repos on the host by IP address, so for example curl 192.168.122.1/repo1 returns the content without problems. But I'd like to have the guests be able to reach the web server on the host by name rather IP address. I added the desired name record to the host's /etc/hosts file and libvirt's dnsmasq service seems to be serving that correctly to the guests since nslookup and dig successfully resolve the name on the guests: [root@localhost ~]# nslookup repo Server: 192.168.122.1 Address: 192.168.122.1#53 Name: repo Address: 192.168.122.1 [root@localhost ~]# dig repo ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;repo. IN A ;; ANSWER SECTION: repo. 0 IN A 192.168.122.1 ;; Query time: 0 msec ;; SERVER: 192.168.122.1#53(192.168.122.1) ;; WHEN: Tue Sep 17 02:10:46 2013 ;; MSG SIZE rcvd: 38 But curl/ping/etc still fail: [root@localhost ~]# curl repo curl: (6) Couldn't resolve host 'repo' While a request via ip address works: [root@localhost ~]# curl 192.168.122.1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <html> <head> <title>Index of /</title> [...] Same with ping: [root@localhost ~]# ping repo ping: unknown host repo [root@localhost ~]# ping 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms 64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms 64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms ^C --- 192.168.122.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2298ms rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms I tried adding repo 192.168.122.1 to the guests' /etc/hosts files but still no dice. Also tried changing guests' /etc/nsswitch.conf with both: hosts: files dns and hosts: dns files I've read the relevant libvirt documentation and I'm not sure where else to learn more about this and be able to move forward with it.
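    Since nslookup and dig query DNS directly while curl and ping go through glibc's resolver, it helps to test the path curl actually uses and to make sure the guest's /etc/resolv.conf still points at 192.168.122.1 (DHCP renewals can rewrite it). A persistent alternative is to publish the name from libvirt's own dnsmasq instead of the host's /etc/hosts; both are sketched below:

        # inside the guest: same lookup path as curl/ping
        getent hosts repo
        cat /etc/resolv.conf

        # on the host: add the record to the default network, then restart the network
        virsh net-edit default
        #   <dns>
        #     <host ip='192.168.122.1'>
        #       <hostname>repo</hostname>
        #     </host>
        #   </dns>
        virsh net-destroy default && virsh net-start default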

    Read the article


< Previous Page | 529 530 531 532 533 534 535 536 537 538 539 540  | Next Page >