Search Results

Search found 10610 results on 425 pages for 'apache hadoop'.

  • How do I find out original ./configure compile time directives for Apache?

    - by evilknot
    I've inherited an abandoned server, and I need to upgrade Apache/OpenSSL. No one knows the original configure options that were used to compile it, and the original admin is long gone. PHP is not compiled in, so phpinfo() is out. httpd -l and httpd -V do some good, but not enough to reconstruct the whole ./configure line. I need to get all of the arguments that were used to build it, including the "--enable" parameters, etc. Where does phpinfo() get this from? Is there another way to find it? Thanks!
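
    One place worth checking, sketched below: Apache's own configure script records the exact command line it was invoked with in a file called config.nice, which the standard build also installs under <prefix>/build/. The prefix path here is an assumption; adjust it to wherever this httpd actually lives.

        # Show compiled-in settings (prefix, MPM, key paths)
        httpd -V
        # List statically compiled modules, which reflect --enable-* choices
        httpd -l
        # config.nice, if present, is the original ./configure line verbatim
        cat /usr/local/apache2/build/config.nice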

  • Open Source Survey: Oracle Products on Top

    - by trond-arne.undheim
    Oracle continues to work with the open source community to bring the most innovative and productive software to market. Oracle products received the most votes in several key categories of the 2010 Linux Journal Reader's Choice Awards. With over 12,000 technologists reporting, these products earned top spots:
    Best Office Suite: OpenOffice.org
    Best Single Office Program: OpenOffice.org Writer
    Best Database: MySQL
    Best Virtualization Solution: VirtualBox
    "As the leading open source technology and service provider, Oracle continues to work with the community stakeholders to rapidly innovate many open source products for use in fully tested production environments," says Edward Screven, Oracle's chief corporate architect. "Supporting open source is important to Oracle and our customers, and we continue to invest in it." According to a recent report by the Linux Foundation, Oracle is one of the top ten contributors to the Linux kernel. Oracle also contributes millions of lines of code to these important projects:
    OpenJDK: 7,002,579
    Eclipse: 1,800,000 (#3 in active committers)
    MySQL: 5,073,113
    NetBeans: 7,870,446
    JSF: 701,980
    Apache MyFaces Trinidad: 1,316,840
    Hudson: 1,209,779
    OpenOffice.org: 7,500,000

  • Apache not Forwarding Client x509 Certificate to Tomcat via mod_proxy

    - by hooknc
    Hi Everyone, I am having difficulty getting a client x509 certificate forwarded to Tomcat from Apache using mod_proxy. From observation and reading a few logs, the client x509 certificate does seem to be accepted by Apache. But when Apache makes an SSL request to Tomcat (which has clientAuth="want"), it doesn't look like the client x509 certificate is passed during the SSL handshake. Is there a reasonable way to see what Apache is doing with the client x509 certificate during its handshake with Tomcat? Here is the environment I'm working with:
    Apache/2.2.3
    Tomcat/6.0.29
    Java/6.0_23
    OpenSSL 0.9.8e
    Here is my Apache VirtualHost SSL config:

        <VirtualHost xxx.xxx.xxx.xxx:443>
            ServerName xxx
            ServerAlias xxx
            SSLEngine On
            SSLProxyEngine on
            ProxyRequests Off
            ProxyPreserveHost On
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel debug
            SSLProtocol all -SSLv2
            SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
            SSLCertificateFile /usr/local/certificates/xxx.crt
            SSLCertificateKeyFile /usr/local/certificates/xxx.key
            SSLCertificateChainFile /usr/local/certificates/xxx.crt
            SSLVerifyClient optional_no_ca
            SSLOptions +ExportCertData
            CustomLog logs/ssl_request_log \
                "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / https://xxx.xxx.xxx.xxx:8443/
            ProxyPassReverse / https://xxx.xxx.xxx.xxx:8443/
        </VirtualHost>

    Then here is my Tomcat SSL connector:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   address="xxx.xxx.xxx.xxx" maxThreads="150"
                   scheme="https" secure="true"
                   keystoreFile="/usr/local/certificates/xxx.jks"
                   keypass="xxx_pwd" clientAuth="want" sslProtocol="TLSv1"
                   proxyName="xxx.xxx.xxx.xxx" proxyPort="443" />

    Could there possibly be issues with SSL renegotiation? Could there be problems with the truststore in our Tomcat instance? (We are using a non-standard truststore that holds partner organizations' CAs.) Is there better logging for what is happening internally with Apache for SSL, like what happens to the client cert, or why it isn't forwarded when Tomcat asks for one? Any reasonable assistance would be greatly appreciated. Thank you for your time.
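
    A note worth adding here, with a hedged sketch: TLS client authentication cannot be proxied, because Apache terminates the client's handshake and does not hold the client's private key, so it has nothing to present when Tomcat asks for a certificate. A commonly used workaround is to export the already-verified certificate into a request header that the backend can read; this assumes mod_headers is loaded, and the header name is a placeholder.

        # Inside the VirtualHost: expose the client cert as an env variable ...
        SSLOptions +ExportCertData +StdEnvVars
        # ... and forward it to the backend as a request header
        RequestHeader set X-Client-Cert "%{SSL_CLIENT_CERT}e"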

  • Hadoop/Pig Cross-join

    - by sagie
    Hi, I am using Pig to cross join two data sets that share the same format. The cross join pairs every tuple from one set with every tuple from the other, so if the result contains a pair of tuples in one order, it also contains the same pair mirrored in the opposite order; for my purposes that mirrored pair is a duplication. Can I filter those duplications out (or avoid joining them at all)? Thanks, Sagie
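
    A minimal Pig Latin sketch of one way to do this, assuming each relation carries some comparable field (called id here, a placeholder) so that one ordering of every pair can be kept and its mirror dropped:

        -- A and B are the two relations, each with schema (id:int, val:chararray)
        crossed = CROSS A, B;
        -- keep only one ordering per pair: (x, y) survives, (y, x) is filtered
        deduped = FILTER crossed BY A::id < B::id;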

  • java.io.FileNotFoundException: /target/test.log

    - by sword101
    Greetings all. I am using Apache Camel and Apache CXF in this example: http://camel.apache.org/better-jms-transport-for-cxf-webservice-using-apache-camel.data/cxfcamelexample.zip I followed the readme, and when I tried to run the client and server classes I got this exception:

        log4j:ERROR setFile(null,true) call failed.
        java.io.FileNotFoundException: /target/test.log (No such file or directory)
            at java.io.FileOutputStream.openAppend(Native Method)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:177)
            at java.io.FileOutputStream.<init>(FileOutputStream.java:102)
            at org.apache.log4j.FileAppender.setFile(FileAppender.java:289)
            at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:163)
            at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:256)
            at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:132)
            at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:96)
            at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:654)
            at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:612)
            at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:509)
            at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:415)
            at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:441)
            at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:470)
            at org.apache.log4j.LogManager.<clinit>(LogManager.java:122)
            at org.apache.log4j.Logger.getLogger(Logger.java:104)
            at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:283)
            at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:108)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at org.apache.commons.logging.impl.LogFactoryImpl.createLogFromClass(LogFactoryImpl.java:1040)
            at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:838)
            at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:601)
            at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:333)
            at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:307)
            at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:645)
            at org.springframework.context.support.AbstractApplicationContext.<init>(AbstractApplicationContext.java:146)
            at org.springframework.context.support.AbstractRefreshableApplicationContext.<init>(AbstractRefreshableApplicationContext.java:84)
            at org.springframework.context.support.AbstractRefreshableConfigApplicationContext.<init>(AbstractRefreshableConfigApplicationContext.java:59)
            at org.springframework.context.support.AbstractXmlApplicationContext.<init>(AbstractXmlApplicationContext.java:58)
            at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:136)
            at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:93)
            at com.example.customerservice.impl.CustomerServiceClient.main(CustomerServiceClient.java:34)

    Any ideas how to solve this exception?
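
    For what it's worth, this error usually just means log4j's FileAppender points at a path that does not exist relative to the directory the JVM was launched from. A hedged sketch of the log4j.properties fix; the appender name "file" is a placeholder for whatever the example's properties file actually uses:

        # point the appender at a path that exists from your working
        # directory, or make the path absolute
        log4j.appender.file=org.apache.log4j.FileAppender
        log4j.appender.file.File=target/test.log

    Running the classes from the project root (so that ./target exists) is often enough on its own.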

  • Connection to webservice times out first time

    - by Neo
    My application needs to connect to a web service. The WSDL file given by the client was converted to Java using the wsdl2java utility in Axis2 1.5.2. The problem occurs during the first connection to the web service, which gives me:

        java.net.SocketTimeoutException: Read timed out
            at jrockit.net.SocketNativeIO.readBytesPinned(Native Method)
            at jrockit.net.SocketNativeIO.socketRead(SocketNativeIO.java:46)
            at java.net.SocketInputStream.socketRead0(SocketInputStream.java)
            at java.net.SocketInputStream.read(SocketInputStream.java:129)
            at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
            at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:789)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:747)
            at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:238)
            at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
            at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
            at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
            at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413)
            at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1974)
            at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
            at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1100)
            at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
            at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
            at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
            at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:346)
            at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:558)
            at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:199)
            at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:77)
            at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:400)
            at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:225)
            at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:438)
            at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:402)
            at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:230)
            at org.apache.axis2.client.OperationClient.execute(OperationClient.java:166)
            at com.jmango.webservice.talker.WCFServiceStub.addSaleSupportRequest(WCFServiceStub.java:270)
            at com.jmango.domain.salessystem.talkerimp.RequestServiceInfoImp.addanewServiceRequest(RequestServiceInfoImp.java:58)
            at com.jmango.mobilenexus.service.MobileServiceImp.sendQueryforServiceInfo(MobileServiceImp.java:358)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
            at org.springframework.remoting.support.RemoteInvocationTraceInterceptor.invoke(RemoteInvocationTraceInterceptor.java:77)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
            at $Proxy8.sendQueryforServiceInfo(Unknown Source)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.caucho.hessian.server.HessianSkeleton.invoke(HessianSkeleton.java:180)
            at com.caucho.hessian.server.HessianSkeleton.invoke(HessianSkeleton.java:110)
            at org.springframework.remoting.caucho.Hessian2SkeletonInvoker.invoke(Hessian2SkeletonInvoker.java:94)
            at org.springframework.remoting.caucho.HessianExporter.invoke(HessianExporter.java:142)
            at org.springframework.remoting.caucho.HessianServiceExporter.handleRequest(HessianServiceExporter.java:70)
            at org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter.handle(HttpRequestHandlerAdapter.java:50)
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875)
            at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807)
            at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)
            at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:512)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:718)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:111)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:291)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:776)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:705)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:899)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690)
            at java.lang.Thread.run(Thread.java:619)

    I tried searching the web for answers; one place mentioned it could be a firewall at the web-service end doing the blocking, but I wasn't able to find a valid solution. Any help will be much appreciated. Running:
    Apache Tomcat 6.0
    Axis2 1.5.2
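
    In case it helps, the trace shows Axis2's HttpClient timing out while waiting for the SSL response, and a cold first call is exactly when server-side warm-up (JIT, connection pools, SSL setup) is slowest. A hedged sketch of raising the client timeouts on the generated stub, using option keys that exist in Axis2 1.5.x; the two-minute value is an arbitrary example:

        import org.apache.axis2.client.Options;
        import org.apache.axis2.transport.http.HTTPConstants;

        WCFServiceStub stub = new WCFServiceStub();
        Options options = stub._getServiceClient().getOptions();
        // both values are in milliseconds
        options.setProperty(HTTPConstants.SO_TIMEOUT, new Integer(120000));
        options.setProperty(HTTPConstants.CONNECTION_TIMEOUT, new Integer(120000));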

  • How To Set Up A Loadbalanced High-Availability Apache Cluster On Windows

    - by bReAd
    Setting up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another single point of failure, we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load-balancer nodes that monitor each other using heartbeat, and if one load balancer fails, the other takes over silently. The following setup is proposed:
    Apache node 1: webserver1.example.com (webserver1) – IP address: 192.168.0.101; Apache document root: /var/www
    Apache node 2: webserver2.example.com (webserver2) – IP address: 192.168.0.102; Apache document root: /var/www
    Load balancer node 1: loadb1.example.com (loadb1) – IP address: 192.168.0.103
    Load balancer node 2: loadb2.example.com (loadb2) – IP address: 192.168.0.104
    Virtual IP address: 192.168.0.105 (used for incoming requests)
    Currently there are many solutions for Linux machines, but I haven't found any for Windows despite searching for a long time. How do I create the virtual IP in Windows, perform the monitoring, and make the load balancer listen on the virtual IP address?

  • apache permission errors

    - by Wilduck
    I'm trying to set up Apache on an Arch Linux box as a testing environment (I'm only using localhost, not trying to serve anything to the greater web). When setting up Django with mod_wsgi, the documentation recommended that I set up a WSGIScriptAlias from / to /usr/local/django/mysite/apache/django.wsgi. I've done this, as well as added the /usr/.../apache directory to my httpd.conf. When I try to access http://localhost I get a 403 Forbidden error. I have no idea why this is happening. Things I've tried so far:
    1) chown -R http .../apache
    2) chmod -R 777 .../apache
    3) using a simple Alias directive to host a static file from that directory.
    None of these have worked. I'm at a loss for what I'm doing wrong. Below is a relevant excerpt from my httpd.conf:

        Alias / /usr/local/django/mysite/apache
        <Directory "/usr/local/django/mysite/apache">
            Order deny,allow
            Allow from all
        </Directory>

    So my question is: what am I doing wrong?
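
    Two hedged things to check here. First, a 403 can come from the directories above the served one: the Apache user needs execute permission on every directory along the path, which is easy to verify. Second, the excerpt uses Alias where the Django/mod_wsgi instructions call for WSGIScriptAlias; a minimal sketch of that wiring, assuming mod_wsgi is loaded:

        # check traversal permissions on each path component
        namei -m /usr/local/django/mysite/apache

        # httpd.conf: hand / to the WSGI script instead of aliasing the directory
        WSGIScriptAlias / /usr/local/django/mysite/apache/django.wsgi
        <Directory "/usr/local/django/mysite/apache">
            Order deny,allow
            Allow from all
        </Directory>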

  • Apache debugging: where to find error logs?

    - by AP257
    I'm new to Apache and web serving generally, so apologies if this is a very stupid question. I want to configure a new sub-domain on a working site and install a forum there. I'm using a Debian server that already has Apache, mod_wsgi and a bunch of virtual hosts successfully running on it. I first installed my forum app (Django's OSQA). Following the OSQA instructions, I then created an Apache config file that specified ServerName as the new sub-domain. I also created a .wsgi file for the app, and pointed WSGIScriptAlias at it. I then restarted Apache. However, when I go to the new sub-domain, I get a 404 error message. Two questions:
    1. Is there a step missing above, or is simply creating a new Apache config file in sites-available enough to 'tell' Apache about a new sub-domain?
    2. If there's something else going wrong, how can I debug it? The ErrorLog and CustomLog specified in the config file are both blank. apache2.conf, which I guess is Apache-wide configuration, specifies ErrorLog /var/log/apache2/error.log, but this is yet another blank file.
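
    One step that is easy to miss on Debian, sketched here: a file in sites-available is inert until it is linked into sites-enabled, which a2ensite does for you (the site name below is a placeholder for your config file's name):

        a2ensite myforum
        /etc/init.d/apache2 reload

    Blank logs are consistent with the new vhost never being consulted at all, which would also explain the 404 coming from whichever existing vhost catches the request.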

  • Hadoop: Iterative MapReduce Performance

    - by S.N
    Is it correct to say that parallel computation with iterative MapReduce can be justified only when the training data is too large for non-parallel computation of the same logic? I am aware that there is overhead in starting MapReduce jobs, which can be critical for overall execution time when a large number of iterations is required. I can imagine that in many cases sequential computation is faster than parallel computation with iterative MapReduce, as long as memory can hold the whole data set. Is that the only benefit of using iterative MapReduce? If not, what other benefits could there be?

  • HBase / Hadoop Query Help

    - by zechariahs
    I'm working on a project with a friend that will utilize HBase to store its data. Are there any good query examples? I seem to be writing a ton of Java code to iterate through lists of RowResults when, in SQL land, I could write a simple query. Am I missing something? Or is HBase missing something?
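
    There is no SQL layer in HBase itself; the native pattern is a Scan with server-side filters, which removes most of the client-side iteration. A hedged sketch against a 0.90-era client API, to be adapted inside a method; table, family and qualifier names are placeholders:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.client.*;
        import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
        import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
        import org.apache.hadoop.hbase.util.Bytes;

        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "users");
        Scan scan = new Scan();
        // filter on the server: only rows whose info:age equals 30 come back
        scan.setFilter(new SingleColumnValueFilter(
                Bytes.toBytes("info"), Bytes.toBytes("age"),
                CompareOp.EQUAL, Bytes.toBytes(30)));
        ResultScanner scanner = table.getScanner(scan);
        for (Result row : scanner) {
            // process row
        }
        scanner.close();

    If you really want SQL-ish queries, Hive and Pig can both sit on top of HBase tables, at the cost of batch latency.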

  • Hadoop 0.2: How to read outputs from TextOutputFormat?

    - by S.N
    My reducer class produces its output with TextOutputFormat (the default OutputFormat given by Job). I would like to consume this output after the MapReduce job completes, to aggregate it. In addition, I would like to write the aggregated information out in a form that TextInputFormat can read, so that the output of this process can be consumed by the next iteration of the MapReduce task. Can anyone give me an example of how to write and read in this text format? By the way, the reason I am using the text formats, rather than SequenceFile, is interoperability: the output should be consumable by any software.
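
    A hedged sketch of the reading side: TextOutputFormat writes plain text, one "key<TAB>value" line per record, into part-* files under the job's output directory, so they can be read back with ordinary stream IO through the FileSystem API (paths are placeholders; imports come from org.apache.hadoop.conf, org.apache.hadoop.fs, and java.io):

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/user/me/job-out"))) {
            if (!status.getPath().getName().startsWith("part-")) continue;
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(fs.open(status.getPath())));
            String line;
            while ((line = reader.readLine()) != null) {
                String[] kv = line.split("\t", 2);  // kv[0] = key, kv[1] = value
                // aggregate here
            }
            reader.close();
        }

    Writing the aggregate back as "key<TAB>value" lines keeps it readable by TextInputFormat for the next iteration (and by KeyValueTextInputFormat in the older mapred API, which splits on the tab for you).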

  • Comparing Nginx+PHP-FPM to Apache-mod_php

    - by Rushi
    I'm running Drupal and trying to figure out the best stack to serve it: Apache + mod_php, or Nginx + PHP-FPM. I used ApacheBench (ab) and Siege to test both setups, and I'm seeing Apache performing better. This surprises me a little, since I've heard a lot of good things about Nginx + PHP-FPM. My current Nginx setup is pretty much out of the box, and the same goes for PHP-FPM. What optimizations can I make to speed up the Nginx + PHP-FPM combo over Apache and mod_php? In my tests using ab, Apache is outperforming Nginx significantly (higher requests/second, finishing tests much faster). I've googled around a bit, but since I've never used Nginx, PHP-FPM or FastCGI, I don't exactly know where to start. PHP v5.2.13, Drupal v6, latest PHP-FPM and Nginx compiled from source, Apache v2.0.63.

    ApacheBench, Nginx + PHP-FPM:

        Server Software:        nginx/0.7.67
        Server Hostname:        test2.com
        Server Port:            80
        Concurrency Level:      25
        Time taken for tests:   158.510008 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Requests per second:    6.31 [#/sec] (mean)
        Time per request:       3962.750 [ms] (mean)
        Time per request:       158.510 [ms] (mean, across all concurrent requests)
        Transfer rate:          181.38 [Kbytes/sec] received

    ApacheBench, Apache using mod_php:

        Server Software:        Apache/2.0.63
        Server Hostname:        test1.com
        Server Port:            80
        Concurrency Level:      25
        Time taken for tests:   63.556663 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Requests per second:    15.73 [#/sec] (mean)
        Time per request:       1588.917 [ms] (mean)
        Time per request:       63.557 [ms] (mean, across all concurrent requests)
        Transfer rate:          103.94 [Kbytes/sec] received
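
    At roughly 6 requests/second, the bottleneck is almost certainly PHP execution rather than the web server, so the first things to check are how nginx hands requests to PHP-FPM and how many FPM workers exist. A hedged sketch of the nginx side; socket path and document root are placeholders, and note that with PHP 5.2 the FPM worker count lives in the older XML-style php-fpm.conf (max_children) rather than the later pm.max_children form:

        # nginx: hand .php requests to PHP-FPM over a Unix socket
        location ~ \.php$ {
            root             /var/www/drupal;
            fastcgi_pass     unix:/var/run/php-fpm.sock;
            fastcgi_index    index.php;
            fastcgi_param    SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include          fastcgi_params;
        }

    If only a handful of FPM workers are configured, ab's 25 concurrent connections will queue behind them, which matches the symptom of the same hardware looking several times slower.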

  • Splitting input into substrings in PIG (Hadoop)

    - by Niels Basjes
    Assume I have the following input in Pig: some. And I would like to convert that into: s, so, som, some. I've not (yet) found a way to iterate over a chararray in Pig Latin. I have found the TOKENIZE function, but that splits on word boundaries. So can Pig Latin do this, or is this something that requires a Java class?
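
    As far as I know this needs a UDF; a hedged sketch of a minimal Java EvalFunc that emits every prefix of its input as a bag (the class name is hypothetical):

        import java.io.IOException;
        import org.apache.pig.EvalFunc;
        import org.apache.pig.data.*;

        public class Prefixes extends EvalFunc<DataBag> {
            @Override
            public DataBag exec(Tuple input) throws IOException {
                if (input == null || input.size() == 0) return null;
                String s = (String) input.get(0);
                DataBag bag = BagFactory.getInstance().newDefaultBag();
                // "some" -> (s), (so), (som), (some)
                for (int i = 1; i <= s.length(); i++) {
                    Tuple t = TupleFactory.getInstance().newTuple(1);
                    t.set(0, s.substring(0, i));
                    bag.add(t);
                }
                return bag;
            }
        }

    Used from Pig roughly as: REGISTER myudfs.jar; B = FOREACH A GENERATE FLATTEN(Prefixes(word));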

  • hadoop mapper static initialisation

    - by rakeshr
    Hi, I have a code fragment in which I am using a static code block to initialize a variable:

        public static class JoinMap extends
                Mapper<IntWritable, MbrWritable, LongWritable, IntWritable> {
            .......
            public static RTree rt = null;

            static {
                String rtreeFileName = "R.rtree";
                rt = new RTree(rtreeFileName);
            }

            public void map(IntWritable key, MbrWritable mbr, Context context)
                    throws IOException, InterruptedException {
                .........
                List elements = rt.overlaps(mbr.getRect());
                .......
            }
        }

    My problem is that the variable rt in the above code fragment is not getting initialised. Can anybody suggest a fix or an alternate way to initialise the variable. I don't want to initialise it inside my map function since that slows down the entire process.
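
    A hedged alternative that avoids per-record cost: with the new-API Mapper shown here, setup() runs exactly once per map task before any map() call, so the tree can be built there instead of in a static block (where a failure surfaces only as an opaque ExceptionInInitializerError). Note the file also has to exist on the task node, e.g. shipped via the DistributedCache.

        @Override
        protected void setup(Context context)
                throws IOException, InterruptedException {
            if (rt == null) {
                rt = new RTree("R.rtree");  // runs once per task, not per record
            }
        }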

  • Global variables in hadoop.

    - by Deepak Konidena
    Hi, my program follows an iterative map/reduce approach, and it needs to stop if certain conditions are met. Is there any way I can set a global variable that can be distributed across all map/reduce tasks, and check whether the global variable has reached the condition for completion? Something like this:

        while (condition != true) {
            Configuration conf = getConf();
            Job job = new Job(conf, "Dijkstra Graph Search");

            job.setJarByClass(GraphSearch.class);
            job.setMapperClass(DijkstraMap.class);
            job.setReducerClass(DijkstraReduce.class);
            job.setOutputKeyClass(IntWritable.class);
            job.setOutputValueClass(Text.class);
        }

    where condition is a global variable that is modified during/after each map/reduce execution.
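
    Hadoop has no mutable shared variables between tasks, but counters can act as a one-way global signal: tasks increment them, and the driver reads the totals after each job finishes. A hedged sketch (the enum name is hypothetical):

        enum SearchCounter { TARGET_FOUND }

        // inside a map or reduce task, when the stop condition is observed:
        context.getCounter(SearchCounter.TARGET_FOUND).increment(1);

        // in the driver loop, after each iteration:
        job.waitForCompletion(true);
        long found = job.getCounters()
                        .findCounter(SearchCounter.TARGET_FOUND).getValue();
        boolean condition = (found > 0);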

  • Need help implementing this algorithm with MapReduce (Hadoop)

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in their lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm searching for someone to implement it for me; but it is true, I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it this way. I was reading about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to convert it to MapReduce. That didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is the pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is a folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has difficulty writing to many outputs??? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please please help.
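
    A hedged sketch of how the map step could look, to make the shape concrete: each mapper sees lines of one input file, checks the third word against the term list (loaded once in setup(), e.g. from the DistributedCache), and emits matches keyed by source file, so one reduce group corresponds to one analyzed file. All class and field names here are hypothetical.

        import java.io.IOException;
        import java.util.*;
        import org.apache.hadoop.io.*;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.lib.input.FileSplit;

        public class TermMatchMapper
                extends Mapper<LongWritable, Text, Text, Text> {
            private final Set<String> terms = new HashSet<String>();

            @Override
            protected void setup(Context context) throws IOException {
                // load the Lucene-derived term list once per task
            }

            @Override
            public void map(LongWritable key, Text line, Context context)
                    throws IOException, InterruptedException {
                String[] words = line.toString().split("\\s+");
                if (words.length >= 3 && terms.contains(words[2])) {
                    String file = ((FileSplit) context.getInputSplit())
                                      .getPath().getName();
                    context.write(new Text(file), line);  // key = source file
                }
            }
        }

    For the many-outputs worry: writing one physical file per input file is what MultipleOutputs is for (or simply key by file name, as above, and split the result afterwards), so it is awkward but not impossible.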

  • Hadoop: Mapping binary files

    - by restrictedinfinity
    Typically the input file is capable of being partially read and processed by the Mapper (as with text files). Is there anything that can be done to handle binaries (say images or serialized objects), which would require all the blocks to be on the same host before the processing can start?
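
    One hedged correction plus a sketch: the blocks do not actually need to live on the same host, since a reader on any node can stream non-local blocks over the network (at the cost of locality). What you do need is for one map task to see the whole file, and the usual way is a non-splittable input format:

        import java.io.IOException;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.BytesWritable;
        import org.apache.hadoop.io.NullWritable;
        import org.apache.hadoop.mapreduce.*;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

        public class WholeFileInputFormat
                extends FileInputFormat<NullWritable, BytesWritable> {
            @Override
            protected boolean isSplitable(JobContext context, Path file) {
                return false;  // one split, and thus one mapper, per file
            }

            @Override
            public RecordReader<NullWritable, BytesWritable> createRecordReader(
                    InputSplit split, TaskAttemptContext context)
                    throws IOException, InterruptedException {
                // WholeFileRecordReader is a hypothetical reader that loads the
                // entire file into a single BytesWritable record
                return new WholeFileRecordReader();
            }
        }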

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter)

    Whereas relational databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we've made great strides with indexing text, processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data feed that provides aggregations to an RDBMS. However, document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples.

    NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn't impose a schema, and relies on the application to enforce the data structure. This is another take on the old 'EAV' problem (where you don't know in advance all the attributes of a particular entity). It uses a clever replica-set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying.

    However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC driver for Apache Hive and an add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server.

    For one steeped in the culture of relational SQL databases, I might be expected to throw up my hands in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter's assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). On the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. Having the flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of relational databases and inspire them into back-fitting some features, such as horizontal scaling with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

  • I need help debugging apache with gdb (no debugging symbols found)

    - by blockhead
    I'm trying to debug a segfault in PHP/Apache running on RHEL4. This is a production server, so I'm trying to install a separate copy of Apache and PHP and run that Apache through gdb. When I load httpd through gdb and then do run -X, I get the error: (no debugging symbols found)...Error while reading shared library symbols: Dwarf Error: Cannot handle DW_FORM_strp in DWARF reader. The Apache manual says something about setting EXTRA_CFLAGS to "-g", and I've tried configuring with --enable-maintainer-mode, but I don't seem to be getting far.
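
    A hedged sketch of a debug build that keeps symbols, on the assumption that the separate copy can be rebuilt from source; the prefix is a placeholder. (The Dwarf error itself is usually a sign that the installed gdb is too old for the DWARF data the compiler emitted, so upgrading gdb on RHEL4, or emitting an older debug format with -gstabs+, is worth trying too.)

        CFLAGS="-g -O0" ./configure --prefix=/usr/local/apache-debug
        make && make install
        gdb /usr/local/apache-debug/bin/httpd
        (gdb) run -X -f /usr/local/apache-debug/conf/httpd.conf

    PHP would need the same treatment (its configure also honours CFLAGS) for the backtrace inside the PHP module to be readable.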

  • Installed Apache. Bash: 'service httpd status' does nothing

    - by Josh
    I just installed Apache 2 on CentOS 5 from source (httpd-2.2.15.tar.gz) using:

        ./configure --prefix=/usr/local/apache
        make
        make install
        /usr/local/apache/bin/apachectl start

    I have verified that httpd is running in ps, and verified it is serving the default htdocs page. However, Apache is not found in 'service --status-all' and is not found in /etc/init.d, so I cannot run 'service httpd status', '/etc/init.d/httpd start', and other such commands. Any ideas what I am missing?
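
    This is expected: a source build installs no SysV init script, which is what the service command looks for in /etc/init.d. A hedged sketch of a thin wrapper script (the runlevels and priorities in the chkconfig header are placeholders):

        cat > /etc/init.d/httpd <<'EOF'
        #!/bin/sh
        # chkconfig: 345 85 15
        # description: Apache httpd built from source
        exec /usr/local/apache/bin/apachectl "$@"
        EOF
        chmod +x /etc/init.d/httpd
        chkconfig --add httpd

    After that, service httpd start/stop/restart should work ('status' depends on what apachectl supports on this build).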

  • Apache LDAP authentication (mod_auth_ldap) on MacOS Server (10.5)

    - by Ursid
    A - Is there an LDAP authentication module (mod_auth_ldap) for the version of Apache that comes built into MacOS Server 10.5? (I'm pretty sure not, but maybe someone has compiled one.)
    B - If not, can it be compiled into MacOS' version of Apache? (Man, that would be nice.)
    C - If I can't use the Apple version of Apache for this, what is the best way to get Apache LDAP authentication working on MacOS Server 10.5? (Preferably one that works with MacOS Server's management software.)
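
    For reference, a hedged sketch of what the configuration looks like once an LDAP-capable module is available; Apache 2.2 (which 10.5 ships) renamed mod_auth_ldap to mod_authnz_ldap, and the LDAP URL below is a placeholder:

        <Location /protected>
            AuthType Basic
            AuthName "Restricted"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=People,dc=example,dc=com?uid"
            Require valid-user
        </Location>

    This assumes mod_ldap and mod_authnz_ldap are loaded; whether Apple's bundled build includes them is exactly question A above, so check httpd.conf's LoadModule lines first.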

  • browser blocking image download when nginx was placed in front of apache to serve static content

    - by railscoder
    I was trying to place nginx in front of Apache to serve static content. That setup was performing better than Apache alone, but suddenly some change caused images to be blocked for around 2-3 seconds before actually downloading with the Apache + nginx setup. It doesn't happen with the Apache-only setup. Any idea why it is happening with nginx? This was happening even after I removed all external JS from the page.

  • Changing User/Group to allow PHP to chmod/rename and move_upload_file()

    - by moe
    Hi there. It seems like I cannot do anything with my PHP script on my VPS: it returns 'Permission denied' when I try to upload something to a directory. Yes, I have changed the permissions to 777, and that works, but I do not like the insecurity. Running the command:

        ps axu | grep apache | grep -v grep

    returns:

        nobody  7689  0.1  3.8  50604 20036 ?  S   21:38  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root   13600  0.0  3.8  50304 20348 ?  Ss  Jun06  0:46 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 15733  0.1  3.8  50700 20156 ?  S   21:39  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 15818  0.1  3.8  51492 20180 ?  S   21:39  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 23843  0.1  3.7  51336 19592 ?  S   21:40  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30335  0.0  3.5  50436 18496 ?  S   21:41  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30406  0.0  3.5  50444 18544 ?  S   21:41  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30407  0.0  3.5  50556 18696 ?  S   21:41  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30472  0.0  3.6  50828 19348 ?  S   21:41  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30474  0.0  3.5  50668 18868 ?  S   21:41  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30476  0.0  3.6  50532 19064 ?  S   21:41  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 30501  0.0  3.8  50556 20080 ?  S   21:36  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32341  0.0  3.5  50444 18492 ?  S   21:41  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32370  0.0  3.5  50444 18476 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32414  0.1  3.7  51336 19524 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32416  0.1  3.5  50668 18816 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32457  0.1  3.6  50828 19320 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32458  0.1  3.6  50772 19276 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32459  0.0  3.5  50444 18504 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32460  0.2  3.6  50828 19320 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32463  0.0  3.5  50444 18472 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody 32466  0.0  3.4  50436 17960 ?  S   21:42  0:00 /usr/local/apache/bin/httpd -k start -DSSL

    The owner of the directory is 'user [505]' and the group is 'user [508]' (as seen in WinSCP). What can I do to change the Apache handler to the right owner and group to allow my PHP scripts to work? P.S. My PHP is not set to safe mode, and open_basedir is set to no value.
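
    The ps output shows the workers running as nobody while the files belong to user:user, which is exactly why writes fail without 777. Two hedged options, with placeholder paths (a source-built Apache reads User/Group from its httpd.conf):

        # Option 1: give the Apache user ownership of just the writable directory
        chown -R nobody /home/user/public_html/uploads
        chmod 755 /home/user/public_html/uploads

        # Option 2: in /usr/local/apache/conf/httpd.conf, run the workers as the
        # site user instead (then restart Apache)
        User  user
        Group user

    Option 1 is the smaller change; option 2 affects every vhost served by this Apache, so weigh that if the VPS hosts more than one site.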

  • need to delete files owned by apache - unsure how to do that

    - by Brad
    I'm running Apache on an RHEL server, and sometimes I need to delete files that I cannot remove with my FTP program, because the FTP account I log in with is not the apache user. I am on a Mac, so there must be a way to accomplish this from the terminal by SSHing into the server. What credentials would I need to SSH into the server and delete the files/folders owned by apache? This screenshot shows what I mean when a file is owned by the apache user/group: http://cl.ly/e2192e6aadc8e4688c33 Any help is appreciated.
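
    A hedged sketch, assuming you have (or can get) a shell account with sudo rights on the server; host, user, and path are placeholders:

        ssh youruser@yourserver.example.com
        # remove apache-owned files with elevated rights
        sudo rm -rf /var/www/html/uploads/stale-files
        # or act as the apache user itself, which is slightly safer
        sudo -u apache rm -rf /var/www/html/uploads/stale-files

    Without sudo (or the root password for su), there is no way around the ownership check, which is the same wall the FTP account is hitting.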
