Search Results

Search found 10610 results on 425 pages for 'apache hadoop'.

Page 255/425

  • Tomcat startup logs - SEVERE: Error filterStart how to get a stack trace?

    - by Benju
    When I start Tomcat I get the following error:

        Jun 10, 2010 5:17:25 PM org.apache.catalina.core.StandardContext start
        SEVERE: Error filterStart
        Jun 10, 2010 5:17:25 PM org.apache.catalina.core.StandardContext start
        SEVERE: Context [/mywebapplication] startup failed due to previous errors

    It seems odd that the logs for Tomcat would not include a stack trace. Does somebody have a suggestion for how to increase the logging in Tomcat to get stack traces for errors like this?
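
    One commonly suggested approach, assuming the default JULI logging setup, is to give the web application its own logging.properties under WEB-INF/classes so the filter-initialization failure is logged with full detail. A minimal sketch (the file prefix is a placeholder):

        # WEB-INF/classes/logging.properties
        # Route this web app's log records, including the filterStart
        # stack trace, to logs/mywebapplication.<date>.log at FINE level.
        handlers = org.apache.juli.FileHandler

        org.apache.juli.FileHandler.level = FINE
        org.apache.juli.FileHandler.directory = ${catalina.base}/logs
        org.apache.juli.FileHandler.prefix = mywebapplication.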

    Read the article

  • Unable to start Tomcat 6.x when Mac OS X 10.6 is boot up

    - by SkyEagle888
    I am using Mac OS X 10.6 and have installed MAMP and Tomcat. Tomcat is installed in /Users/(userID)/Tomcat. I can start the Tomcat server in Terminal without a problem, but it does not start automatically when Mac OS X boots up. I put a file org.apache.tomcat.plist in /Library/LaunchDaemons containing the keys Disabled, Label (org.apache.tomcat), ProgramArguments (/Users/henryfok/Tomcat/bin/startup.sh) and RunAtLoad. Any hint?
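
    A likely culprit is that startup.sh forks a background process and exits, so launchd thinks the job has died; launchd wants to supervise a foreground process. A sketch of a plist that runs Tomcat in the foreground instead, assuming JAVA_HOME and CATALINA_HOME resolve in launchd's environment:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>org.apache.tomcat</string>
            <key>ProgramArguments</key>
            <array>
                <!-- "catalina.sh run" keeps Tomcat in the foreground, which
                     launchd can supervise; the self-backgrounding startup.sh
                     cannot be supervised this way -->
                <string>/Users/henryfok/Tomcat/bin/catalina.sh</string>
                <string>run</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>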

    Read the article

  • Tomcat does not persist UserPrincipal during restart

    - by mabuzer
    How do I force Tomcat to serialize the UserPrincipal so that the user stays logged in after Tomcat restarts? Right now the user has to log in again every time. I added the following lines to the web app's context.xml: <Manager className="org.apache.catalina.session.PersistentManager"> <Store className="org.apache.catalina.session.FileStore"/> </Manager> but I still see the login page after a Tomcat restart. I use Tomcat 6.0.26.

    Read the article

  • Tomcat does not persist Session during restart

    - by mabuzer
    How do I force Tomcat to serialize the Session so that the user stays logged in after Tomcat restarts? Right now the user has to log in again every time. I added the following lines to the web app's context.xml: <Manager className="org.apache.catalina.session.PersistentManager"> <Store className="org.apache.catalina.session.FileStore"/> </Manager> but I still see the login page after a Tomcat restart. I use Tomcat 6.0.24.
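
    One thing worth checking, independent of the Manager configuration: everything stored in the session has to implement java.io.Serializable, or it may not survive passivation to the FileStore. A minimal sketch (the class name is hypothetical):

        import java.io.Serializable;

        // Session attributes must be serializable for PersistentManager's
        // FileStore to write them out and read them back after a restart.
        public class LoginInfo implements Serializable {
            private static final long serialVersionUID = 1L;
            private final String username;

            public LoginInfo(String username) {
                this.username = username;
            }

            public String getUsername() {
                return username;
            }
        }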

    Read the article

  • SharePoint List Service Recursive not working

    - by stranger001
    Hi, I am using the following code to retrieve the documents in a list. It's working fine; however, it only returns documents and folders in the root of the document library. Is there anything wrong I am doing here? I am looking for files in subfolders with recursive mode.

        Service service = new Service();
        service.setMaintainSession(true);
        call = (Call) service.createCall();
        call.setTargetEndpointAddress(new java.net.URL("<host>/_vti_bin/lists.asmx"));
        call.setOperationName(new QName("http://schemas.microsoft.com/sharepoint/soap/", "GetListItems"));
        call.setProperty(Call.SOAPACTION_USE_PROPERTY, new Boolean("true"));
        call.setProperty(Call.SOAPACTION_URI_PROPERTY, "http://schemas.microsoft.com/sharepoint/soap/GetListItems");
        call.addParameter(new javax.xml.namespace.QName("http://schemas.microsoft.com/sharepoint/soap/", "listName"),
                new javax.xml.namespace.QName("http://www.w3.org/2001/XMLSchema", "string"),
                java.lang.String.class, javax.xml.rpc.ParameterMode.IN);

        MessageElement me = new MessageElement(new QName("QueryOptions"));
        me.addChildElement(new MessageElement(new QName("IncludeMandatoryColumns"))).addTextNode("true");
        me.addChildElement(new MessageElement(new QName("ViewAttributes")))
                .addAttribute(javax.xml.soap.SOAPFactory.newInstance().createName("Scope"), "Recursive");
        MessageElement[] me1 = { me };

        String strMyString = ""
                + "<Query>"
                + "<OrderBy><FieldRef Name=\"ows_Modified\" Ascending=\"TRUE\" /></OrderBy>"
                + "</Query>";
        MessageElement[] meArray = { getMeFromString(strMyString) };

        call.addParameter("query", org.apache.axis.Constants.XSD_SCHEMA, javax.xml.rpc.ParameterMode.IN);
        call.addParameter("queryOptions", org.apache.axis.Constants.XSD_SCHEMA, javax.xml.rpc.ParameterMode.IN);
        call.setReturnType(org.apache.axis.Constants.XSD_SCHEMA);
        Schema ret = (Schema) call.invoke(new Object[] { "listGUID", meArray, me1 });

        public org.apache.axis.message.MessageElement getMeFromString(final String strMyString) {
            DocumentBuilder docBuilder = null;
            try {
                docBuilder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            } catch (final ParserConfigurationException e) {
                e.printStackTrace();
            } catch (final FactoryConfigurationError e) {
                e.printStackTrace();
            }
            final StringReader reader = new StringReader(strMyString);
            final InputSource inputsource = new InputSource(reader);
            Document doc = null;
            try {
                doc = docBuilder.parse(inputsource);
            } catch (final SAXException e) {
                e.printStackTrace();
            } catch (final IOException e) {
                e.printStackTrace();
            }
            final Element ele = doc.getDocumentElement();
            final MessageElement msg = new MessageElement(ele);
            return msg;
        }
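
    For reference, the XML that the QueryOptions element above is meant to produce looks like this; in the GetListItems contract, Scope="Recursive" returns files from all folders, while Scope="RecursiveAll" returns both files and folders:

        <QueryOptions>
            <IncludeMandatoryColumns>true</IncludeMandatoryColumns>
            <ViewAttributes Scope="Recursive" />
        </QueryOptions>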

    Read the article

  • MySQL connection timeout

    - by NikolayGS
    I'm running a program on an Apache Tomcat server that should be on permanently, but every morning (the client part isn't accessible at night) I receive an error message in the Apache Tomcat console saying that the MySQL server is off. So is there any way to prevent this? Thanks in advance!
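
    The symptom is consistent with MySQL's wait_timeout (8 hours by default) closing idle connections overnight. If the app gets its connections from a Tomcat JNDI DataSource, one common remedy is to have the pool validate connections before handing them out. A sketch, with the JNDI name, URL and credentials as placeholders:

        <!-- META-INF/context.xml -->
        <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
                  driverClassName="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://localhost:3306/mydb"
                  username="appuser" password="secret"
                  maxActive="20" maxIdle="5"
                  validationQuery="SELECT 1"
                  testOnBorrow="true" />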

    Read the article

  • Does grails support logging from the src/java classes?

    - by rainyday
    I have a Grails app (v1.1.2). Logging works fine from the Groovy classes, but I can't get it to work from within a Java source file. I have a class in package com.mforms.devices; it imports Apache Log4j and defines the logger as follows: private final org.apache.log4j.Logger loggy = Logger.getLogger(this.getClass()); and refers to it later by doing loggy.error("..."). My Config.groovy has the following:

        log4j = {
            error 'com.mforms'
            root {
                error 'stdout', 'file'
                additivity = true
            }
        }

    What am I doing wrong?!?!
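
    As a sanity check, it may help to rule out the this.getClass() field initializer by declaring a conventional static logger keyed on the class itself, which puts it squarely under the 'com.mforms' category configured in Config.groovy. A minimal sketch (the class name is hypothetical):

        package com.mforms.devices;

        import org.apache.log4j.Logger;

        // Hypothetical class; the static logger is named after the class,
        // so it inherits the 'com.mforms' level from Config.groovy.
        public class DeviceReader {
            private static final Logger LOG = Logger.getLogger(DeviceReader.class);

            public void read() {
                LOG.error("visible if 'com.mforms' is enabled at error level");
            }
        }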

    Read the article

  • Django - problem with {% url facebook_xd_receiver %}

    - by Gaurav
    I'm using {% url facebook_xd_receiver %} in one of my HTML files. This works just fine when I run my project using the command python manage.py runserver, but the same project stops running and gives me a TemplateSyntaxError at the line {% url facebook_xd_receiver %} when served through Apache. Can anyone tell me what the difference could be between the dev server run from the command line and the Apache server? Is there anything I'm missing when configuring the Apache server? Or is it a Django problem?

    Read the article

  • Computer Science or Computer Engineering for Data Science and Machine Learning

    - by ATMathew
    I'm a 25-year-old data consultant who is considering returning to school to get a second bachelor's degree in computer science or engineering. My interest is data science and machine learning. I use programming as a means to an end, and use languages like Python, R, C, Java, and Hadoop to find meaning in large data sets. Would a computer science or computer engineering degree be better for this? I realize that a statistics degree may be even more beneficial, but I'll be at a school which doesn't have a stats department or a computational math department.

    Read the article

  • Live from ODTUG - Big Data and SQL session #2

    - by Jean-Pierre Dijcks
    Sitting in Dominic Delmolino's session at ODTUG (KScope 12). If the session count at conferences is any indication, we will see more and more people start to deploy MapReduce in the database. And yes, that would be with SQL and PL/SQL first and foremost. Both Dominic and our own Bryn Llewellyn are giving MapReduce-in-the-database presentations. Since I have seen both, I would advise people to first look through Dominic's session to get a good grasp on what mappers and reducers do, then dive into Bryn's for a bunch of PL/SQL examples. The thing I like about Dominic's is the last slide (a recursive WITH statement) that does this in SQL... Now I am hoping that next year we will see tools vendors show off how they work with Hadoop and MapReduce (at least talking about the concepts!!).

    Read the article

  • How can I unit test a class which requires a web service call?

    - by Chris Cooper
    I'm trying to test a class which calls some Hadoop web services. The code is pretty much of the form:

        method() {
            ...use Jersey client to create WebResource...
            ...make request...
            ...do something with response...
        }

    e.g. there is a create directory method, a create folder method, etc. Given that the code is dealing with an external web service that I don't have control over, how can I unit test this? I could try to mock the web service client/responses, but that breaks the guideline I've seen a lot recently: "Don't mock objects you don't own". I could set up a dummy web service implementation; would that still constitute a "unit test", or would it then be an integration test? Is it just not possible to unit test at this low a level? How would a TDD practitioner go about this?
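
    One answer in the TDD spirit is to put a thin interface (a seam) between the class under test and the Jersey client, and hand the unit test a hand-rolled fake; only the thin adapter that actually talks to Hadoop then needs an integration test. A minimal sketch, with all names hypothetical:

        import java.util.ArrayList;
        import java.util.List;

        // The seam: the class under test depends on this interface rather
        // than building WebResources itself.
        interface DirectoryService {
            void createDirectory(String path);
        }

        // Hand-rolled fake for unit tests: records calls, touches no network.
        class RecordingDirectoryService implements DirectoryService {
            final List<String> created = new ArrayList<String>();

            @Override
            public void createDirectory(String path) {
                created.add(path);
            }
        }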

    Read the article

  • Big Data Appliance

    - by David Dorf
    Today Oracle announced the next release of its Big Data Appliance, an engineered system composed of hardware and software targeting the efficient processing of big data. The solution leverages 288 Intel cores running Cloudera's distribution of Apache Hadoop in 1.1 TB of main memory. This monster helps companies acquire, organize, and analyze large volumes of structured and unstructured data. Additionally, new versions of the Oracle Big Data Connectors and Oracle NoSQL Database were released. Why is this important to retailers? As the infographic below conveys, mobile and social have added even more data to the already huge collections of POS transactions and e-commerce weblogs. Retailers know that mining that data will help them make better decisions that lead to increased sales, better customer service, and ultimately a successful retail business. [Infographic: Monetate]

    Read the article

  • Manual bootstrapping with Juju on VPS

    - by aaronfc
    I recently bought a VPS (OpenVZ) with Ubuntu 13.04. Lately I've read about Juju and I thought it would match my needs on my brand-new VPS, as I want to easily configure Hadoop, Wordpress, Graphite and other services. I tried the juju-local approach but got an error while installing it: http://paste.ubuntu.com/6373143/ So I went to the #juju IRC channel and was told to use "manual bootstrapping" and create a question here to get more info, as it seems to be something under development. Could anybody help me get Juju working on my VPS? Has anyone had the same problem? Thanks in advance! EDIT: After some more research (and after reading my own question) I think I should clarify the situation a little more. What I want is a manual provider that allows me to manage my VPS from Juju. I know there might be work in progress, and I would appreciate any info regarding its state and any possible solution :)
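
    For what it's worth, the manual provider was configured at the time through a stanza in ~/.juju/environments.yaml roughly along these lines; the host name and user are placeholders, and some early 1.x releases spelled the provider type "null" rather than "manual":

        environments:
          myvps:
            type: manual            # "null" in some early 1.x releases
            bootstrap-host: my-vps.example.com
            bootstrap-user: ubuntu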

    Read the article

  • Microsoft: "We are going to renew our entire server range", an interview about SQL Server 2012

    Microsoft has big ambitions for SQL Server 2012 and its Server and Platform Division. Very aggressive TCO, simplified licensing, Big Data, BI, private clouds, Hadoop integration, bringing Azure to the mainstream: there was no shortage of topics to cover. An interview with Jérôme Trédan, Director of Server Products and Cloud Computing Platforms, Microsoft France. Developpez.com: Can you put Microsoft's Server and Platform Division in context with a few figures? Jérôme Trédan: This branch of Microsoft covers three major product families: technologies for building private clouds ...

    Read the article

  • Big Data Learning Resources

    - by Lara Rubbelke
    I have recently had several requests from people asking for resources to learn about Big Data and Hadoop. Below is a list of resources that I typically recommend, and I'll update it as I find more. Let's crowdsource this: tell me your favorite resources and I'll get them on the list! Books and Whitepapers: Planning for Big Data (free e-book) is a great primer on the general Big Data space, and is always my recommendation for people who are new to Big Data and are trying to understand it. ...

    Read the article

  • Winner of the 2012 Government Big Data Solutions Award

    - by Jean-Pierre Dijcks
    Hot off the press: the winner of the 2012 Government Big Data Solutions Award is the National Cancer Institute!! Read all the details on CTOLabs.com. A short excerpt to whet your appetite: "... This solution, based on the Oracle Big Data Appliance with the Cloudera Distribution of Apache Hadoop (CDH), leverages capabilities available from the Big Data community today in pioneering ways that can serve a broad range of researchers. The promising approach of this solution is repeatable across many other Big Data challenges for bioinformatics, making this approach worthy of its selection as the 2012 Government Big Data Solution Award." Read the entire post. Congrats to the entire team!!

    Read the article

  • Oracle R Enterprise 1.1 Download Available

    - by Sherry LaMonica
    Oracle just released the latest update to Oracle R Enterprise, version 1.1. This release includes the Oracle R Distribution (based on open source R, version 2.13.2), an improved server installation, and much more. The key new features include:

      - Extended Server Support: new support for Windows 32- and 64-bit server components, as well as continuing support for Linux 64-bit server components
      - Improved Installation: Linux 64-bit server installation now provides robust status updates and prerequisite checks
      - Performance Improvements: improved performance for embedded R script execution calculations

    In addition, the updated ROracle package, which is used with Oracle R Enterprise, now reads date data by conversion to character strings. We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for R-related software: Oracle R Distribution, Oracle R Enterprise, ROracle, Oracle R Connector for Hadoop. As always, we welcome comments and questions on the Oracle R Forum.

    Read the article

  • Converting large files in python

    - by Cenoc
    I have a few files that are ~64GB in size that I think I would like to convert to HDF5 format. I was wondering what the best approach for doing so would be? Reading line by line seems to take more than 4 hours, so I was thinking of using multiprocessing in sequence, but was hoping for some direction on what would be the most efficient way without resorting to Hadoop. Any help would be very much appreciated (and thank you in advance). EDIT: Right now I'm just doing a for line in fd: approach. After that I just check to make sure I'm picking out the right sort of data, which is very quick; I'm not writing anywhere, and it's taking around 4 hours to complete. I can't read fixed blocks of data because the blocks in this weird file format I'm reading are not standard; it switches between three different sizes, and you can only tell which by reading the first few characters of the block.

    Read the article

  • Java Certification Programs in India

    - by user2948246
    I am a current Master's student in the USA, majoring in Computer Science. I did not get a summer internship and thus wanted to update my resume, so I decided to come to India to do some courses. I want to know whether a Java certification program from NIIT or APTECH holds any weight in the US when I apply for a full-time position. I also plan on taking a course on Hadoop and learning a couple of languages on my own.

    Read the article
