Search Results

Search found 1914 results on 77 pages for 'mongrel cluster'.


  • Using Hadoop, are my reducers guaranteed to get all the records with the same key?

    - by samg
    I'm running a Hadoop job (using Hive, actually) that is supposed to deduplicate lines in a lot of text files; more specifically, it chooses the most recently timestamped record for each key in the reduce step. Does Hadoop guarantee that every record with the same key, as output by the map step, will go to a single reducer, even when many reducers are running across a cluster? I'm worried that the mapper output might be split after the shuffle happens, in the middle of a set of records with the same key.
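
    For reference, the shuffle partitions map output by key, so every record with a given key lands on exactly one reducer. The default HashPartitioner amounts to the following (a sketch of its logic, not the poster's job code):

        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Partitioner;

        // All records whose keys are equal produce the same hash, so they are
        // assigned the same partition and therefore the same reducer.
        public class SketchPartitioner extends Partitioner<Text, Text> {
            @Override
            public int getPartition(Text key, Text value, int numPartitions) {
                // Mask the sign bit, then take the remainder over the reducer count.
                return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
            }
        }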

    Read the article

  • JBoss logging issue - please check this

    - by balaji
    I work as a deployer and server administrator. We use JBoss 4.0.x AS to deploy our applications. The issue I'm facing: whenever we redeploy or restart the server, server.log is created, but after some time the logging stops and the server.log file is no longer updated at all. Because of this we cannot trace the other critical issues we have. We have two separate nodes, we deploy/restart the server separately on each node, and we see the issue in both our test and production environments. I could not trace where exactly the issue is. Could you please help me resolve it? When we have other issues we check the log files; if the log itself is not being updated, how can we go further in analyzing those issues without recent logs? Below are the entries found in stdout.log:

        18:55:50,303 INFO [Server] Core system initialized
        18:55:52,296 INFO [WebService] Using RMI server codebase: http://kl121tez.is.klmcorp.net:8083/
        18:55:52,313 INFO [Log4jService$URLWatchTimerTask] Configuring from URL: resource:log4j.xml
        18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFTraceLogger without a class name.
        18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFMessageLogger without a class name.
        18:55:54,273 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheTraceLogger without a class name.
        18:55:54,274 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheMessageLogger without a class name.
        18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCTraceLogger without a class name.
        18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCMessageLogger without a class name.
        18:55:56,059 INFO [ServiceEndpointManager] WebServices: jbossws-1.0.3.SP1 (date=200609291417)
        18:55:56,635 INFO [Embedded] Catalina naming disabled
        18:55:56,671 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
        18:55:56,672 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
        18:55:56,843 INFO [Http11BaseProtocol] Initializing Coyote HTTP/1.1 on http-0.0.0.0-8180
        18:55:56,844 INFO [Catalina] Initialization processed in 172 ms
        18:55:56,844 INFO [StandardService] Starting service jboss.web

    Please help.
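
    One thing worth checking in a case like this is the server.log appender definition in conf/log4j.xml, since Log4jService re-reads that file and a misconfigured appender fails silently. For reference, the stock JBoss 4.0.x file appender looks roughly like this (a sketch from a default install, not the poster's actual config):

        <appender name="FILE" class="org.jboss.logging.appender.DailyRollingFileAppender">
            <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
            <param name="File" value="${jboss.server.home.dir}/log/server.log"/>
            <param name="Append" value="false"/>
            <param name="DatePattern" value="'.'yyyy-MM-dd"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
            </layout>
        </appender>

    The LOG0026E lines above also suggest a second logging framework initializing in the same VM, which can silently reconfigure log4j; that is worth ruling out.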

    Read the article

  • How fast is the network between EC2 nodes?

    - by tesmar
    Hi, I am looking to set up Amazon EC2 nodes running Rails with Riak. I want to be able to sync the Riak DBs, and when the cluster gets a query, to tell where the data lies and retrieve it quickly. In your opinion(s), is the network between EC2 nodes fast enough to query a Riak DB, return the results, and get them back to the client in a timely manner? I am new to all of this, so please be kind :)

    Read the article

  • How to learn Hadoop without a cluster

    - by ajay
    Hi, I want to learn Hadoop, but I don't have access to a cluster right now. Is it still possible to learn it properly and use it to write programs? Would it be helpful to run multiple Linux VMs and use them as the boxes that run Hadoop, or is that a stretch? And is running on multiple hosts essentially the same as running on a single host (in terms of setup, the Hadoop API used, the architecture of the map-reduce programs, etc.)? Thanks,
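
    For reference, Hadoop's pseudo-distributed mode runs all the daemons on one machine and exercises the same API and job structure as a real cluster, so a single box is enough to learn on. A minimal configuration sketch (standard property names from the Hadoop 0.20/1.x line; host and ports are the conventional defaults):

        <!-- conf/core-site.xml -->
        <configuration>
          <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:9000</value>
          </property>
        </configuration>

        <!-- conf/hdfs-site.xml -->
        <configuration>
          <property>
            <name>dfs.replication</name>
            <value>1</value>
          </property>
        </configuration>

        <!-- conf/mapred-site.xml -->
        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>localhost:9001</value>
          </property>
        </configuration>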

    Read the article

  • System call timeout?

    - by Arnold
    Hi, I'm using Unix system() calls to gunzip and gzip files. With very large files, these sometimes get aborted (i.e., on the cluster compute nodes), while at other times (i.e., on the login nodes) they go through. Is there some soft limit on the time a system call may take? What else could it be?
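
    system() itself imposes no timeout, but batch clusters commonly enforce per-process resource limits on compute nodes. A quick way to check from the shell (a sketch; the exit-code arithmetic is the usual convention of 128 plus the signal number):

        # Compare limits on a login node vs. a compute node:
        ulimit -t    # CPU seconds allowed per process ("unlimited" or a number)
        ulimit -a    # all limits

        # After a failed run, inspect the exit status:
        gunzip huge_file.gz
        echo $?      # 152 = 128 + SIGXCPU (24): killed by the CPU-time limit
                     # 137 = 128 + SIGKILL (9): killed outright, e.g. by the scheduler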

    Read the article

  • What alternatives do I have if I want a distributed multi-master database?

    - by Jonas
    I'm going to build a system where I want to reduce single points of failure, and I need a database. Are there any free relational database systems that handle multi-master setups well (i.e., where it is easy to add and remove nodes), or is it better to go with a NoSQL database? From what I have understood, a key-value store handles this better. What database system do you recommend for a multi-master (cluster) setup?

    Read the article

  • Table clusters in SQL Server

    - by Bruno Martinez
    In Oracle, a table cluster is a group of tables that share common columns and store related data in the same blocks. When tables are clustered, a single data block can contain rows from multiple tables. For example, a block can store rows from both the employees and departments tables rather than from only a single table (see http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/tablecls.htm#i25478). Can this be done in SQL Server?
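
    As far as I know, SQL Server has no multi-table cluster; the closest built-in feature is the clustered index, which stores one table's rows in index order. A sketch with hypothetical table and column names:

        -- Stores the employees rows physically ordered by department_id,
        -- but the structure still covers only a single table.
        CREATE CLUSTERED INDEX IX_employees_department
            ON employees (department_id);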

    Read the article

  • Multicore programming: what's necessary to do it?

    - by Casey
    I have a quad-core processor and I would really like to take advantage of all those cores when I'm running quick simulations. The problem is that I'm only familiar with the small Linux cluster we have in the lab, and I'm using Vista at home. What sort of things should I look into for multicore programming with C or Java? What is the lingo I want to google? Thanks for the help.
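
    For reference, the terms to google are "pthreads" or "OpenMP" for C, and "threads" or java.util.concurrent for Java. A minimal Java sketch that spreads independent work across all available cores:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class MultiCoreDemo {
            public static void main(String[] args) throws InterruptedException {
                // One worker thread per core reported by the JVM.
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                for (int i = 0; i < cores; i++) {
                    final int slice = i;
                    pool.submit(() -> System.out.println("simulation slice " + slice));
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
            }
        }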

    Read the article

  • Using qsub (SGE) with multi-threaded applications

    - by dan12345
    I want to submit a multi-threaded job to the cluster network I'm working with, but the qsub man page is not clear on how this is done. By default, I guess it just submits it as a normal job regardless of the multi-threading, but that might cause problems, i.e. sending many multi-threaded jobs to the same computer and slowing things down. Does anyone know how to accomplish this? Thanks. The batch server system is SGE.
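
    In SGE the usual route is to request several slots on one host through a parallel environment, so the scheduler accounts for all the threads. A job-script sketch (the PE name "smp" is site-specific and must already be configured by the cluster admin):

        #!/bin/sh
        #$ -N threaded_job
        #$ -pe smp 4      # reserve 4 slots on a single node; PE name varies by site
        #$ -cwd
        export OMP_NUM_THREADS=4
        ./my_multithreaded_program

        # submitted as:  qsub job.sh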

    Read the article

  • Running OpenMPI on Windows XP

    - by iamweird
    Hi there. I'm trying to build a simple cluster based on Windows XP. I compiled OpenMPI-1.4.2 successfully, and tools like mpicc and ompi_info work too, but I can't get mpirun working properly. The only output I can see is:

        Z:\orterun --hostfile z:\hosts.txt -np 2 hostname
        [host0:04728] Failed to initialize COM library. Error code = -2147417850
        [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2\orte\mca\ess\hnp\ess_hnp_module.c at line 218
        --------------------------------------------------------------------------
        It looks like orte_init failed for some reason; your parallel process is
        likely to abort. There are many reasons that a parallel process can fail
        during orte_init; some of which are due to configuration or environment
        problems. This failure appears to be an internal failure; here's some
        additional information (which may only be relevant to an Open MPI
        developer):

          orte_plm_init failed
          --> Returned value Error (-1) instead of ORTE_SUCCESS
        --------------------------------------------------------------------------
        [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2\orte\runtime\orte_init.c at line 132
        --------------------------------------------------------------------------
        It looks like orte_init failed for some reason; your parallel process is
        likely to abort. There are many reasons that a parallel process can fail
        during orte_init; some of which are due to configuration or environment
        problems. This failure appears to be an internal failure; here's some
        additional information (which may only be relevant to an Open MPI
        developer):

          orte_ess_set_name failed
          --> Returned value Error (-1) instead of ORTE_SUCCESS
        --------------------------------------------------------------------------
        [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\..\..\openmpi-1.4.2\orte\tools\orterun\orterun.c at line 543

    where z:\hosts.txt is as follows:

        host0
        host1

    Z: is a shared network drive available to both host0 and host1. What is my problem and how do I fix it?

    Upd: OK, this problem seems to be fixed. It seems to me that the WideCap driver and/or its software components caused this error to appear; a "clean" machine runs a local task successfully. Anyway, I still cannot run a task across at least 2 machines. I'm getting the following message:

        Z:\mpirun --hostfile z:\hosts.txt -np 2 hostname
        connecting to host1
        username: cluster
        password: ********
        Save Credential?(Y/N) y
        [host0:04728] This feature hasn't been implemented yet.
        [host0:04728] Could not connect to namespace cimv2 on node host1. Error code = -2147024891
        --------------------------------------------------------------------------
        mpirun was unable to start the specified application as it encountered an
        error. More information may be available above.
        --------------------------------------------------------------------------

    I googled a little and did all the things described here: http://www.open-mpi.org/community/lists/users/2010/03/12355.php but I'm still getting the same error. Can anyone help me?

    Upd2: Error code -2147024891 decodes to 0x80070005 (E_ACCESSDENIED, "Access is denied"), not WBEM_E_INVALID_PARAMETER (0x80041008), so the WMI connection to the cimv2 namespace on host1 appears to be refused for permission reasons. Does this mean the problem is in my DCOM/WMI configuration rather than in the OpenMPI source code itself? Or maybe it's because of wrong/outdated wincred.h and credui.lib I used while building OpenMPI from the source code?

    Read the article

  • Windows Azure: Parallelization of the code

    - by veda
    I have some matrix multiplication operations that I want to parallelize across multiple processors. On a high-performance computing cluster this can be done with MPI (Message Passing Interface). Likewise, can I do some parallelization in the cloud using multiple worker roles? Is there any means of doing that?

    Read the article

  • Ushahidi - How to make the markers stay on the map on zoom change?

    - by Guttemberg
    I am using the Ushahidi Web-2.7.3 platform (see http://ti5.net.br/provedorlegal.com.br), and when I zoom in beyond a certain level, the clustered markers disappear from the map. I also tested this on an older version of a site (see http://movimentofichalimpa.org/mapa), where the clustered markers do not disappear on zooming in but just become ungrouped, as is normal with a cluster strategy. How can I make the markers remain on the map when zooming in?

    Read the article

  • Cassandra replication - how does it work?

    - by inquisitor
    Does Cassandra replicate only on writes (with the chosen consistency level)? Is there any auto-replicate option for absent nodes, if I want symmetric data on every node? If I plug a new node into the cluster, there is no automatic replication - how do I sync the data from the other nodes to the new one? If I want replication like MySQL's multi-master (2 nodes) with a slave backup (1 node), what is the proper way to set up and manage that on Cassandra (3 nodes)? And what about two nodes?

    Read the article

  • Nutch search always returns 0 results

    - by darbour
    I have set up Nutch 1.0 on a cluster. It has crawled successfully; I copied the crawl directory using dfs -copyToLocal and set the value of searcher.dir in the nutch-site.xml file located in the Tomcat directory to point to that directory. Still, when I try to search I get 0 results. Any help would be greatly appreciated.
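
    For reference, searcher.dir must point at the directory that directly contains the crawl's index/ and segments/ subdirectories; a common failure mode is pointing one level too high or too low. A sketch of the nutch-site.xml entry (the path is illustrative):

        <property>
          <name>searcher.dir</name>
          <value>/local/path/to/crawl</value>
        </property>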

    Read the article

  • Drawing a polygon around groups of datapoints in MATLAB

    - by Hossein
    Hi, I have a set of data points, each of which belongs to a certain cluster (group). I need to draw a polygon around each of these clusters. Does anyone know how to do it? PS: It doesn't matter whether or not the actual data points are used to draw the polygon; I just need each cluster to be wrapped in a polygon. Thanks
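
    One standard MATLAB approach is to wrap each cluster in its convex hull. A sketch for a single cluster (x and y are hypothetical vectors holding that cluster's coordinates):

        % x, y: coordinates of the points in one cluster
        k = convhull(x, y);      % indices of the points on the convex hull
        plot(x, y, '.');         % the data points
        hold on;
        plot(x(k), y(k), '-');   % the polygon wrapped around them
        hold off;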

    Read the article

  • Erlang: What's a good way to automatically assign node names?

    - by mwt
    I want to have an EC2-based cluster that can grow and shrink at will. No node will be special in any way, nor do I want them to have to coordinate their names with any other nodes. I don't want to hard-code the names, since I want to use one image and spin instances up as needed. I understand nodes have to have names to communicate, though. What's a good strategy for automatically and dynamically coming up with a name at start-script time?
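
    One common trick is to derive the name from EC2's instance metadata service at boot, so every instance names itself without any coordination. A start-script sketch (the metadata URL is the standard EC2 one; the node and cookie names are placeholders):

        #!/bin/sh
        # Ask EC2 for this instance's private IP and use it as the node name.
        IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
        erl -name "worker@${IP}" -setcookie mycookie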

    Read the article

  • Switch user in RedHat like in XP [closed]

    - by rd42
    In our cluster of RedHat 4 & 5 machines, if someone locks the computer and walks away, nobody else can use it. Is there a feature in RedHat 5, GNOME, KDE, etc. that would allow switching users at the lock screen, so that more than one person can be logged in? Thanks, rd42
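
    On a GNOME desktop (the RHEL 5 default), user switching is normally provided by GDM: running gdmflexiserver starts a second login screen on a new virtual terminal while the first session stays locked. Whether the lock dialog itself offers a "Switch User" button depends on the GDM version, so treat this as a pointer only:

        # Open a new login screen alongside the locked session:
        gdmflexiserver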

    Read the article

  • Splitting a list based on another list values in Mathematica

    - by Max
    In Mathematica I have a list of point coordinates

        size = 50;
        n = 20;
        points = Table[{RandomInteger[{0, size}], RandomInteger[{0, size}]}, {i, 1, n}];

    and a list of cluster indices these points belong to

        clusterIndices = {1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1};

    What is the easiest way to split the points into two separate lists based on the clusterIndices values?
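
    Pick does this directly: it selects the elements of one list whose corresponding entries in a second list match a pattern. A sketch for the two clusters above:

        cluster1 = Pick[points, clusterIndices, 1];
        cluster2 = Pick[points, clusterIndices, 2];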

    Read the article

  • Changing the block size of a dfs file in Hadoop

    - by Sam
    I found that my map tasks are currently inefficient when parsing one particular set of files (2 TB in total). I'd like to change the block size of those files in the Hadoop DFS from 64 MB to 128 MB. The documentation only shows how to do this for the entire cluster, not for a single set of files. Does anyone know the command that would change the block size when I upload the files (i.e., copy from local to the DFS)? Thanks!
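
    The block size can be passed per upload as a generic -D option, leaving the cluster-wide default untouched. A sketch (dfs.block.size is the classic property name, value in bytes; paths are illustrative):

        # Upload these files with a 128 MB block size (128 * 1024 * 1024 bytes):
        hadoop fs -D dfs.block.size=134217728 -put /local/data /user/hadoop/data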

    Read the article
