Search Results

Search found 1815 results on 73 pages for 'percona xtradb cluster'.

Page 57/73 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Possible to register Selenium RC's with the Hudson Selenium Grid Hub w/o the RC's being slaves in the Hudson cluster?

    - by Rodreegez
    I am trying to get Hudson to run my Ruby-based Selenium tests. I have installed the Selenium Grid plugin, but I don't want the RC's running as slaves in a Hudson cluster, because I don't want to waste the next six years of my life trying to configure each of my projects in various Windows environments. Hudson currently pulls each project from GitHub and builds it just fine. With a regular Selenium Grid setup, I am able to edit the grid_configuration.yml file to represent the various environments I wish to test against, then pass environment variables to the rake task that runs the tests, i.e. which browser/platform to run on and the URL of the application under test -- usually a port on the hub machine running in a specific environment. In this way, the machines the RC's run on don't need to know anything about the source code of my apps; they just need to have selenium-grid installed and be registered with the hub. Is there a way of elegantly emulating this with Hudson?

    Read the article

  • plotting results of hierarchical clustering on top of a matrix of data in Python

    - by user248237
    How can I plot a dendrogram right on top of a matrix of values, reordered appropriately to reflect the clustering, in Python? An example is at the bottom of the following figure: http://www.coriell.org/images/microarray.gif I use scipy.cluster.dendrogram to make my dendrogram and perform hierarchical clustering on a matrix of data. How can I then plot the data as a matrix where the rows have been reordered to reflect a clustering induced by cutting the dendrogram at a particular threshold, and have the dendrogram plotted alongside the matrix? I know how to plot the dendrogram in scipy, but not how to plot the intensity matrix of data with the right scale bar next to it. Any help on this would be greatly appreciated.
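
    A minimal sketch of one way to do this with scipy and matplotlib (the toy data, linkage method, and axes layout are assumptions, not from the question): draw the dendrogram on its own axes, reorder the matrix rows by the dendrogram's 'leaves' list, then hang the colorbar on a third axes.

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.cluster.hierarchy import linkage, dendrogram
        from scipy.spatial.distance import pdist

        X = np.random.rand(50, 10)          # toy data standing in for the real matrix

        fig = plt.figure(figsize=(10, 8))

        # row dendrogram on the left
        ax_dendro = fig.add_axes([0.05, 0.1, 0.15, 0.8])
        Z = linkage(pdist(X), method='average')
        dendro = dendrogram(Z, orientation='left', no_labels=True)
        ax_dendro.set_xticks([])

        # heatmap with rows reordered to match the dendrogram leaves;
        # origin='lower' so row order runs bottom-up like the dendrogram
        ax_matrix = fig.add_axes([0.25, 0.1, 0.55, 0.8])
        idx = dendro['leaves']
        im = ax_matrix.imshow(X[idx, :], aspect='auto',
                              interpolation='nearest', origin='lower')
        ax_matrix.set_yticks([])

        # the scale bar
        ax_cbar = fig.add_axes([0.85, 0.1, 0.03, 0.8])
        fig.colorbar(im, cax=ax_cbar)

        plt.show()

    Cutting at a threshold instead of at a fixed leaf order comes from scipy.cluster.hierarchy.fcluster(Z, t, criterion='distance'), which labels each row with its cluster id.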

    Read the article

  • Custom Icon for Marker Clusterer

    - by Nyxynyx
    I am using the Marker Clusterer library for Google Maps API V3. Now that I have the clusterer working, I want to change the default icon to a custom one. Problem: when I try to set the style property of the marker clusterer, the default icon still appears. Where did I go wrong? JS code:

        // Marker Clusterer
        var styles = {styles: [
            { height: 53, url: "http://localhost/mywebsite/images/template/markers/cluster.png", width: 53 },
            { height: 56, url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m2.png", width: 56 },
            { height: 66, url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m3.png", width: 66 },
            { height: 78, url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m4.png", width: 78 },
            { height: 90, url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m5.png", width: 90 }
        ]};
        var mcOptions = {gridSize: 50, maxZoom: 15, styles: styles[styles]};
        mc = new MarkerClusterer(map, [], mcOptions);
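
    For reference, a hedged sketch of what the styles option expects (per the markerclusterer.js utility-library API; the URLs are the ones from the question): it takes a plain array of style objects. In the code above, styles is an object wrapping that array, and styles[styles] coerces the object to the string "[object Object]" as a property name, so the option evaluates to undefined and the library falls back to its default icons.

        // pass the array of style objects directly
        var clusterStyles = [
            { height: 53, width: 53,
              url: "http://localhost/mywebsite/images/template/markers/cluster.png" },
            { height: 56, width: 56,
              url: "http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m2.png" }
            // ... remaining sizes follow the same pattern
        ];
        var mcOptions = { gridSize: 50, maxZoom: 15, styles: clusterStyles };
        var mc = new MarkerClusterer(map, [], mcOptions);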

    Read the article

  • Apache Cassandra overwhelming bandwidth overhead

    - by tanyehzheng
    While testing Apache Cassandra, I inserted 1000 rows of data and allowed them to propagate to the other machine on the LAN. This is a 2-machine cluster. I monitored the network connection between the two machines. The total data I expected to flow between the two servers should be around 25 MB (including all column names, column values and timestamps). But the actual data sent and received between them was a whopping 362 MB! Does anybody know why there is such an overwhelming overhead? Thank you

    Read the article

  • Copy Structure To Another Program

    - by Steven
    Long story, long: I am adding a web interface (ASPX.NET: VB) to a data acquisition system developed with LabVIEW which outputs raw data files. These raw data files are the binary representation of a LabVIEW cluster (essentially a structure). LabVIEW provides functions to instantiate a class or structure or call a method defined in a .NET DLL file. I plan to create a DLL file containing a structure definition and a class with methods to transfer the structure. When the webpage requests data, it would call a LabVIEW executable with a filename parameter. The LabVIEW code would instantiate the structure, populate the structure from the data file, then call the method to transfer the data back to the website. Long story, short: How do you recommend I transfer (copy) an instance of a structure from one .NET program to a VB.NET program? Ideas considered: sockets, temp file, xml file, config file, web services, CSV, some type of serialization, shared memory
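
    Of the ideas listed, serialization to a file or stream is the simplest to sketch. A minimal, hedged VB.NET example using XmlSerializer (the structure's fields are invented placeholders for whatever the LabVIEW cluster actually contains):

        Imports System.IO
        Imports System.Xml.Serialization

        ' placeholder shape -- the real fields would mirror the LabVIEW cluster
        Public Structure AcquisitionRecord
            Public Timestamp As Double
            Public Channels() As Double
        End Structure

        Module TransferDemo
            Sub Main()
                Dim rec As New AcquisitionRecord With {
                    .Timestamp = 1.5,
                    .Channels = New Double() {0.1, 0.2, 0.3}
                }
                Dim ser As New XmlSerializer(GetType(AcquisitionRecord))

                ' writer side: the LabVIEW-invoked executable serializes to a file
                Using w As New StreamWriter("record.xml")
                    ser.Serialize(w, rec)
                End Using

                ' reader side: the website deserializes the same shape
                Using r As New StreamReader("record.xml")
                    Dim copy = CType(ser.Deserialize(r), AcquisitionRecord)
                End Using
            End Sub
        End Module

    The same XmlSerializer pair works over any Stream, so swapping the file for a socket or a web-service payload changes only the transport, not the format.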

    Read the article

  • Why did my flash drive become "read only" and (how) can I fix it?

    - by Bob
    I have a brand new flash drive (one week old) that has become marked as read only by Windows, Kubuntu and a bootable partitioner. Why did this happen? Is it fixable, and if so, how?

    The problem

    Firstly, this drive is new. It's certainly not been used enough to die from normal wear and tear, though I would not discount defective components. The drive itself has somehow become locked in a read-only state. Windows' Disk Management / DiskPart report:

        Generic Flash Disk USB Device
        Disk ID: 33FA33FA
        Type                    : USB
        Status                  : Online
        Path                    : 0
        Target                  : 0
        LUN ID                  : 0
        Location Path           : UNAVAILABLE
        Current Read-only State : Yes
        Read-only               : No
        Boot Disk               : No
        Pagefile Disk           : No
        Hibernation File Disk   : No
        Crashdump Disk          : No
        Clustered Disk          : No

    What really confuses me is "Current Read-only State : Yes" and "Read-only : No".

    Attempted solutions

    So far, I've tried:

    Formatting it in Windows (in Disk Management, the format options are greyed out when right-clicking).

    DiskPart clean (CLEAN - Clear the configuration information, or all information, off the disk.):

        DISKPART> clean

        DiskPart has encountered an error: The media is write protected.
        See the System Event Log for more information.

    There was nothing in the event log.

    Windows command-line format:

        >format G:
        Insert new disk for drive G:
        and press ENTER when ready...
        The type of the file system is FAT32.
        Verifying 7740M
        Cannot format. This volume is write protected.

    Windows chkdsk: see below for details.

    Kubuntu fsck (through VirtualBox USB passthrough): see below for details.

    Acronis True Image to format, to convert to GPT, to destroy and rebuild the MBR, basically anything: failed (could not write to the MBR).

    Details (and a nice story)

    Background

    This was a brand new, generic, 8GB flash drive I wanted to create a multiboot flash drive with. It came formatted as FAT32, though oddly a little larger than most 8 GIGAbyte flash drives I've come across. Approximately 127MB was listed as "used" by Windows; I never discovered why. The end usable space was about what I normally expect from an 8GB drive (approx 7.4 GIBIbytes). I had thrown quite a few Linux distros on, along with a copy of Hiren's. They would all boot perfectly. They were put on with YUMI. When I tried to put the Knoppix DVD on, YUMI added an odd video option to its boot command which caused Knoppix to boot with a black screen on X. ttys 1 through 6 still worked as text-only interfaces. A few days later, I took some time to take that odd video option off, making the boot command match the one that comes with Knoppix. On the attempt to boot, Knoppix reported some form of LZMA corruption.

    Leading up to the current issue

    I was thinking the Knoppix files may have been corrupted somehow, so I tried reloading them. The drive was nearly full (45MB free), so I deleted a generic ISO that also was not booting. That went fine. I then went through YUMI to 'uninstall' Knoppix, i.e. delete files and remove from the menus. The files went first, then the menus were cleared successfully. However, the free space was stuck at about 700MB, same as it was before removing Knoppix. In the old Knoppix folder, there was a 0 byte file named KNOPPIX that could not be deleted. I tried reinserting the drive to delete this file - without safely removing, if that made a difference (hey, first time for everything). Running the standard Windows chkdsk scan without /r or /f reported errors found. Running with /r just got it stuck.
    I decided to give fsck a shot, so I loaded up my Kubuntu VM and attached the drive to it with VirtualBox's USB 2.0 passthrough. I umounted it (/dev/sda1) and ran a fsck. There are differences between the boot sector and its backup; I chose "No action". It told me the FATs differ and asked me to select either the first or second FAT. Whichever I selected, I got a notice of "Free cluster summary wrong". If I chose "Correct", it gave a list of incorrect file names. To try to fix something, at least, I ran it with the -p option. Halfway through fixing the files, the VM froze - I ended its process about ten minutes later. Cause?

    My next attempt was to use YUMI, again, to rebuild the whole drive. I used YUMI's built-in reformat (to FAT32) option and installed a Kubuntu ISO (700MB). The format was successful; however, the extract and copy of Kubuntu (which YUMI uses a 7zip binary for) froze at about 60% done. After waiting for about fifteen minutes (longer than the 3.5GB Knoppix ISO took last time), I pulled the drive out. The drive at this point was already formatted, SYSLINUX already installed, just waiting on the unpacking of an ISO and the modifying of the boot menus. Plugging it back in, it came up as normal - however, any write action would fail. Disk Management reported it as read only. On reconnect, it would come up as normal, but a write operation would cause it to go read only again. After a few attempts, it started coming up as read only on insertion.

    Attempts to fix

    This is when I ran through the attempts listed above, to try and reformat it in case of a faulty format. However, the inability to do so even from a bootable disk indicated something more serious was wrong. chkdsk now reports nothing is wrong, and fsck still reports MBR inconsistencies, but now always chooses the first FAT automatically after telling me the FATs differ. It still gives the same "Free cluster summary wrong" afterwards. I cannot run with -p anymore because it is now marked as read only. It also managed to corrupt my VM's disk somehow on the first attempt (yes, I'm sure I chose sda, which is mapped to a 7.4GB drive - I triple checked). Thank god for snapshots?

    I'm just about out of ideas. To my inexperienced mind it looks like something in the drive's firmware set it to read only "permanently" somehow - is there any way to reset this? I don't particularly care about keeping data, considering I've reformatted it twice. Also, fixes that keep me in Windows are better; it reduces the risk of me accidentally nuking my main hard drive.

    Update 1: I pulled apart the drive out of curiosity. There are no obvious write-protect switches. There is an IC on the other side, ALCOR branded, labelled AU6989HL, if that matters. If there appears to be no way to fix this, I'll probably pull out the (glued down) card and put it in a card reader to check if it's the card or the controller that died.

    Update 2: I've pulled the card off; Windows detects the drive as a card reader now. The contacts on the card don't appear to be used, and there are several rows of holes on the card itself. Putting it into the card reader only detects about 30MB total, RAW. It's probably either the reader incorrectly reporting the card as faulty (as if a real SD card's write protect was switched on) or a bad contact somewhere. If nothing else, I have a spare 8GB Micro SD card now... as soon as I figure out how to format it as 8GB.
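
    One avenue the "Attempted solutions" list above doesn't include is DiskPart's attribute commands, which clear the software read-only flag; a minimal sketch from an elevated prompt (the disk number is an assumption -- check the output of list disk first):

        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> attributes disk
        DISKPART> attributes disk clear readonly

    That said, "attributes disk clear readonly" flips the "Read-only" flag, which is already No here; it cannot touch "Current Read-only State", which is reported by the device itself, so a controller that has latched itself read-only (a common flash failure mode) will ignore it.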

    Read the article

  • Identify cause of hundreds of AJP threads in Tomcat

    - by Rich
    We have two Tomcat 6.0.20 servers fronted by Apache, with communication between the two using AJP. Tomcat in turn consumes web services on a JBoss cluster. This morning, one of the Tomcat machines was using 100% of CPU on 6 of the 8 cores on our machine. We took a heap dump using JConsole, and then tried to connect JVisualVM to get a profile to see what was taking all the CPU, but this caused Tomcat to crash. At least we had the heap dump! I have loaded the heap dump into Eclipse MAT, where I have found that we have 565 instances of java.lang.Thread. Some of these, obviously, are entirely legitimate, but the vast majority are named "ajp-6009-XXX" where XXX is a number. I know my way around Eclipse MAT pretty well, but haven't been able to find an explanation for it. If anyone has some pointers as to why Tomcat may be doing this, or some hints on finding out why using Eclipse MAT, that'd be appreciated!
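
    One hedged thing worth checking in conf/server.xml: Tomcat's AJP connector keeps persistent connections from Apache open, and for AJP the connectionTimeout is effectively infinite by default, so every connection mod_jk/mod_proxy_ajp has ever opened can hold an ajp-6009-XXX thread alive. Capping the pool and reaping idle connections looks roughly like this (the values are assumptions to tune, not recommendations):

        <Connector port="6009" protocol="AJP/1.3"
                   maxThreads="200"
                   connectionTimeout="600000" />

    If the thread count stabilizes after that, the threads were idle keep-alives rather than the cause of the CPU spin, which narrows the investigation to what the remaining busy threads were doing.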

    Read the article

  • Weblogic server 10.0 - Managed Server shutting down

    - by Eqbal
    We have a WebLogic Server 10.0 instance with a cluster containing one managed server. Every Monday at 5am (or a few seconds after), it shuts down on its own. The logs show no errors except the following message: "JVM called WLS shutdown hook. The server will force shutdown now." The JVM is JRockit, running with the -Xnohup option. There is no cron job on the server. I am not sure how to debug this one. The Admin Server keeps running with no issues, and I am able to start the managed server back up with no problems. Any help is greatly appreciated. Thanks.

    Read the article

  • [java] Efficiency of while(true) ServerSocket Listen

    - by Submerged
    I am wondering if a typical while(true) ServerSocket listen loop takes an entire core to wait for and accept a client connection (even when implementing Runnable and using Thread.start()). I am implementing a type of distributed computing cluster, and each computer needs every core it has for computation. A master node needs to communicate with these computers (invoking static methods that modify the algorithm's functioning). The reason I need to use sockets is the cross-platform / cross-language capabilities; in some cases, PHP will be invoking these Java static methods. I used a Java profiler (YourKit) and I can see my running ServerSocket listen thread: it never sleeps and it's always running. Is there a better approach to do what I want? Or will the performance hit be negligible? Please feel free to offer any suggestion if you can think of a better way. (I've tried RMI, but it isn't supported cross-language.) Thanks everyone
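
    For what it's worth, a blocking accept() parks the thread in the kernel and costs essentially zero CPU; profilers often display a thread blocked in a native call as "running", which can make the listener look busier than it is. A minimal sketch of the usual pattern (the port and pool size are assumptions):

        import java.io.IOException;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ControlListener {
            public static void main(String[] args) throws IOException {
                // small handler pool so the computation keeps every core it needs
                ExecutorService handlers = Executors.newFixedThreadPool(2);
                try (ServerSocket server = new ServerSocket(9000)) {
                    while (true) {
                        Socket client = server.accept(); // blocks in the kernel, no busy-wait
                        handlers.submit(() -> handle(client));
                    }
                }
            }

            private static void handle(Socket client) {
                // read the command, invoke the static method, write the reply, close
            }
        }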

    Read the article

  • activemq round robin between queues or topics

    - by forkit
    I'm trying to achieve load balancing between different types of messages. I would not know in advance what the messages coming in might be until they hit the queue. I know I can try resequencing the messages, but I was thinking that if there were a way to have the various consumers round-robin between either queues or topics, this would solve my problem. The main problem I'm trying to solve is that I have many services sending messages to one queue, with many consumers feeding off that one queue, and I do not want one type of service monopolizing the entire worker cluster. Again, I don't know in advance what messages are going to hit the queue. To clearly repeat my question: is there a way to tell the consumers to round-robin between either existing queues or topics? Thank you in advance.

    Read the article

  • ZooKeeper and RabbitMQ/Qpid together - overkill or a good combination?

    - by Chris Sears
    Greetings, I'm evaluating some components for a multi-data center distributed system. We're going to be using message queues (via either RabbitMQ or Qpid) so agents can make asynchronous requests to other agents without worrying about addressing, routing, load balancing or retransmission. In many cases, the agents will be interacting with components that were not designed for highly concurrent access, so locking and cross-agent coordination will be needed to avoid race conditions. Also, we'd like the system to automatically respond to agent or data center failures. With the above use cases in mind, ZooKeeper seemed like it might be a good fit. But I'm wondering if trying to use both ZK and message queuing is overkill. It seems like what Zookeeper does could be accomplished by my own cluster manager using AMQP messaging, but that would be hard to get really right. On the other hand, I've seen some examples where ZooKeeper was used to implement message queuing, but I think RabbitMQ/Qpid are a more natural fit for that. Has anyone out there used a combination like this? Thanks in advance, -Chris

    Read the article

  • NLB and Host Header Value

    - by Hafeez
    Background: We are using MOSS 2007 in a farm configuration: 2 WFEs, 1 indexer and SQL Server. MS NLB is used for load balancing. A host header value, mapped in DNS to the cluster's virtual IP, is used while creating each web application in MOSS, and all of them share port 80.

    Problem: When a client tries to access the web applications that are configured with host header values, both WFEs hang for 5 minutes: they stop responding to ping and the browser shows 'Page not found'. In the application log on the WFEs, this error is registered: "provider: TCP Provider, error: 0 - The semaphore timeout period has expired". Interestingly, the web application with no host header value, hosted on different ports, is working correctly. Any clue to solve this problem will be helpful. Thanks. Hafeez

    Read the article

  • Hadoop on windows server

    - by Luca Martinetti
    Hello, I'm thinking about using Hadoop to process large text files on my existing Windows 2003 servers (about 10 quad-core machines with 16GB of RAM). The questions are:

    Is there any good tutorial on how to configure a Hadoop cluster on Windows? What are the requirements? Java + Cygwin + sshd? Anything else?
    HDFS - does it play nice on Windows?
    I'd like to use Hadoop in streaming mode. Any advice, tools or tricks to develop my own mappers/reducers in C#?
    What do you use for submitting and monitoring the jobs?
    Thanks

    Read the article

  • Out of memory error while using clusterdata in MATLAB

    - by Hossein
    Hi, I am trying to cluster a matrix (size: 20057x2):

        T = clusterdata(X,cutoff);

    but I get this error:

        ??? Error using ==> pdistmex
        Out of memory. Type HELP MEMORY for your options.

        Error in ==> pdist at 211
            Y = pdistmex(X',dist,additionalArg);

        Error in ==> linkage at 139
            Z = linkagemex(Y,method,pdistArg);

        Error in ==> clusterdata at 88
            Z = linkage(X,linkageargs{1},pdistargs);

        Error in ==> kmeansTest at 2
            T = clusterdata(X,1);

    Can someone help me? I have 4GB of RAM, but I think the problem is from somewhere else.
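
    The numbers alone explain the error; the following is plain arithmetic, not a measurement. As the stack trace shows, clusterdata calls pdist, which materializes the condensed vector of all pairwise distances as one contiguous double array:

        n = 20057;                 % rows being clustered
        pairs = n*(n-1)/2;         % 201,131,596 pairwise distances
        bytes = pairs * 8          % ~1.6e9 bytes in one contiguous block

    A single ~1.6 GB allocation will fail on 32-bit MATLAB regardless of the 4GB of physical RAM, so the usual workarounds are clustering a sample of the rows or switching to kmeans, which never forms the pairwise distance matrix.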

    Read the article

  • MySQL Key-Value Pair Viability

    - by Amit
    Hi, I am new to MySQL and I am looking for answers to the following questions:

    a) Can MySQL Community Server be leveraged for a key-value pair type database?
    b) Which MySQL engine is best suited for a key-value pair type database?
    c) Is MySQL Cluster a must for horizontal scaling of a key-value datastore, or can it be achieved using MySQL replication?
    d) Are there any docs or whitepapers on best practices when implementing a KV datastore on MySQL?
    e) Are there any known big implementations other than FriendFeed doing KV pairs on MySQL?

    Would really appreciate some advice from all you MySQL gurus out there! Thanks in advance, Amit
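
    On a) and b), a hedged sketch of the usual shape of a KV table on stock MySQL (the names and sizes are assumptions): InnoDB clusters rows on the primary key, so point lookups on the key touch a single B-tree.

        -- minimal key-value table
        CREATE TABLE kv_store (
            k VARBINARY(255) NOT NULL,
            v MEDIUMBLOB,
            PRIMARY KEY (k)
        ) ENGINE=InnoDB;

        -- upsert and point lookup
        INSERT INTO kv_store (k, v) VALUES ('user:42', '{"name":"amit"}')
            ON DUPLICATE KEY UPDATE v = VALUES(v);
        SELECT v FROM kv_store WHERE k = 'user:42';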

    Read the article

  • Getting clusters of rows close together in time

    - by Mike
    I have a table basically like so:

        ID | ItemID | Start             | End
        ---+--------+-------------------+-------------------
        1  | 234    | 10/20/09 8:34:22  | 10/20/09 8:35:10
        2  | 274    | 10/20/09 8:35:30  | 10/20/09 8:36:27
        3  | 272    | 10/21/09 12:15:00 | 10/21/09 12:17:00
        4  | 112    | 10/21/09 12:20:14 | 10/21/09 12:21:21
        5  | 15     | 10/21/09 12:22:39 | 10/21/09 12:24:15

    There are two "clusters" of entries here, 1-2 and 3-5, separated by a gap in time; specifically, 30 minutes is the gap I'm interested in. What I would like is the first and last rows of each cluster of entries. This is fairly easy to achieve by retrieving all the rows and looping through them in order of start time, but I'd like to have it in SQL if possible. I'm using SQL Server 2008, thanks.
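
    A hedged sketch of the usual gaps-and-islands approach that runs on SQL Server 2008 (the table name dbo.Events is an assumption; [End] needs brackets because END is a keyword): number the rows by Start, flag a row that begins more than 30 minutes after the previous row ends, turn the running count of flags into a cluster id, then take MIN/MAX per cluster.

        WITH ordered AS (
            SELECT ID, ItemID, Start, [End],
                   ROW_NUMBER() OVER (ORDER BY Start) AS rn
            FROM dbo.Events
        ),
        flagged AS (
            SELECT o.*,
                   CASE WHEN p.rn IS NULL
                          OR DATEDIFF(minute, p.[End], o.Start) > 30
                        THEN 1 ELSE 0 END AS starts_cluster
            FROM ordered o
            LEFT JOIN ordered p ON p.rn = o.rn - 1
        ),
        grouped AS (
            SELECT f.*,
                   (SELECT SUM(f2.starts_cluster)
                    FROM flagged f2
                    WHERE f2.rn <= f.rn) AS cluster_id -- running total; 2008 lacks SUM() OVER (ORDER BY)
            FROM flagged f
        )
        SELECT cluster_id,
               MIN(Start) AS cluster_start,
               MAX([End]) AS cluster_end
        FROM grouped
        GROUP BY cluster_id;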

    Read the article

  • Essential skills of a Data Scientist

    - by harshsinghal
    I would like to know more about the relevant skills in the arsenal of a data scientist, and with new technologies coming in every day, how one picks and chooses the essentials. A few ideas germane to this discussion:

    Knowing SQL and the use of a DB such as MySQL or PostgreSQL was great till the advent of NoSQL and non-relational databases; MongoDB, CouchDB etc. are becoming popular for working with web-scale data.
    Knowing a stats tool like R is enough for analysis, but to create applications one may need to add Java, Python, and such others to the list.
    Data now comes in the form of text, URLs and multimedia, to name a few, and there are different paradigms associated with their manipulation.
    What about cluster computing, parallel computing, the cloud, Amazon EC2, Hadoop?
    OLS regression now has artificial neural networks, random forests and other relatively exotic machine learning/data mining algos for company.

    Thoughts?

    Read the article

  • Unable to run OpenMPI across more than two machines

    - by rcollyer
    When attempting to run the first example in the boost::mpi tutorial, I was unable to run across more than two machines. Specifically, this seemed to run fine:

        mpirun -hostfile hostnames -np 4 boost1

    with each hostname in hostnames as <node_name> slots=2 max_slots=2. But, when I increase the number of processes to 5, it just hangs. I have decreased the number of slots/max_slots to 1 with the same result when I exceed 2 machines. On the nodes, this shows up in the job list:

        <user> Ss orted --daemonize -mca ess env -mca orte_ess_jobid 388497408 \
               -mca orte_ess_vpid 2 -mca orte_ess_num_procs 3 -hnp-uri \
               388497408.0;tcp://<node_ip>:48823

    Additionally, when I kill it, I get this message:

        node2 - daemon did not report back when launched
        node3 - daemon did not report back when launched

    The cluster is set up with the mpi and boost libs accessible on an NFS mounted drive. Am I running into a deadlock with NFS? Or, is something else going on?

    Read the article

  • Estimate serialization size of objects?

    - by Stefan K.
    In my thesis, I would like to enhance messaging in a cluster. It's important to log runtime information about how big a message is (should I prefer processing it locally or remotely?). I could only find frameworks that estimate object memory size based on Java instrumentation. I've tested classmexer, which didn't come close to the serialization size, and SourceForge's SizeOf. In a small test case, SizeOf was around 10% wrong and 10x faster than serialization. (Still, transient fields break the estimation completely, and since e.g. ArrayList's backing store is transient but gets serialized as an array, it's not easy to patch SizeOf. But I could live with that.) On the other hand, 10x faster with 10% error doesn't seem very good. Any ideas how I could do better?
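
    If the exact size is ever worth its cost, serialization itself can be the measurement without buffering any bytes: write to a counting stream and throw the output away. A minimal sketch (the class and method names are mine):

        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.io.OutputStream;
        import java.io.Serializable;

        final class SerializedSize {
            // returns the exact number of bytes writeObject would produce
            static long of(Serializable obj) throws IOException {
                final long[] count = {0};
                OutputStream counter = new OutputStream() {
                    @Override public void write(int b) { count[0]++; }
                    @Override public void write(byte[] b, int off, int len) { count[0] += len; }
                };
                try (ObjectOutputStream oos = new ObjectOutputStream(counter)) {
                    oos.writeObject(obj);
                }
                return count[0];
            }
        }

    Sampling this on, say, every hundredth message keeps the cost amortized while the local-vs-remote decision still sees real serialized sizes.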

    Read the article

  • infoWindow on MarkerClusterer

    - by vishwanath
    I need an infoWindow to be opened, instead of zooming in the map, when clicking on a ClusterMarker. I am using the Gmaps utility library MarkerClusterer for creating clusters of markers. I tried changing the following line in markerclusterer.js:

        ClusterMarker_.prototype = new GOverlay();

    to:

        ClusterMarker_.prototype = new GMarker();

    so that I could get the openInfoWindow() function on the cluster marker, but that didn't work out; I got some error. If possible, please suggest a solution so that this can be done with MarkerClusterer, or else any other library that is able to do this. Any help will be appreciated.

    Read the article

  • Does anyone know a better alternative to MS Excel's Solver?

    - by tundal45
    My company has to crunch a lot of data, and part of the process involves running the Solver and plotting a graph through the resulting data points. Obviously there is a lot of copy and paste involved, and the whole process is shaky, error-prone and all-round cluster-fudge. I was wondering if there is an alternative to the Solver that can be used, so that even if we have to use Excel to plot the final graph, there will be a lot less data that needs to be copied and pasted back and forth. It would be great especially if the tool could be easily integrated into a .NET application, but I am open to suggestions that may require a little bit of code-fu to get working. Thanks!

    Read the article

  • Parallelism in Python

    - by fmark
    What are the options for achieving parallelism in Python? I want to perform a bunch of CPU-bound calculations over some very large rasters, and would like to parallelise them. Coming from a C background, I am familiar with three approaches to parallelism:

    1. Message-passing processes, possibly distributed across a cluster, e.g. MPI.
    2. Explicit shared-memory parallelism, either using pthreads or fork(), pipe(), et al.
    3. Implicit shared-memory parallelism, using OpenMP.

    Deciding on an approach is an exercise in trade-offs. In Python, what approaches are available and what are their characteristics? Is there a clusterable MPI clone? What are the preferred ways of achieving shared-memory parallelism? I have heard references to problems with the GIL, as well as references to tasklets. In short, what do I need to know about the different parallelization strategies in Python before choosing between them?
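
    As one concrete data point, the standard library's multiprocessing module covers the message-passing-processes style while sidestepping the GIL, because the workers are separate processes; a minimal sketch (the tile kernel is a stand-in for real raster work):

        import multiprocessing as mp

        def process_tile(tile):
            # stand-in for a CPU-bound raster computation
            return sum(x * x for x in tile)

        if __name__ == "__main__":
            tiles = [range(i, i + 250000) for i in range(0, 1000000, 250000)]
            with mp.Pool() as pool:        # defaults to one worker per core
                results = pool.map(process_tile, tiles)
            print(results)

    Threads, by contrast, will not speed up CPU-bound pure-Python code because of the GIL; for true MPI across a cluster, bindings such as mpi4py exist.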

    Read the article

  • Windows/.NET Load Distribution & Balancing

    - by andrewbadera
    Hi all, is there a vetted Windows-friendly, or even .NET-native, load-distributing/load-balancing utility out there along the lines of HAProxy? We have a .NET stack product, and the one piece where we step out of the stack is load balancing. We need something with configurable rules for distribution -- perhaps subdomain-driven -- that NLB alone doesn't seem to offer. If it integrates directly with .NET, or offers an exposed API callable by web services, so much the better! Thanks in advance!

    Clarification: we need to logically partition over boxes. This is not just a cluster/failover/replication scenario.

    Read the article

  • Hadoop on Amazon EC2 : Job tracker not starting properly

    - by Algorist
    Hi, we are running Hadoop on an Amazon EC2 cluster. We start the master and slaves, attach the EBS volumes, and finally wait for the Hadoop jobtracker, tasktracker etc. to start, with a timeout of 3600 seconds. We are noticing that 50% of the time the job tracker is not able to start before the timeout. The reason is that HDFS is not initialized properly and is still in safe mode, so the job tracker is unable to start. I noticed a few connectivity issues between nodes on EC2 as I tried manually pinging slaves. Did anyone face a similar issue and know how to solve it? Thank you, Bala
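
    For what it's worth, HDFS ships a command that blocks until the namenode leaves safe mode, which can replace a fixed timeout in a startup script; a minimal sketch (where this fits in your launch scripts is an assumption):

        # wait until the namenode has left safe mode, then verify
        hadoop dfsadmin -safemode wait
        hadoop dfsadmin -safemode get

    If the wait itself never returns, the problem is upstream of the job tracker -- typically datanodes that cannot reach the namenode, which would be consistent with the ping failures mentioned above.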

    Read the article

  • Clojure / HBase: How to Import HBaseTestingUtility in v0.94.6.1

    - by David Williams
    In Clojure, if I want to start a test cluster using the HBase testing utility, I have to annotate my dependencies with:

        [org.apache.hbase/hbase "0.92.2" :classifier "tests" :scope "test"]

    First of all, I have no idea what this means. According to Leiningen's sample project.clj:

        ;; Dependencies are listed as [group-id/name version]; in addition
        ;; to keywords supported by Pomegranate, you can use :native-prefix
        ;; to specify a prefix. This prefix is used to extract natives in
        ;; jars that don't adhere to the default "<os>/<arch>/" layout that
        ;; Leiningen expects.

    Question 1: What does that mean?
    Question 2: If I upgrade the version:

        [org.apache.hbase/hbase "0.94.6.1" :classifier "tests" :scope "test"]

    then I receive a ClassNotFoundException:

        Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration

    What's going on here and how do I fix it?
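
    On Question 1: :classifier "tests" is a Maven classifier; it resolves the companion artifact hbase-<version>-tests.jar (which is where HBaseTestingUtility lives) instead of the main jar, and :scope "test" keeps it off the compile classpath. Note that org.apache.hadoop.hbase.HBaseConfiguration, despite the hadoop-ish package name, lives in the main hbase jar, so the ClassNotFoundException suggests the main artifact is missing at runtime. A hedged sketch of a project.clj declaring both (whether this alone resolves the 0.94.6.1 error depends on what else is on the classpath):

        ;; versions other than the hbase one are assumptions
        (defproject hbase-mini-cluster "0.1.0-SNAPSHOT"
          :dependencies [[org.clojure/clojure "1.5.1"]
                         ;; main jar: HBaseConfiguration and friends
                         [org.apache.hbase/hbase "0.94.6.1"]
                         ;; tests jar: HBaseTestingUtility
                         [org.apache.hbase/hbase "0.94.6.1" :classifier "tests" :scope "test"]])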

    Read the article
