Search Results

Search found 4043 results on 162 pages for 'mod cluster'.

Page 98/162 | < Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >

  • Design an application to send messages by marking a circle on the map where you want to send them

    - by jhamb
    This is a question an interviewer asked me: a map of the world is given, and for any country you want to message you just mark a circle on that area, and the message is sent to all the people who fall inside it. Question visual link is: Design this application. The approach that I told him: first, build all the person data (contacts, place information and so on); then, wherever you mark on the map, build a cluster for that country using Hadoop and fire the message to every contact that falls in that cluster. So please help me understand this problem better, and if you have another good approach (back-end and front-end), then please tell me or discuss it here with me. Thanks in advance.
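
    For illustration only, here is a minimal back-end sketch in Python of the radius check such a design needs (the contact fields, coordinates and the 500 km radius are assumptions, not part of the original question):

        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            # Great-circle distance between two points, in kilometres.
            r = 6371.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def contacts_in_circle(contacts, center_lat, center_lon, radius_km):
            # contacts: iterable of dicts like {"phone": ..., "lat": ..., "lon": ...}
            return [c for c in contacts
                    if haversine_km(c["lat"], c["lon"], center_lat, center_lon) <= radius_km]

        all_contacts = [
            {"phone": "+33100000001", "lat": 48.86, "lon": 2.35},
            {"phone": "+14155550100", "lat": 37.77, "lon": -122.42},
        ]
        # Everyone within 500 km of the centre of the circle the user drew.
        print(contacts_in_circle(all_contacts, 48.85, 2.35, 500))

    At scale, this per-contact filter is exactly the piece that would be pushed down into the Hadoop job mentioned above, with each mapper filtering its own shard of contacts.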

    Read the article

  • TileEntitySpecialRenderer only renders from certain angle

    - by Hullu2000
    I'm developing a Minecraft mod with Forge. I've added a tile entity and a custom renderer for it. The problem is: the block is only visible from certain angles. I've compared my code to other people's code and it looks pretty much like theirs. The block is opaque and is set not to be rendered normally, and the renderer is registered normally, so the fault must be in the renderer. Here's the renderer code: public class TERender extends TileEntitySpecialRenderer { public void renderTileEntityAt(TileEntity tileEntity, double d, double d1, double d2, float f) { GL11.glPushMatrix(); GL11.glTranslatef((float)d, (float)d1, (float)d2); HeatConductTileEntity TE = (HeatConductTileEntity)tileEntity; renderBlock(TE, tileEntity.getWorldObj(), tileEntity.xCoord, tileEntity.yCoord, tileEntity.zCoord, mod.EMHeatConductor); GL11.glPopMatrix(); } public void renderBlock(HeatConductTileEntity tl, World world, int i, int j, int k, Block block) { Tessellator tessellator = Tessellator.instance; GL11.glColor3f(1, 1, 1); tessellator.startDrawingQuads(); tessellator.addVertex(0, 0, 0); tessellator.addVertex(1, 0, 0); tessellator.addVertex(1, 1, 0); tessellator.addVertex(0, 1, 0); tessellator.draw(); } }

    Read the article

  • Do I need to manually create indexes for a DBIx::Class belongs_to relationship

    - by Dancrumb
    I'm using the DBIx::Class modules for an ORM approach to an application I have. I'm having some problems with my relationships. I have the following package MySchema::Result::ClusterIP; use strict; use warnings; use base qw/DBIx::Class::Core/; our $VERSION = '1.0'; __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/); __PACKAGE__->table('cluster_ip'); __PACKAGE__->add_columns( # Columns here ); __PACKAGE__->set_primary_key('objkey'); __PACKAGE__->belongs_to( 'configuration' => 'MySchema::Result::Configuration', 'config_key'); __PACKAGE__->belongs_to( 'cluster' => 'MySchema::Result::Cluster', { 'foreign.config_key' => 'self.config_key', 'foreign.id' => 'self.cluster_id' } ); As well as package MySchema::Result::Cluster; use strict; use warnings; use base qw/DBIx::Class::Core/; our $VERSION = '1.0'; __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/); __PACKAGE__->table('cluster'); __PACKAGE__->add_columns( # Columns here ); __PACKAGE__->set_primary_key('objkey'); __PACKAGE__->belongs_to( 'configuration' => 'MySchema::Result::Configuration', 'config_key'); __PACKAGE__->has_many('cluster_ip' => 'MySchema::Result::ClusterIP', { 'foreign.config_key' => 'self.config_key', 'foreign.cluster_id' => 'self.id' }); There are a couple of other modules, but I don't believe that they are relevant. When I attempt to deploy this schema, I get the following error: DBIx::Class::Schema::deploy(): DBI Exception: DBD::mysql::db do failed: Can't create table 'test.cluster_ip' (errno: 150) [ for Statement "CREATE TABLE `cluster_ip` ( `objkey` smallint(5) unsigned NOT NULL auto_increment, `config_key` smallint(5) unsigned NOT NULL, `cluster_id` char(16) NOT NULL, INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`), INDEX `cluster_ip_idx_config_key` (`config_key`), PRIMARY KEY (`objkey`), CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENCES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`config_key`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB"] at test_deploy.pl line 18 (running "CREATE TABLE `cluster_ip` ( `objkey` smallint(5) unsigned NOT NULL auto_increment, `config_key` smallint(5) unsigned NOT NULL, `cluster_id` char(16) NOT NULL, INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`), INDEX `cluster_ip_idx_config_key` (`config_key`), PRIMARY KEY (`objkey`), CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENCES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`config_key`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB") at test_deploy.pl line 18 From what I can tell, MySQL is complaining about the FOREIGN KEY constraint, in particular, the REFERENCE to (config_key, id) in the cluster table. From my reading of the MySQL documentation, this seems like a reasonable complaint, especially in regards to the third bullet point on this doc page. Here's my question. Am I missing something in the DBIx::Class module? I realize that I could explicitly create the necessary index to match up with this foreign key constraint, but that seems to be repetitive work. Is there something I should be doing to make this occur implicitly?

    Read the article

  • Not sure I am using inheritance/polymorphism correctly?

    - by planker1010
    So for this assignment I have to create a Car class (parent) and a CertifiedPreOwned class (child), and the parent class needs a method, checkWarrantyStatus(), to check if the car is still under warranty. That method calls the boolean isCoveredUnderWarranty() to verify whether the car still has a warranty. My issue is that in the CertifiedPreOwned class I have to define isCoveredUnderWarranty() as well, so that it checks against the extended warranty, and then have it be called via checkWarrantyStatus() in the Car class. I hope this makes sense. So to sum it up: the child class should check isCoveredUnderWarranty() with the extended-warranty info, and the parent class's checkWarrantyStatus() should end up calling that version. Here is my code; I have 1 error. public class Car { public int year; public String make; public String model; public int currentMiles; public int warrantyMiles; public int warrantyYears; int currentYear =java.util.Calendar.getInstance().get(java.util.Calendar.YEAR); /** construct car object with specific parameters*/ public Car (int y, String m, String mod, int mi){ this.year = y; this.make = m; this.model = mod; this.currentMiles = mi; } public int getWarrantyMiles() { return warrantyMiles; } public void setWarrantyMiles(int warrantyMiles) { this.warrantyMiles = warrantyMiles; } public int getWarrantyYears() { return warrantyYears; } public void setWarrantyYears(int warrantyYears) { this.warrantyYears = warrantyYears; } public boolean isCoveredUnderWarranty(){ if (currentMiles < warrantyMiles){ if (currentYear < (year+ warrantyYears)) return true; } return false; } public void checkWarrantyStatus(){ if (isCoveredUnderWarranty()){ System.out.println("Your car " + year+ " " + make+ " "+ model+ " With "+ currentMiles +" is still covered under warranty"); } else System.out.println("Your car " + year+ " " + make+ " "+ model+ " With "+ currentMiles +" is out of warranty"); } } public class CertifiedPreOwnCar extends Car{ public CertifiedPreOwnCar(int y, String m, String mod, int mi) { super(mi, m, mod, y); } public int extendedWarrantyYears; public int extendedWarrantyMiles; public int getExtendedWarrantyYears() { return extendedWarrantyYears; } public void setExtendedWarrantyYears(int extendedWarrantyYears) { this.extendedWarrantyYears = extendedWarrantyYears; } public int getExtendedWarrantyMiles() { return extendedWarrantyMiles; } public void setExtendedWarrantyMiles(int extendedWarrantyMiles) { this.extendedWarrantyMiles = extendedWarrantyMiles; } public boolean isCoveredUnderWarranty() { if (currentMiles < extendedWarrantyMiles){ if (currentYear < (year+ extendedWarrantyYears)) return true; } return false; } } public class TestCar { public static void main(String[] args) { Car car1 = new Car(2014, "Honda", "Civic", 255); car1.setWarrantyMiles(60000); car1.setWarrantyYears(5); car1.checkWarrantyStatus(); Car car2 = new Car(2000, "Ferrari", "F355", 8500); car2.setWarrantyMiles(20000); car2.setWarrantyYears(7); car2.checkWarrantyStatus(); CertifiedPreOwnCar car3 = new CertifiedPreOwnCar(2000, "Honda", "Accord", 65000); car3.setWarrantyYears(3); car3.setWarrantyMiles(30000); car3.setExtendedWarrantyMiles(100000); car3.setExtendedWarrantyYears(7); car3.checkWarrantyStatus(); } }
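
    For comparison, here is a minimal sketch of the same pattern in Python (purely illustrative, not the assignment's required Java): the parent's status check calls an overridable method, so a certified pre-owned car is automatically checked against the extended warranty:

        from datetime import date

        class Car:
            def __init__(self, year, make, model, current_miles):
                self.year = year
                self.make = make
                self.model = model
                self.current_miles = current_miles
                self.warranty_years = 0
                self.warranty_miles = 0

            def is_covered_under_warranty(self):
                return (self.current_miles < self.warranty_miles
                        and date.today().year < self.year + self.warranty_years)

            def check_warranty_status(self):
                # Dynamic dispatch: if self is a CertifiedPreOwnedCar, its override runs here.
                status = "still covered" if self.is_covered_under_warranty() else "out of warranty"
                print(f"Your {self.year} {self.make} {self.model} with {self.current_miles} miles is {status}")

        class CertifiedPreOwnedCar(Car):
            def __init__(self, year, make, model, current_miles, ext_years, ext_miles):
                super().__init__(year, make, model, current_miles)
                self.ext_years = ext_years
                self.ext_miles = ext_miles

            def is_covered_under_warranty(self):
                # The extended-warranty rule replaces the base rule.
                return (self.current_miles < self.ext_miles
                        and date.today().year < self.year + self.ext_years)

        CertifiedPreOwnedCar(2000, "Honda", "Accord", 65000, 7, 100000).check_warranty_status()

    The Java equivalent is the same idea: because isCoveredUnderWarranty() is overridden in the subclass, checkWarrantyStatus() in the parent picks up the subclass version at runtime.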

    Read the article

  • Gluster bricks are offline and errors in logs

    - by Roman Newaza
    I have substituted all the IP addresses with hostnames and renamed the configs (IP to hostname) in /var/lib/glusterd using my shell script. After that I restarted the Gluster daemon and the volume. Then I checked that all the peers are connected: root@GlusterNode1a:~# gluster peer status Number of Peers: 3 Hostname: gluster-1b Uuid: 47f469e2-907a-4518-b6a4-f44878761fd2 State: Peer in Cluster (Connected) Hostname: gluster-2b Uuid: dc3a3ff7-9e30-44ac-9d15-00f9dab4d8b9 State: Peer in Cluster (Connected) Hostname: gluster-2a Uuid: 72405811-15a0-456b-86bb-1589058ff89b State: Peer in Cluster (Connected) I can see the mounted volumes' size change on all the nodes when I run the df command, so new data is coming in. But recently I noticed error messages in the app log: copy(/storage/152627/dat): failed to open stream: Structure needs cleaning readfile(/storage/1438227/dat): failed to open stream: Input/output error unlink(/storage/189457/23/dat): No such file or directory Finally, I found out that some bricks are offline: root@GlusterNode1a:~# gluster volume status Status of volume: storage Gluster process Port Online Pid ------------------------------------------------------------------------------ Brick gluster-1a:/storage/1a 24009 Y 1326 Brick gluster-1b:/storage/1b 24009 N N/A Brick gluster-2a:/storage/2a 24009 N N/A Brick gluster-2b:/storage/2b 24009 N N/A Brick gluster-1a:/storage/3a 24011 Y 1332 Brick gluster-1b:/storage/3b 24011 N N/A Brick gluster-2a:/storage/4a 24011 N N/A Brick gluster-2b:/storage/4b 24011 N N/A NFS Server on localhost 38467 Y 24670 Self-heal Daemon on localhost N/A Y 24676 NFS Server on gluster-2b 38467 Y 4339 Self-heal Daemon on gluster-2b N/A Y 4345 NFS Server on gluster-2a 38467 Y 1392 Self-heal Daemon on gluster-2a N/A Y 1402 NFS Server on gluster-1b 38467 Y 2435 Self-heal Daemon on gluster-1b N/A Y 2441 What can I do about that? I need to fix it. Note: CPU and network usage of all four nodes are about the same.

    Read the article

  • Xen guests accessing LUNs

    - by mechcow
    We are using RHEL 5.3 with a CLARiiON SAN attached by FC. Our situation is that we have a number of LUNs presented to the hosts and we want to dynamically present the LUNs to Xen guests. We are not sure what the best-practice approach is for setting this up. The Xen guests will form a cluster together and need the LUNs only for data partitions, i.e. when they are actively running services. So one approach would be to always present all disks to all Xen guests, and then rely upon the cluster software, and the mounting itself, not to mount the disk twice in two locations. This sounds kinda risky and also is not very secure (one cracked guest can see/destroy all the data). Another approach would be to dynamically add and remove the disks from the Xen guests at the dom0 level (using xm block-attach). This could work but sounds slightly complicated; I'm wondering whether Red Hat Cluster Suite supports this in some way or whether there are scripts to do this. Yet another approach would be to have the LUNs endpointed at the Xen guests themselves - I'm not sure whether this is technically possible, since the multipathing has to be done at the host level.
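
    As a rough illustration of the second approach, a small dom0-side wrapper might look like the sketch below (Python; the domain name, LUN path and the exact xm argument order are assumptions to verify against your Xen version, not something taken from the question):

        import subprocess

        def attach_lun(domain, lun_dev, guest_dev="xvdb", mode="w"):
            # Present a dom0-visible LUN to a running guest; assumed argument order is
            # "xm block-attach <domain> <backend-dev> <frontend-dev> <mode>".
            subprocess.check_call(["xm", "block-attach", domain, "phy:" + lun_dev, guest_dev, mode])

        def detach_lun(domain, guest_dev="xvdb"):
            # Remove the device again when the clustered service moves to another node.
            subprocess.check_call(["xm", "block-detach", domain, guest_dev])

        # e.g. attach_lun("guest1", "/dev/mapper/mpath3") before the service starts on guest1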

    Read the article

  • Ubuntu Server mdadm drbd ocfs2 kvm hangs under heavy file reading

    - by Stefano Annese
    I have deployed four Ubuntu 10.04 servers. They are coupled two by two in a cluster scenario. On both sides we have software RAID1 disks, DRBD8 and OCFS2, and on top of it some KVM machines run with qcow2 disks. I followed this: Link. Corosync is just used for DRBD and OCFS; the KVM machines are run "manually". When it works, it is fine: good performance, good I/O, but at some point one of the two clusters started hanging. Then we tried with just one server turned on, and it hangs the same. It seems to happen when a heavy READ occurs in one of the virtual machines, that is, during an rsync backup. When this happens, the virtual machines are not reachable any more and the physical server responds to ping only with a long delay, but no screen and no ssh are available. All we can do is force a shutdown (hold the button) and restart, and when it comes up again the RAID that DRBD relies on is resyncing. We see this every time it hangs. After a couple of weeks of pain on one side, this morning the other cluster hung as well, although it has a different motherboard, RAM and KVM instances. What is similar is the heavy-read rsync scenario and Western Digital RAID Edition disks on both sides. Can anybody give me some input to solve this issue?

    Read the article

  • Reverse and Forward DNS set up correctly but sometimes MapReduce job fails

    - by phodamentals
    Ever since we switched over our cluster to communicate via private interfaces and created a DNS server with correct forward and reverse lookup zones, we get this message before the M/R job runs: ERROR org.apache.hadoop.hbase.mapreduce.TableInputFormatBase - Cannot resolve the host name for /192.168.3.9 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '9.3.168.192.in-addr.arpa' A dig and nslookup both show that the reverse and forward look-ups both get good responses with no errors from within the cluster. Shortly after these messages, the job runs...but every once in awhile we get a NPE: Exception in thread "main" java.lang.NullPointerException INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.net.DNS.reverseDns(DNS.java:93) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.reverseDNS(TableInputFormatBase.java:219) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:184) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1063) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1080) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:992) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945) INFO app.insights.search.SearchIndexUpdater - at java.security.AccessController.doPrivileged(Native Method) INFO app.insights.search.SearchIndexUpdater - at javax.security.auth.Subject.doAs(Subject.java:415) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapreduce.Job.submit(Job.java:566) INFO app.insights.search.SearchIndexUpdater - at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596) INFO app.insights.search.SearchIndexUpdater - at app.insights.search.correlator.comments.CommentCorrelator.main(CommentCorrelator.java:72 Does anyone else who has set-up a CDH Hadoop cluster on a private network w/DNS server get this? CDH 4.3.1 with MR1 2.0.0 and HBase 0.94.6
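
    For what it is worth, here is a quick sketch that mirrors the lookup pair the Hadoop code performs (run it on each node; the IP is the one from the error above):

        import socket

        def check_dns(ip):
            # Reverse lookup: IP -> name (what TableInputFormatBase's reverseDNS does).
            name, _aliases, _addrs = socket.gethostbyaddr(ip)
            # Forward lookup: name -> IP; both directions should agree on every node.
            forward = socket.gethostbyname(name)
            print(ip, "->", name, "->", forward, "OK" if forward == ip else "MISMATCH")

        check_dns("192.168.3.9")

    This only reproduces the lookup; it will not by itself explain why the failure is intermittent.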

    Read the article

  • install latest gcc as a non-privileged user

    - by voth
    I want to compile a program on a cluster (as a non-privileged user) which requires gcc-4.6, but the cluster has only gcc-4.1.2. I don't want to ask the administrator to update gcc, because 1) he is busy and would do it only after several days, and 2) he probably wouldn't update it anyway, since other users may need the older gcc version (gcc is not backward compatible). I tried to compile gcc from source, which seems more difficult than it sounds, since it requires several other packages to be installed (GMP, MPFR, MPC, ...), and when I did it, after several hours I got a message like checking for __gmpz_init in -lgmp... no configure: error: libgmp not found or uses a different ABI (including static vs shared). at which point I got stuck. My question is: what is the easiest way to install the latest version of gcc as a non-privileged user? (something like apt-get install XXXXX, with an option to not install as root, for example) The setup of the cluster is the following: CentOS release 5.4 (Final) Rocks release 5.3 (Rolled Tacos) If there is no option other than compiling from source, do you have any idea how to handle the above error?
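
    If it does come down to building from source, one common way around that libgmp error is to let GCC build GMP/MPFR/MPC in-tree and install everything under $HOME. A rough sketch follows (a Python driver; the version number and paths are assumptions, and older tarballs may lack contrib/download_prerequisites, in which case GMP/MPFR/MPC can be unpacked into the source tree by hand as gmp/, mpfr/ and mpc/):

        import os
        import subprocess

        HOME = os.path.expanduser("~")
        SRC = os.path.join(HOME, "src", "gcc-4.6.4")       # assumed unpack location of the GCC tarball
        BUILD = os.path.join(HOME, "src", "gcc-build")     # build outside the source tree
        PREFIX = os.path.join(HOME, "local", "gcc-4.6")    # install prefix you own, so no root needed

        def run(cmd, cwd):
            print("+", " ".join(cmd))
            subprocess.check_call(cmd, cwd=cwd)

        # Pull GMP/MPFR/MPC into the source tree so GCC builds them itself,
        # sidestepping the "libgmp not found or uses a different ABI" error.
        run(["./contrib/download_prerequisites"], cwd=SRC)

        if not os.path.isdir(BUILD):
            os.makedirs(BUILD)
        run([os.path.join(SRC, "configure"), "--prefix=" + PREFIX, "--disable-multilib"], cwd=BUILD)
        run(["make", "-j4"], cwd=BUILD)
        run(["make", "install"], cwd=BUILD)
        # Afterwards put $HOME/local/gcc-4.6/bin first in PATH (and its lib/lib64 in LD_LIBRARY_PATH).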

    Read the article

  • Upgrade SQLServer 2008 hardware

    - by John
    Forgive me if I'm not able to be totally clear here. It is not intentional, I'm a senior level developer in a very small company having to act like a manager at the moment. Anyway, the story is that we have 2 older dell servers with SQL Server 2008 Standard in a "cluster". I put that in quotes because I'm still not 100% clear what that means. We have 2 brand new blade servers and want to move the existing databases to the new hardware. Ok, so here is the gotcha. We need to do this with little or no down time. I'm being told that we can evict the passive node, then pull in one of the new servers. But I'm also being told that this is a dangerous step because something could go wrong that would cause the cluster to fail and then we would be left with nothing because the active server would not be able to come back up. Does anyone have any thoughts on how to handle this? I'm being told that the only way to ensure success is to have at least a day of down time where we bring up a new cluster on the new hardware and then migrate the databases 1 by 1.

    Read the article

  • SQL Server performance on VSphere 4.0

    - by Charles
    We are having a performance issue with our VMware environment that we cannot explain, and I am hoping someone here may be able to help. We have a web application that uses a database backend. We have a SQL 2005 cluster set up on Windows 2003 R2 between a physical node and a virtual node. Both physical servers are identical 2950's with 2x Xeon X5460 quad-core CPUs and 64GB of memory, 16GB allocated to the OS. We are utilizing an iSCSI SAN for all cluster disks. The problem is this: under repeated stress testing that adds CPUs to the cluster nodes, the physical node scales from 1 pCPU to 8 pCPUs, meaning we see continued performance increases. When testing the node running vSphere, we have the expected 12% performance hit for being virtual and we still scale from 1 vCPU to 4 vCPUs like the physical node, but beyond this performance drops off; by the time we get to 8 vCPUs we are seeing performance numbers worse than at 4 vCPUs. Again, both nodes are configured identically in terms of hardware, guest OS, SQL configuration etc., and there is no traffic other than the testing on the system. There are no other VMs on the virtual server, so there should be no competition for resources. We have contacted VMware for help but they have not really been any help, suggesting things like setting SQL processor affinity which, while helpful, would have the same net effect on each box and should not change our results in the least. We have looked at all of VMware's SQL tuning guides for vSphere with no benefit. Please help!

    Read the article

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    Trying to experiment with Hadoop and Streaming using cloudera distribution CDH3 on Ubuntu. Have valid data in hdfs:// ready for processing. Wrote little streaming mapper in python. When I launch a mapper only job using: hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of "pending" state and nothing else happens. CPU usage stays at zero. (the job is created though) If I give it a single file (instead of the entire directory) as input, same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs using that: same result: permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues maybe: ports are enabled explicitly, case by case, in the cluster security group.
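
    For reference, the mapper itself is not shown in the question; a typical minimal streaming mapper of the kind described (an assumption, not the original code) just reads stdin and writes tab-separated key/value pairs:

        #!/usr/bin/env python
        import sys

        # Minimal Hadoop Streaming mapper: one input line in, key<TAB>value lines out.
        for line in sys.stdin:
            for word in line.strip().split():
                print("%s\t%d" % (word, 1))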

    Read the article

  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this - I have a small CentOS 5 "cluster" (currently 7 machines, but potential for more) which run a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally that I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error. As I understand it, I'll need to accomplish the following tasks: add a non-privileged user to the target system for running the daemon without unnecessary root privileges package the binary files themselves up from the final installation location on a separate build machine (probably under /opt/package for sanity's sake). No source is available. add a firewall hole in order for the end-users to be able to communicate with the "cluster" nodes add a cron task which can start the daemon on @reboot I'm coming up with plenty of good packaging resources so far, but all are based on the traditional method (i.e. if I were the vendor packaging up my source files), rather than re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Anyone have any good resources they can share for achieving this goal? Thanks!

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster. Resource pool summary - Looks almost 4:1 overcommitted. Note the high amount of ballooned RAM. Resource allocation - The Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions. The real-time memory utilization graph of the top VM in the listing above. 4 vCPU and 64GB RAM allocated. It averages under 9GB use. Summary of the same VM What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to: "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem??"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage. Tracking the peaks of "Active" versus time?

    Read the article

  • How to keep multiple servers in sync file-wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application that I'm using on top of the app servers (Magento) allows for admins to modify various files on the system, but now that the site is in a clustered set up modifying a file only modifies it on a single instance (on one of the app servers) of the various machines in the cluster. Is there an open-source application for Linux that may allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from machines to sync. In theory, the perfect application would have small clients that run on each machine to be synced, which would talk to the master server which would then decide how/what to sync from each machine. I have already examined the possibilities of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are dynamically created depending on the load of the site), simply setting up a rsync cron job is not efficient as the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers/ssh connections.
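
    As a sketch of the simplest push model (the hostnames and docroot path below are placeholders; this is the master-driven variant rather than the client/daemon design the question asks for):

        import subprocess

        APP_SERVERS = ["app1.example.com", "app2.example.com"]   # hypothetical hosts
        DOCROOT = "/var/www/magento/"                            # hypothetical path

        def push_to_all(src=DOCROOT, hosts=APP_SERVERS):
            # Push the admin-modified tree from the node that changed to every peer.
            for host in hosts:
                subprocess.check_call([
                    "rsync", "-az", "--delete",
                    src, "{0}:{1}".format(host, src),
                ])

        if __name__ == "__main__":
            push_to_all()

    Tools in the lsyncd/csync2/unison family implement roughly this idea as a daemon, which may be closer to the client/master design described.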

    Read the article

  • Redirect without changing URL

    - by Coobadivin
    Here's the setup. We have a hardware load balancer with an http virtual cluster. Let's call this virtual cluster example1.com. This virtual cluster load balances between two squid reverse proxies which are also on the same physical servers as the web servers. Squid listens on 80 and points to itself as the cache_peer web server which listens on 81. We also have a standalone web server which we will call example2.com. What we are trying to do is create a subdirectory on example1.com called example1.com/example2. This will point to example2.com, but we want our users to stay at example1.com/example2 in their browser. So, it's like a redirect without actually being a redirect. How the hell do I go about doing this? Is this even possible? I'm looking at squid docs in the meantime. example1.com is running a proprietary web server - not Apache :( We can't host example2.com's content in example1.com's file system. These are two very different platforms.

    Read the article

  • Installing Lubuntu 14.04.1 fails, upowerd appears to hang

    - by Rantanplan
    On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Thanks for some hints! On Lubuntu 12.04: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise upowerd appears to hang: Aug 25 10:53:28 lubuntu kernel: [ 367.920272] INFO: task upowerd:3002 blocked for more than 120 seconds. Aug 25 10:53:28 lubuntu kernel: [ 367.920288] Tainted: G S C 3.13.0-32-generic #57-Ubuntu Aug 25 10:53:28 lubuntu kernel: [ 367.920294] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 25 10:53:28 lubuntu kernel: [ 367.920300] upowerd D e21f9da0 0 3002 1 0x00000000 Aug 25 10:53:28 lubuntu kernel: [ 367.920314] e21f9dfc 00000086 f5ef7094 e21f9da0 c1050272 c1a8d540 c1920a00 00000000 Aug 25 10:53:28 lubuntu kernel: [ 367.920333] c1a8d540 c1920a00 d9e44da0 f5ef6540 c1129061 00000002 000001c1 0001c37b Aug 25 10:53:28 lubuntu kernel: [ 367.920351] 00000000 00000002 00000000 e2276240 00000000 00000040 c12b0ec5 c19975a8 Aug 25 10:53:28 lubuntu kernel: [ 367.920368] Call Trace: Aug 25 10:53:28 lubuntu kernel: [ 367.920389] [<c1050272>] ? kmap_atomic_prot+0x42/0x100 Aug 25 10:53:28 lubuntu kernel: [ 367.920404] [<c1129061>] ? get_page_from_freelist+0x2a1/0x600 Aug 25 10:53:28 lubuntu kernel: [ 367.920417] [<c12b0ec5>] ? process_measurement+0x65/0x240 Aug 25 10:53:28 lubuntu kernel: [ 367.920432] [<c1654c73>] schedule_preempt_disabled+0x23/0x60 Aug 25 10:53:28 lubuntu kernel: [ 367.920443] [<c16565bd>] __mutex_lock_slowpath+0x10d/0x171 Aug 25 10:53:28 lubuntu kernel: [ 367.920454] [<c1655aec>] mutex_lock+0x1c/0x28 Aug 25 10:53:28 lubuntu kernel: [ 367.920478] [<f857223a>] acpi_smbus_transaction+0x48/0x210 [sbshc] Aug 25 10:53:28 lubuntu kernel: [ 367.920489] [<c11858e1>] ? do_last+0x1b1/0xf60 Aug 25 10:53:28 lubuntu kernel: [ 367.920504] [<f857242f>] acpi_smbus_read+0x2d/0x33 [sbshc] Aug 25 10:53:28 lubuntu kernel: [ 367.920520] [<f881e0f1>] acpi_battery_get_state+0x74/0x8b [sbs] Aug 25 10:53:28 lubuntu kernel: [ 367.920535] [<f881e8a9>] acpi_sbs_battery_get_property+0x2a/0x233 [sbs] Aug 25 10:53:28 lubuntu kernel: [ 367.920549] [<c14fa61f>] power_supply_show_property+0x3f/0x240 Aug 25 10:53:28 lubuntu kernel: [ 367.920561] [<c114664f>] ? handle_mm_fault+0x64f/0x8d0 Aug 25 10:53:28 lubuntu kernel: [ 367.920573] [<c14fa5e0>] ? power_supply_store_property+0x60/0x60 Aug 25 10:53:28 lubuntu kernel: [ 367.920586] [<c1407d20>] ? dev_uevent_name+0x30/0x30 Aug 25 10:53:28 lubuntu kernel: [ 367.920597] [<c1407d38>] dev_attr_show+0x18/0x40 Aug 25 10:53:28 lubuntu kernel: [ 367.920608] [<c11dad15>] sysfs_seq_show+0xe5/0x1c0 Aug 25 10:53:28 lubuntu kernel: [ 367.920621] [<c119846e>] seq_read+0xce/0x370 Aug 25 10:53:28 lubuntu kernel: [ 367.920633] [<c11983a0>] ? 
seq_hlist_next_percpu+0x90/0x90 Aug 25 10:53:28 lubuntu kernel: [ 367.920644] [<c1179238>] vfs_read+0x78/0x140 Aug 25 10:53:28 lubuntu kernel: [ 367.920654] [<c11799a9>] SyS_read+0x49/0x90 Aug 25 10:53:28 lubuntu kernel: [ 367.920667] [<c165efcd>] sysenter_do_call+0x12/0x28 /var/log/installer/debug shows upower related error: Ubiquity 2.18.8 Gtk-Message: Failed to load module "overlay-scrollbar" Gtk-Message: Failed to load module "overlay-scrollbar" ERROR:dbus.proxies:Introspect error on :1.23:/org/freedesktop/UPower: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. Exception in GTK frontend (invoking crash handler): Traceback (most recent call last): File "/usr/lib/ubiquity/bin/ubiquity", line 636, in <module> main(oem_config) File "/usr/lib/ubiquity/bin/ubiquity", line 622, in main install(query=options.query) File "/usr/lib/ubiquity/bin/ubiquity", line 260, in install wizard = ui.Wizard(distro) File "/usr/lib/ubiquity/ubiquity/frontend/gtk_ui.py", line 290, in __init__ mod.ui = mod.ui_class(mod.controller) File "/usr/lib/ubiquity/plugins/ubi-prepare.py", line 93, in __init__ upower.setup_power_watch(self.prepare_power_source) File "/usr/lib/ubiquity/ubiquity/upower.py", line 21, in setup_power_watch power_state_changed() File "/usr/lib/ubiquity/ubiquity/upower.py", line 18, in power_state_changed not misc.get_prop(upower, UPOWER_PATH, 'OnBattery')) File "/usr/lib/ubiquity/ubiquity/misc.py", line 809, in get_prop return obj.Get(iface, prop, dbus_interface=dbus.PROPERTIES_IFACE) File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 70, in __call__ return self._proxy_method(*args, **keywords) File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 145, in __call__ **keywords) File "/usr/lib/python3/dist-packages/dbus/connection.py", line 651, in call_blocking message, timeout)

    Read the article

  • Glassfish v2 alternatedocroot - will DAS sync it?

    - by ring bearer
    Using Sun Glassfish Enterprise Server v2.1.1, I am using "alternatedocroot" via sun-web.xml for my web application to abstract static content out of the actual deployable code (EAR/WAR). What I have is a cluster of two server instances distributed across two physical hosts - HOST1 and HOST2. "alternatedocroot" points to /data/static-content/ on both HOST1 and HOST2. Would the DAS (Domain Administration Server) take care of syncing /data/static-content between HOST1 and HOST2 if I use the syncinstances=true option while starting up the cluster? Thanks!

    Read the article

  • linear algebra libraries for clusters

    - by Abruzzo Forte e Gentile
    Hi all. I need to develop applications that do linear algebra, eigenvalue computations and linear-equation solving over a cluster of PCs (I have a lot of machines available). I discovered the ScaLAPACK libraries, but they seem to have been developed a long time ago. Do you know of other libraries I should learn for doing math and linear algebra on a cluster? My language is C++ and of course I am a newbie to this topic. Kind regards to everybody AFG

    Read the article

  • How to make multiple instances of RCVR, RQSTR and CLUSRCVR channels in WMQ?

    - by Dr. Xray
    This is a follow-up to the question below, but it deserves its own question. http://stackoverflow.com/questions/1821514/are-server-conn-and-client-conn-channels-the-only-channels-that-could-have-more-t To my understanding, a receiver (or cluster receiver) channel usually pairs up with a single sender (or cluster sender) channel. How can one side be a single instance while the other side is multiple instances? Thanks.

    Read the article

  • MPIexec.exe Access denied

    - by shake
    I have installed Microsoft Compute Cluster and MPI.NET; now I have trouble running a program using mpiexec.exe on the cluster. When I try to run it from the console I get the message "Access Denied", and a pop-up: "mpiexec.exe is not a valid Win32 application". I tried googling it, but found nothing. Please help. :)

    Read the article

  • ViewState MAC problem...

    - by Mitesh
    Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    Read the article

  • Creating a wine static executable?

    - by asdf
    Hi, I have some Windows command-line applications, in binary form (I do not have the source code), which I use frequently. Sometimes I need to run them on Linux machines, and they work perfectly under Wine (wine is not an emulator). The problem I'm facing now is that I need to work on a cluster which does not have Wine installed. I wonder if it is possible, on another similar Linux machine, to create a kind of static executable or so, so I can run these Windows programs on the cluster. Thanks

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >