Search Results

Search found 29093 results on 1164 pages for 'oracle mysql cluster'.

Page 517 of 1164

  • Do I need to manually create indexes for a DBIx::Class belongs_to relationship?

    - by Dancrumb
    I'm using the DBIx::Class modules for an ORM approach to an application I have. I'm having some problems with my relationships. I have the following:

        package MySchema::Result::ClusterIP;
        use strict;
        use warnings;
        use base qw/DBIx::Class::Core/;

        our $VERSION = '1.0';

        __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/);
        __PACKAGE__->table('cluster_ip');
        __PACKAGE__->add_columns(
            # Columns here
        );
        __PACKAGE__->set_primary_key('objkey');
        __PACKAGE__->belongs_to('configuration' => 'MySchema::Result::Configuration', 'config_key');
        __PACKAGE__->belongs_to('cluster' => 'MySchema::Result::Cluster',
            { 'foreign.config_key' => 'self.config_key',
              'foreign.id'         => 'self.cluster_id' });

    As well as:

        package MySchema::Result::Cluster;
        use strict;
        use warnings;
        use base qw/DBIx::Class::Core/;

        our $VERSION = '1.0';

        __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/);
        __PACKAGE__->table('cluster');
        __PACKAGE__->add_columns(
            # Columns here
        );
        __PACKAGE__->set_primary_key('objkey');
        __PACKAGE__->belongs_to('configuration' => 'MySchema::Result::Configuration', 'config_key');
        __PACKAGE__->has_many('cluster_ip' => 'MySchema::Result::ClusterIP',
            { 'foreign.config_key' => 'self.config_key',
              'foreign.cluster_id' => 'self.id' });

    There are a couple of other modules, but I don't believe that they are relevant. When I attempt to deploy this schema, I get the following error:

        DBIx::Class::Schema::deploy(): DBI Exception: DBD::mysql::db do failed:
        Can't create table 'test.cluster_ip' (errno: 150) [for Statement
        "CREATE TABLE `cluster_ip` (
          `objkey` smallint(5) unsigned NOT NULL auto_increment,
          `config_key` smallint(5) unsigned NOT NULL,
          `cluster_id` char(16) NOT NULL,
          INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`),
          INDEX `cluster_ip_idx_config_key` (`config_key`),
          PRIMARY KEY (`objkey`),
          CONSTRAINT `cluster_ip_fk_config_key_cluster_id`
            FOREIGN KEY (`config_key`, `cluster_id`)
            REFERENCES `cluster` (`config_key`, `id`)
            ON DELETE CASCADE ON UPDATE CASCADE,
          CONSTRAINT `cluster_ip_fk_config_key`
            FOREIGN KEY (`config_key`)
            REFERENCES `configuration` (`config_key`)
            ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=InnoDB"] at test_deploy.pl line 18

    From what I can tell, MySQL is complaining about the FOREIGN KEY constraint, in particular the REFERENCES clause pointing at (config_key, id) in the cluster table. From my reading of the MySQL documentation, this seems like a reasonable complaint, especially in regards to the third bullet point on this doc page. Here's my question: am I missing something in the DBIx::Class module? I realize that I could explicitly create the necessary index to match up with this foreign key constraint, but that seems to be repetitive work. Is there something I should be doing to make this occur implicitly?
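
    One workaround, shown here as a sketch rather than the canonical DBIx::Class answer: errno 150 generally means InnoDB cannot find an index on the referenced table whose leftmost columns match the referenced columns, so creating the composite index on `cluster` (config_key, id) by hand before deploying `cluster_ip` should let the constraint through. Table, column, and database names below are taken from the error message above:

        # Manually create the parent-side index InnoDB wants for the
        # composite FK; "test" is the database named in the error.
        mysql -u root -p test <<'SQL'
        ALTER TABLE `cluster`
          ADD INDEX `cluster_idx_config_key_id` (`config_key`, `id`);
        SQL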


  • Sending USR2 to mongrel_rails sometimes results in an “Address already in use” error on the restart

    - by Ben
    We have a rolling-restart mode for our mongrel cluster that sends a USR2 signal to each running process. This works great, most of the time. But very occasionally the mongrel process will shut down, and then fail to restart, with the following error:

        /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize_without_backlog': Address already in use - bind(2) (Errno::EADDRINUSE)
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `new'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `initialize'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `new'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `listener'

    Looking through the mongrel source, the USR2 handler calls a synchronous stop on the running server, so it ought to block until the socket has been released. Has anyone seen this error? Does anyone have any ideas what might cause it? (I asked this question over on StackOverflow initially, but thought it might be more appropriate here.)
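
    A hedged note plus a diagnostic sketch, not a confirmed diagnosis: bind() can fail with EADDRINUSE even after a clean stop if old connections are still draining in TIME_WAIT, or if an orphaned child never released the listener. When a mongrel dies this way, it is worth capturing who, if anyone, still holds the port before restarting it by hand (port 8000 is a placeholder):

        PORT=8000
        lsof -iTCP:"$PORT"                # a PID here means some process still holds the port
        netstat -tan | grep ":$PORT "     # no PID but TIME_WAIT lines = draining sockets

    If it turns out to be pure TIME_WAIT, a listener that sets SO_REUSEADDR before binding typically avoids the failure; mongrel's tcphack.rb is exactly the place where that bind happens.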


  • PBS batch jobs - the qalter command

    - by Ryan Budney
    I've got a giant computation running on a Scientific Linux cluster. At present I have over 600 jobs parked in the queue, waiting for processor time, while a few are running. I'm trying to use the qalter command on some of the idle but scheduled jobs. I'd like to schedule them for a later time, so that other users can jump part of the queue, sort of as an act of politeness. Is this doable?

    For example, job 292399 is currently idle, scheduled to be run whenever a spot in the queue opens up. But if I run

        qalter -a 10051000 292398
        qrerun 292398

    I get "qrerun: Request invalid for state of job 292398.euler". From the qalter documentation, I thought 10051000 refers to tomorrow (Oct 5th, 10am), but perhaps I'm misunderstanding something? If I'm going about this the wrong way, please let me know. The main thing I'm looking for is a command that's easily scriptable, so that I can modify when my queued tasks get run. qalter seems good for those purposes if I can get it working. I'd rather avoid running qdel and re-qsubbing the computations, as there's a bookkeeping issue on which tasks to restart (vs which ones not to), and I want to avoid that kind of bookkeeping. From googling around I notice some qalter commands have rather different date formats, but the above appears to be correct, as far as I can tell from the man docs. Any help would be appreciated.
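
    The date reading looks right: qalter -a takes [[[[CC]YY]MM]DD]hhmm[.SS], so 10051000 parses as Oct 5, 10:00. The qrerun error is likely beside the point, though: qrerun is meant for jobs that are already running, which would explain the invalid-state message on a queued job; a queued job should just pick up the new start time on its own. A scriptable sketch of the polite-deferral loop might then look like this (qselect -s Q selects this user's queued jobs on PBS/Torque):

        # push the earliest-start time of every still-queued job back to
        # Oct 5, 10:00; running jobs are left untouched
        for job in $(qselect -s Q -u "$USER"); do
            qalter -a 10051000 "$job"
        done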


  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (with giant profiles, especially) all log in nearly simultaneously. I ran across Gluster, and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS. My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions, and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing. I got the impression that using Gluster would allow for data to be redundant across multiple servers and would speed up access to the files under heavy load, while allowing us to dynamically boost storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe I'm wrong with my understanding of it though. Anyone else use it or care to share how feasible this is?
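
    For scale, a sketch of the kind of setup being described, distilled to the Gluster CLI; hostnames, brick paths, and the two-way replication factor are all assumptions:

        # from server1: form the trusted pool, then build one replicated
        # volume out of a brick on each box and expose it
        gluster peer probe server2
        gluster volume create profiles replica 2 \
            server1:/export/brick1 server2:/export/brick1
        gluster volume start profiles

    Growing capacity later is the add-brick operation (gluster volume add-brick, with bricks added in replica-sized sets), though whether the POSIX-ACL-to-NTFS mapping survives the CIFS re-export is exactly the open question here.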


  • Built local glibc, broke system, how do I ssh without parsing the .bashrc?

    - by Mikhail
    The cluster I am on had really old build tools and I needed to use CUDA5. I'm a pretty clever dude and I planned on building the necessary tools. So, I built a local copy of gcc, binutils, and glibc, everything CUDA5 could want. All builds finished without error, and I tested gcc and binutils. Everything was wonderful and I built and ran a few of the programs. I set up the LD_LIBRARY_PATHs in the .bashrc and logged back in, expecting a productive night ahead. To my horror I realized that everything is dynamically linked. Now I can't do simple commands like ls:

        [ex@uid377 ~]$ ls
        ls: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument

    and I can't do commands to fix the problem like rm or vim! Is there a way for me to ssh in but ignore the .bashrc file? Any suggestions are much appreciated. This machine is obviously under-maintained and I don't know when I could have administrator support.
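
    A sketch of the usual escape hatch, assuming bash on the remote side: the shell itself starts fine (the poison is exported by .bashrc after bash is already running), so a per-command environment override gets individual binaries working again, and that is enough to comment the export back out:

        # run one command with the override in front; the assignment applies
        # only to that command, after .bashrc has done its damage
        ssh user@cluster 'LD_LIBRARY_PATH= /bin/ls'

        # then neutralize the offending line in place (adjust the pattern to
        # however the variable is actually set in .bashrc)
        ssh user@cluster \
            'LD_LIBRARY_PATH= /bin/sed -i "s/^export LD_LIBRARY_PATH/#&/" ~/.bashrc'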


  • Cannot increase Datastore

    - by k4w4zz
    Hello, we have an ESX 4.0 cluster with 2 hosts and EMC CLARiiON SAN storage with 10 LUNs. We have added 2 new 400 GB LUNs, and all the LUNs are visible from both hosts. I have extended an existing 500 GB datastore with one of these 400 GB LUNs; the new datastore size is now 900 GB. I'd like to do the same operation with the second 400 GB LUN to extend another existing datastore, but I'm not able to do it. The LUN is available to create a brand-new datastore but is not visible to extend an existing one. I don't understand why everything was fine with the other one and why I can't do the same exact operation with this LUN. The result is the same on both hosts. The SAN admin has erased and re-created this LUN several times, and I have rescanned the HBAs each time. Attached you can find the results of the esxcfg-mpath -l and fdisk -l commands on both servers. Does somebody have an idea, please?
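
    A diagnostic sketch from the ESX 4.0 service console, on the hunch (an assumption, not a diagnosis) that something left on the LUN is hiding it from the extend wizard; the vmhba number is a placeholder:

        esxcfg-rescan vmhba1     # rescan after the SAN-side changes
        esxcfg-mpath -l          # multipath view, as in the attachment
        esxcfg-scsidevs -c       # compact device list: does the LUN show at all?
        fdisk -l                 # a leftover partition table or VMFS signature
                                 # here can keep a LUN out of the extend list
                                 # even though it can still host a new datastore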


  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few (5 or so, and growing) boxes. The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now, and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?

    1. Individually install CentOS on all the boxes and manage the configs with chef/cfengine/puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if this is actually the best solution.

    2. Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications, I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot (see the sketch below)?

    I'm leaning towards the PXE solution, but I think monitoring with munin or nagios will be a little more complicated with this. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.
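
    On the boot-time question, a sketch assuming CentOS-style ifcfg files: rewrite the HWADDR lines from the live interfaces early in boot (an rc.sysinit hook, or a small init script that runs before the network service), so one PXE image can serve every box:

        # regenerate HWADDR from the kernel's view of each NIC; the
        # interface list is an assumption, extend it for your IB ports
        for dev in eth0 eth1 ib0; do
            cfg=/etc/sysconfig/network-scripts/ifcfg-$dev
            [ -f "$cfg" ] || continue
            mac=$(cat /sys/class/net/$dev/address)
            sed -i "s/^HWADDR=.*/HWADDR=$mac/" "$cfg"
        done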


  • Determining physical location of data on a disc

    - by Synetech
    Does anybody know of a way to find out where, physically, on a CD or DVD a given piece of data is located? I am trying to watch a DVD at the moment, and am about half-way through, but it keeps dying at a specific spot in the film, presumably because of a scratch. I have a repair kit, but I don’t know where to focus my repair because there are several scuffs and scratches on the disc and I have no way of knowing which one is causing the issue. Obviously, cleaning all of them is inadvisable because not only does it waste the consumable materials in the kit, but not all of them are a problem, and by working them, some may become unreadable.

    Moreover, just because I am half-way through the movie does not mean that it would be half-way from the hub to the edge, for several reasons:

    - Discs hold more data towards the outer edge than the inner edge (circles are more mathematically complicated than rectangles)
    - The disc is not completely filled up (and even if it were, the movie itself wouldn’t be using it all; there are extras and such)
    - Because in this particular case it is a commercial DVD, it is also dual-layer, which further complicates manual determination

    As such, I am trying to find a program that can let me identify a file (or part thereof), cluster, etc. and show me a picture of where on the CD/DVD it would be located. That way, I can look at the disc and fix any scratches that correspond to that distance from the hub. For example, an image might indicate where on a disc a couple of files or a range of clusters would be located, so by looking for anomalies in those areas (rotating as necessary), the correct one can be identified. I’m sure it can be done since at least one form of copy protection (DPM) uses it, and DVD-lab Pro includes a “DVD Topology” feature to do this.
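
    In a pinch the radius can be estimated by hand. A back-of-envelope sketch, assuming roughly constant linear data density and a single layer (the dual-layer caveat above still applies): data consumes disc area in proportion to bytes, so a byte at fraction f of the layer's capacity sits at radius r = sqrt(r0^2 + f*(R^2 - r0^2)), with the DVD program area running from about r0 = 24 mm to R = 58 mm:

        # halfway (f=0.5) through a full single layer lands near 44 mm from
        # the center, noticeably outside the midpoint of 24 and 58
        awk 'BEGIN { r0=24; R=58; f=0.5;
                     printf "%.1f mm\n", sqrt(r0^2 + f*(R^2 - r0^2)) }'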


  • Thoughts about shared storage (NFS, Lustre) [closed]

    - by user134880
    Possible Duplicate: Can you help me with my capacity planning? Right now I have a small cluster with a total of 8 nodes. 6 of them are computing nodes (Apache and VMware) and 2 nodes are for storage. The 2 storage nodes are identical: each storage server is a Linux box with 8 x 1 TB WD RE4 drives in software RAID 10. The 1st box is the master and the 2nd is the slave, and data is mirrored with DRBD. We export NFSv4 shares to Apache (for the document root) and iSCSI to VMware. Right now everything is working pretty well and stable, but it will soon be time to upgrade our system. I have been thinking of Lustre. Does someone have any real experience with Lustre or NFS on medium clusters? Would it be a good idea just to upgrade the servers and change the HDDs to 3 TB ones? With NFS we will always have only 2 servers to maintain (one primary and one slave). Thanks.

    Questions:

    1. Has someone used Lustre? In production? I have seen a lot of info about how hard it is to set up Lustre because you need to compile your own kernel and patches, but those are answers from newbies. Is there someone who has used Lustre for some period of time?

    2. About disk upgrades: this is only a description of strategy. I'm not asking whether 3 TB is enough or not. I just ask if it is right to simply replace the HDDs instead of adding a new server (like with Lustre). Thanks again.


  • nginx tmp file folder running out of disk space

    - by user1179459
    I get a MySQL disk space error, "Can't create/write to file '/tmp/#sql_777_0.MYI' (Errcode: 28)", mainly because my nginx server is writing files into the tmp folder which don't get cleaned up. I added this command to the crontab, as per the instructions in the nginx manual, but it doesn't seem to be doing the trick (I don't understand what it does, either):

        0 */1 * * * /usr/sbin/tmpwatch -am 1 /tmp/nginx_client

    Then I had to run these commands manually:

        cd /tmp/nginx_client
        find -name * | xargs rm

    I need to know what I should do to automate this cleanup. Is there a way to increase the /tmp/ - /var/tmp/ size without reformatting or doing any dangerous things? Can I change the location of the MySQL TMP files?
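
    On the last question, a sketch assuming stock MySQL on Linux: tmpdir is a plain my.cnf setting, so pointing it at a filesystem with headroom sidesteps Errcode 28 entirely (the path below is a placeholder):

        mkdir -p /var/lib/mysql-tmp
        chown mysql:mysql /var/lib/mysql-tmp
        cat >> /etc/my.cnf <<'EOF'
        [mysqld]
        tmpdir = /var/lib/mysql-tmp
        EOF
        /etc/init.d/mysql restart

    As for the cron line: tmpwatch -am 1 should remove files under /tmp/nginx_client that have been neither accessed nor modified in the last hour, so it only helps if the leftover files actually go idle.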


  • Problem with Apache webserver and MySQL after installing cPanel

    - by Santhosh S
    I have a Fedora machine with the Apache webserver and MySQL installed, with some data present in both servers. I installed cPanel on this machine with a test license. After installing cPanel I am not able to start the Apache webserver or log in to MySQL. I am getting the following error when I try to start the webserver:

        Building global cache for cpanel...warn [Cpanel::AdvConfig::apache::modules] Unable to locate executable Apache binary
        warn [Cpanel::AdvConfig::apache::modules] Unable to locate executable Apache binary
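
    A hedged pointer rather than a confirmed fix: cPanel manages its own Apache build and does not drive the distro's httpd, so the usual recovery is to let cPanel build and restart the stack through its own scripts (standard paths on a cPanel install):

        /scripts/easyapache           # rebuild Apache/PHP the cPanel way
        /scripts/restartsrv_httpd     # restart Apache via cPanel's wrapper
        /scripts/restartsrv_mysql     # same for MySQL

    Worth noting that cPanel officially supports only fresh installs on supported distros, so pre-existing Apache/MySQL data generally has to be migrated into the cPanel-managed services rather than reused in place.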


  • Issue with resetting auto increment from default to big number

    - by Sai Srikanth
    I have a MySQL table named INVOICES for an inventory-monitoring site; invoice_number is a bigint(19) AUTO_INCREMENT field. Currently the AUTO_INCREMENT value is 1, but the client wants invoice_number to start from 50000, so I reset the counter with the following statement:

        ALTER TABLE INVOICES AUTO_INCREMENT = 50000;

    When I run an insert script in SQLDBX, it numbers invoice_number from 50000. But when I try to insert a record using the application (a web application), the invoice_number value starts from 1. We are making use of the Spring JDBC template to insert data into the MySQL database.
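
    A diagnostic sketch on a hunch (not a confirmed cause): two clients behaving differently usually means they are not talking to the same schema, so compare the counter in every database the app's datasource could be pointing at, using the credentials from the Spring configuration:

        mysql -u app_user -p -e "
          SELECT table_schema, auto_increment
          FROM information_schema.tables
          WHERE table_name = 'INVOICES';"

    Another thing worth ruling out: an INSERT that supplies an explicit invoice_number value bypasses the counter that SQLDBX exercised.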


  • What applications can be used in a Red Hat/CentOS cluster?

    - by Sandra
    Hi, when I look at the Red Hat cluster manuals 1 2, they only explain how to install it, not what applications can use it. I am new to clusters, so I don't know these things =) Let's say I want a 3-node high-performance cluster; what applications would work with it? Also, how does an application talk to the cluster? Does the application need to have been written to support clusters? Sandra


  • How to set password for phpMyAdmin with default install of WAMP2?

    - by Danjah
    I just installed it fresh. Previously there would be no password at all and I'd be prompted to set one for security purposes; this time I'm just hit with an error:

        Cannot connect: invalid settings. phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server.

    I've changed nothing, but I did install over the top of an older version of WAMP2. That older install did not have a root password. Please don't hit me, but if there's a text file I can update somewhere instead of getting a cmd-line action wrong, I'd really like to be pointed in that direction. I've just tried editing 'my.ini' and uncommented the password line, so the password is blank, but no luck. I don't mind setting a password and such, but right now I just want to get up and running, password or no.
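
    The text file is phpMyAdmin's own config rather than my.ini; a sketch, with the path being a typical WAMP2 layout rather than a guaranteed one:

        # in C:\wamp\apps\phpmyadmin<version>\config.inc.php, make the
        # credentials match MySQL (a fresh WAMP root has no password):
        #   $cfg['Servers'][$i]['user']     = 'root';
        #   $cfg['Servers'][$i]['password'] = '';
        #
        # then, once connected, set a real root password and mirror it there:
        mysql -u root -e "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('newpass');"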


  • Where to start? Server profiling tools

    - by Dave Chenell
    So I have a website: users upload images, people view images, etc. Simple enough, but there are lots of writes and reads everywhere. It's built in MySQL and PHP on Apache, and we have one dedicated server. Right now we are getting hammered with unexpected press and users, and our CPU usage is out of control, slowing down the entire site. Offhand I am guessing it's either the saving of the images to disk or the MySQL queries. What tools can I use to educate myself on what is happening and pinpoint the problem? How can I find out what is happening, in detail, in this time of crisis?
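
    A starter toolbox as a sketch, not a definitive workflow; everything here is standard on a Linux LAMP box, give or take a package install:

        top                    # who owns the CPU: apache children or mysqld?
        vmstat 5               # high "wa" = disk-bound (image writes); high "us" = code/queries
        iostat -x 5            # per-device utilization (sysstat package)
        mysql -u root -p -e "SHOW FULL PROCESSLIST;"   # queries running right now

    If mysqld turns out to be the culprit, enabling the slow query log in my.cnf (with long_query_time = 1) turns the guess into a ranked list of offending queries.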


  • Downloading Database dump from server

    - by Ctroy
    I have a MySQL database on a server that is around 4 GB in size, and I couldn't get it downloaded to my local machine. I tried getting a dump on the server, but the dump is not getting created, probably because of the big size. Is there any way I could get the dump downloaded to my local machine? I can try syncing using SQLyog, but I know it will take ages. Is there a way I can get a dump created on the server? By the way, my server is a Linux server and it's running PHP/MySQL.
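
    A sketch of the usual route (names are placeholders): compress as the dump is written, so 4 GB of SQL never sits uncompressed on the server, or stream it straight home over ssh with no server-side file at all:

        # on the server
        mysqldump -u dbuser -p --single-transaction dbname | gzip > /tmp/dbname.sql.gz

        # or from the local machine, streaming
        ssh user@server "mysqldump -u dbuser -p'secret' --single-transaction dbname | gzip" \
            > dbname.sql.gz

    --single-transaction keeps InnoDB tables consistent without locking everything for the duration; for MyISAM-heavy databases the default table locking is the closer equivalent.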


  • http://localhost:8080 is not working when running the Apache server through XAMPP

    - by Dinesh Kumar
    I am unable to configure a PHP/MySQL/Apache environment on my local machine using XAMPP. In the XAMPP control panel the Apache and MySQL status is "started", but localhost is not working. I have tried the following methods but still cannot get it to work:

    - I reinstalled after uninstalling, several times.
    - I changed "Listen 80" to Listen localhost:80, localhost:8080, 127.0.0.1:80, 127.0.0.1:8080 and MySystemIP:80/8080, and tried to open the relevant URL in my browser.
    - I executed netstat -noa and tasklist, and found Apache is using port 80.
    - Skype is not using port 80.
    - I edited the firewall to allow port 80, and also tried disabling the firewall.

    Is there any other solution to fix my problem? I remember 8 months back everything worked properly; the issue appeared when I tried to reinstall XAMPP. I am using the Windows XP OS. Thanks in advance.
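
    A diagnostic sketch for the Windows XP command prompt: since netstat already shows Apache bound to port 80, the next question is which process answers (or fails to answer) on each port the browser is hitting:

        netstat -ano | findstr :80
        netstat -ano | findstr :8080
        :: 1234 = the owning PID from netstat's last column
        tasklist /FI "PID eq 1234"

    One detail worth checking against the title: if Apache is listening on :80 only, http://localhost:8080 is expected to fail; the URL and the Listen directive have to name the same port.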


  • Memory usage on debian webserver keeps going up

    - by Steven De Groote
    My webserver is running Apache 1.3.x for a PHP application, along with MySQL on the same machine. Most of the time it runs fine, and CPU usage still has a nice margin, but somehow memory usage keeps growing throughout uptime. While it looks like memory is freed in chunks from time to time, I've had moments of my server going down because it's out of memory. Restarting Apache or MySQL only reduced memory usage by 100 MB. Attached is an overview of monthly memory usage; the 2 massive drops are server restarts after out-of-memory situations. http://imageshack.us/photo/my-images/51/memorymonth.png/ Any explanations for this behaviour, or how I could solve it? Thanks! Steven
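
    Before hunting a leak, a sketch of the standard first checks on Linux (the page cache makes "used" memory look scarier than it is, and Apache 1.3's prefork children are the usual real culprit):

        free -m                        # the -/+ buffers/cache row is what
                                       # applications actually hold
        ps aux --sort=-rss | head -15  # rank resident-memory consumers; watch
                                       # whether apache children keep growing

    If the children do grow, MaxRequestsPerChild in httpd.conf is the knob to check: a value of 0 never recycles workers, so a slow PHP leak accumulates until the OOM killer steps in.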


  • Change cPanel MySQL User maximum character value

    - by David
    We are moving our websites to a new server, and in the process, have found that our current version of cPanel 11.24 doesn't have a maximum number of characters for a MySQL Database Username, like cPanel 11.25 does (it's set to 7). However, we have MySQL users whose names are longer than 7 characters. 1) Will something disastrous happen if we upgrade cPanel on our current server? 2) How can I change this setting in the new version of cPanel / WHM?
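
    Context worth checking before the move, offered as background rather than a cPanel-version answer: MySQL itself caps user names at 16 characters in this generation, and cPanel builds DB users as "account_suffix", which is where a 7-character budget for the suffix comes from. Listing the real lengths on the current server shows how much headroom the migration has:

        mysql -u root -p -e \
          "SELECT user, LENGTH(user) AS len FROM mysql.user ORDER BY len DESC;"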


  • Raspberry Pi: move home & var folders to USB HDD?

    - by doni49
    I have a Raspberry Pi running Raspbian, and I've installed Apache2, PHP & MySQL. Apache & MySQL are both configured to use sub-directories of /var. I'd like to use a directory on my USB HDD instead of my SD card, and I'd also like to move the /home directory to the USB HDD. I'd like to avoid re-partitioning my HDD if possible. I thought maybe I could use a symlink to tell Raspbian that /var is really at /media/USBHDD1/var and /home is really at /media/USBHDD1/home. I tried it last night but couldn't get it to work.
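
    A sketch of the bind-mount alternative, which tends to behave better than symlinks for system directories like /var (daemons and package tools never see a link); paths assume the HDD is already mounted at /media/USBHDD1:

        sudo service apache2 stop && sudo service mysql stop
        sudo cp -a /home /media/USBHDD1/home
        sudo cp -a /var  /media/USBHDD1/var
        # then add to /etc/fstab:
        #   /media/USBHDD1/home  /home  none  bind  0  0
        #   /media/USBHDD1/var   /var   none  bind  0  0
        sudo mount /home && sudo mount /var
        sudo service mysql start && sudo service apache2 start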


  • Automatically reconnect to ODBC sources?

    - by stefan.at.wpf
    I am using Asterisk 1.8.10.1 and a MySQL database connected via ODBC to store CDRs. When my MySQL database isn't available when Asterisk starts, or has an outage while Asterisk is running, I would expect Asterisk to retry the connection to the database, but this doesn't happen! Does anyone know where I can enable some kind of automatic reconnect to databases in Asterisk? My res_odbc.conf looks like this:

        [asterisk]
        enabled => yes
        dsn => asterisk-connector
        username => user
        password => pass
        pre-connect => yes
        pooling => no
        limit => 1
        idlecheck => 1
        negative_connection_cache => 1
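
    Two quick probes, as a sketch rather than a guaranteed fix: check from the Asterisk CLI whether res_odbc thinks the class is up after an outage, and exercise the DSN with unixODBC directly to separate driver problems from Asterisk problems:

        asterisk -rx "odbc show"              # connection state as Asterisk sees it
        isql asterisk-connector user pass -v  # the raw unixODBC path

    One res_odbc detail that may matter here: negative_connection_cache is the number of seconds a failed connection attempt is remembered before a reconnect is tried, so a larger value can make Asterisk look like it never retries at all.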


  • How to recover a server from a tar file

    - by Mitch
    In Moodle, the LMS, you can export courses as a tar.gz, and someone said they were going to give me such a thing. I was surprised by the 6 GB size. I was even more surprised when I extracted it and found the root directory to be the root of the server. The person giving me the course, instead of exporting, must have just tarred the entire server! How should I go about recovering this? Is there any way to start this up in a virtual machine? I have a whole Linux server; what to do? I could probably just hand-pick the data files I need, but how do I access a MySQL database without running MySQL? I am so stumped!
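
    A sketch of the lighter-weight route (no VM needed): point a scratch mysqld at the extracted data directory and dump just the course database. Paths are placeholders for wherever the tarball landed, and the database name "moodle" is a guess:

        mysqld_safe --datadir=/extract/var/lib/mysql \
                    --socket=/tmp/recover.sock \
                    --skip-networking --skip-grant-tables &
        mysqldump -S /tmp/recover.sock moodle > moodle.sql

    --skip-grant-tables sidesteps not knowing the old server's MySQL passwords; SHOW DATABASES through the same socket will list what is really in there.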


  • Database modularity with EBS volumes

    - by Eclyps19
    I would like to add modularity to my websites on EC2 instances by encapsulating the site files and the MySQL files in their own EBS volumes. The end result I'm going for is the ability to quickly mount a volume or two on different servers running the same AMI (for testing/development/emergency maintenance, etc.), as well as maintaining separate snapshots of each. I'm able to do this fairly easily with a single database by symlinking my mounted database EBS to the appropriate places (/var/lib/mysql, /etc/my.cnf, /var/log/mysqld.log), but I'm not sure if it would even be possible to have multiple databases on different EBS volumes running concurrently. Example:

        /website1/www.website.com
        /database1/
        /website2/www.otherwebsite.com
        /database2/

    Could anybody shed some light on this for me? Is it possible? Is it a bad idea? Thanks.
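
    It is possible; the stock tool for it is mysqld_multi, one mysqld per volume. A sketch with hypothetical paths and ports matching the example above:

        # /etc/my.cnf -- one group per instance:
        #   [mysqld1]
        #   datadir = /database1/mysql
        #   socket  = /database1/mysql.sock
        #   port    = 3306
        #   [mysqld2]
        #   datadir = /database2/mysql
        #   socket  = /database2/mysql.sock
        #   port    = 3307
        mysqld_multi start 1,2
        mysqld_multi report    # confirm both instances came up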

