Search Results

Search found 1914 results on 77 pages for 'mongrel cluster'.

Page 51/77 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • Adding a column to a model at runtime (without additional tables) in rails

    - by Marek
    I'm trying to give admins of my web application the ability to add some new fields to a model. The model is called Artwork, and I would like to add, for instance, a test_column column at runtime. I'm just testing, so I added a simple link to do it; it will of course be parametric later. I managed to do it through migrations:

        def test_migration_create
          Artwork.add_column :test_column, :integer
          flash[:notice] = "Added Column test_column to artworks"
          redirect_to :action => 'index'
        end

        def test_migration_delete
          Artwork.remove_column :test_column
          flash[:notice] = "Removed column test_column from artworks"
          redirect_to :action => 'index'
        end

    It works: the column gets added to / removed from the database without issues. I'm using active_scaffold at the moment, so I get the test_column field in the form without adding anything. When I submit a create or an update, however, test_column does not get updated and stays empty. Inspecting the parameters, I can see:

        Parameters: {"commit"=>"Update", "authenticity_token"=>"37Bo5pT2jeoXtyY1HgkEdIhglhz8iQL0i3XAx7vu9H4=", "id"=>"62",
          "record"=>{"number"=>"test_artwork", "author"=>"", "title"=>"Opera di Test", "test_column"=>"TEEST", "year"=>"", "description"=>""}}

    The test_column parameter is passed correctly. So why does Active Record keep ignoring it? I tried restarting the server too, without success. I'm using Ruby 1.8.7, Rails 2.3.5, and Mongrel with an SQLite3 database. Thanks
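
    For reference, a minimal sketch of the same runtime-migration pattern written against the connection's schema-statements API. The reset_column_information call is an assumption about a step that may matter here (ActiveRecord caches a model's column list), not something stated in the question:

        # Hypothetical controller actions; assumes a standard ActiveRecord
        # model Artwork backed by an "artworks" table.
        def test_migration_create
          Artwork.connection.add_column :artworks, :test_column, :integer
          Artwork.reset_column_information   # discard the cached column list
          flash[:notice] = "Added column test_column to artworks"
          redirect_to :action => 'index'
        end

        def test_migration_delete
          Artwork.connection.remove_column :artworks, :test_column
          Artwork.reset_column_information
          flash[:notice] = "Removed column test_column from artworks"
          redirect_to :action => 'index'
        end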

    Read the article

  • Why is Apache + Rails spitting out two status headers for code 500?

    - by Daniel Beardsley
    I have a Rails app that is working fine except for one thing. When I request something that doesn't exist (i.e. /not_a_controller_or_file.txt) and Rails throws a "No route matches..." exception, the response is this (blank line intentional):

        HTTP/1.1 200 OK
        Date: Thu, 02 Oct 2008 10:28:02 GMT
        Content-Type: text/html
        Content-Length: 122
        Vary: Accept-Encoding
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive

        Status: 500 Internal Server Error
        Content-Type: text/html
        <html><body><h1>500 Internal Server Error</h1></body></html>

    I have the ExceptionLogger plugin in /vendor, though that doesn't seem to be the problem. I haven't added any error handling beyond the custom 500.html in public (though the response doesn't contain that HTML), and I have no idea where this bit of HTML is coming from. So something, somewhere, is adding that HTTP/1.1 200 status code too early, or the Status: 500 too late. I suspect it's Apache, because I get the appropriate HTTP/1.1 500 header (at the top) when I use WEBrick. My production stack is as follows:

      • Apache 2
      • Mongrel (5 instances)
      • Ruby on Rails 2.1.1 (happens in both 1.2 and 2.1.1)

    I forgot to mention, the error is caused by a "No route matches..." exception.
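
    For reference, a minimal sketch (hypothetical URL) of inspecting the raw status code and headers that actually reach the client, which can help pin down whether the front end or the app server is rewriting them:

        require 'net/http'
        require 'uri'

        # Hypothetical host; substitute a path with no matching route.
        uri = URI.parse('http://example.com/not_a_controller_or_file.txt')
        res = Net::HTTP.get_response(uri)

        puts "Status line: #{res.code} #{res.message}"
        res.each_header { |name, value| puts "#{name}: #{value}" }
        puts res.body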

    Read the article

  • Cannot log in to Proxmox GUI

    - by greg
    I'm running Proxmox on Debian 6.0.8, kernel 2.6.32-18-pve. When I try to log in to the GUI with the root password, it lets me in for about 2 seconds, then asks for the password again:

        <hostname>:8006/api2/json/cluster/resources 401 (permission denied - invalid ticket)

    This is not the same behavior as when I give an incorrect password. I suspect a hostname problem, but I can't sort it out. My /etc/hosts contains the following line:

        <ip> <shorthostname> <longhostname> pvelocalhost

    The folder /etc/pve/nodes contains a folder with the node name. The HTTPS certificate matches the hostname. Any idea? TIA greg

    Read the article

  • SMB shared folder error when creating additional share on our SAN

    - by jherlitz
    Okay, we have a SAN using Failover Cluster Management on a pair of 2008 servers. We have created shares on here before and they are usable. Now when I go to create a new share, I get the following error message:

        Flags for the SMB Shared folder cannot be configured. This shared resource does not exist

    It then does not allow me to create the share. I haven't been able to find any good docs out there to help me through this error. Any advice would be appreciated.

    Read the article

  • Cannot delete unknown (orphaned) vm from ESX host inventory

    - by Eire09
    Hi, I'm having problems deleting an unknown (orphaned) VM from an ESX 3.5 host. When I attempt to right-click the VM, I get the following error: "Object reference not set to an instance of an object". Steps taken so far:

      1. Removed the host from the cluster
      2. Removed the host from vCenter
      3. Rebooted the host
      4. Edited the vmInventory.xml file and cleared it
      5. Restarted services - mgmt-vmware restart

    Can anyone think of anything else that I can do to resolve this issue? Thanks guys.

    Read the article

  • "power limit notification" clobbering on 12G Dell servers with RHEL6

    - by Andrew B
    Server: PowerEdge R620
    OS: RHEL 6.4
    Kernel: 2.6.32-358.18.1.el6.x86_64

    I'm experiencing application alarms in my production environment. Critical CPU-hungry processes are being starved of resources and causing a processing backlog. The problem is happening on all the 12th-generation Dell servers (R620s) in a recently deployed cluster. As near as I can tell, instances of this happening match up with peak CPU utilization, accompanied by massive amounts of "power limit notification" spam in dmesg. An excerpt of one of these events:

        Nov 7 10:15:15 someserver [.crit] CPU12: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU0: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU6: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU14: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU18: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU2: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU4: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU16: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU0: Package power limit notification (total events = 11)
        Nov 7 10:15:15 someserver [.crit] CPU6: Package power limit notification (total events = 13)
        Nov 7 10:15:15 someserver [.crit] CPU14: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU18: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU20: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU8: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU2: Package power limit notification (total events = 12)
        Nov 7 10:15:15 someserver [.crit] CPU10: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU22: Core power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU4: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU16: Package power limit notification (total events = 13)
        Nov 7 10:15:15 someserver [.crit] CPU20: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU8: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU10: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU22: Package power limit notification (total events = 14)
        Nov 7 10:15:15 someserver [.crit] CPU15: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU3: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU1: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU5: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU17: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU13: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU15: Package power limit notification (total events = 375)
        Nov 7 10:15:15 someserver [.crit] CPU3: Package power limit notification (total events = 374)
        Nov 7 10:15:15 someserver [.crit] CPU1: Package power limit notification (total events = 376)
        Nov 7 10:15:15 someserver [.crit] CPU5: Package power limit notification (total events = 376)
        Nov 7 10:15:15 someserver [.crit] CPU7: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU19: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU17: Package power limit notification (total events = 377)
        Nov 7 10:15:15 someserver [.crit] CPU9: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU21: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU23: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU11: Core power limit notification (total events = 369)
        Nov 7 10:15:15 someserver [.crit] CPU13: Package power limit notification (total events = 376)
        Nov 7 10:15:15 someserver [.crit] CPU7: Package power limit notification (total events = 375)
        Nov 7 10:15:15 someserver [.crit] CPU19: Package power limit notification (total events = 375)
        Nov 7 10:15:15 someserver [.crit] CPU9: Package power limit notification (total events = 374)
        Nov 7 10:15:15 someserver [.crit] CPU21: Package power limit notification (total events = 375)
        Nov 7 10:15:15 someserver [.crit] CPU23: Package power limit notification (total events = 374)

    A little Google-fu reveals that this is typically associated with the CPU running hot, or with voltage regulation kicking in. I don't think that's what is happening, though. Temperature sensors for all servers in the cluster are running fine, Power Cap Policy is disabled in the iDRAC, and my System Profile is set to "Performance" on all of these servers:

        # omreport chassis biossetup | grep -A10 'System Profile'
        System Profile Settings
        ------------------------------------------
        System Profile                          : Performance
        CPU Power Management                    : Maximum Performance
        Memory Frequency                        : Maximum Performance
        Turbo Boost                             : Enabled
        C1E                                     : Disabled
        C States                                : Disabled
        Monitor/Mwait                           : Enabled
        Memory Patrol Scrub                     : Standard
        Memory Refresh Rate                     : 1x
        Memory Operating Voltage                : Auto
        Collaborative CPU Performance Control   : Disabled

    A Dell mailing list post describes the symptoms almost perfectly. Dell suggested that the author try using the Performance profile, but that didn't help. He ended up applying some settings in Dell's guide for configuring a server for low-latency environments, and one of those settings (or a combination thereof) seems to have fixed the problem. Kernel.org bug #36182 notes that power-limit interrupt debugging was enabled by default, which causes performance degradation in scenarios where CPU voltage regulation kicks in. A RHN KB article (RHN login required) mentions a problem impacting PE R620 and R720 servers not running the Performance profile, and recommends an update to a kernel released two weeks ago. ...Except we are running the Performance profile... Everything I can find online is running me in circles here. What the heck is going on?
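
    As an aside, a rough sketch (assuming dmesg messages in the format shown above) for tallying how many core and package notifications each CPU has logged, which makes the skew between the two sockets easier to see:

        # Count "power limit notification" lines per CPU from dmesg output.
        # Assumes the message format in the excerpt above.
        counts = Hash.new { |h, k| h[k] = Hash.new(0) }

        `dmesg`.each_line do |line|
          if line =~ /(CPU\d+): (Core|Package) power limit notification/
            counts[$1][$2] += 1
          end
        end

        counts.sort_by { |cpu, _| cpu[/\d+/].to_i }.each do |cpu, kinds|
          puts "#{cpu}: core=#{kinds['Core']} package=#{kinds['Package']}"
        end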

    Read the article

  • SGE: invoking qmake raises "critical error: can't resolve group"

    - by Pierre
    I'm new to SGE and I'm trying to run qmake on our brand-new cluster with the following simple Makefile:

        merge.txt: job1.txt job2.txt job3.txt ...
            cat $^ > $@

        job1.txt:
            sleep 1
            echo "Hello From " $@ > $@
            sleep 1

        job2.txt:
            sleep 2
            echo "Hello From " $@ > $@
            sleep 2

        job3.txt:
            (...)

    The following command raises an error:

        qmake -l arch=lx24-amd64 -cwd -v PATH -- -j 4
        sleep 1
        dynamic mode
        sleep 2
        dynamic mode
        sleep 3
        dynamic mode
        sleep 4
        dynamic mode
        critical error: can't resolve group
        qmake: *** [job3.txt] Error 1
        critical error: can't resolve group
        qmake: *** [job2.txt] Error 1
        critical error: can't resolve group
        qmake: *** [job1.txt] Error 1
        critical error: can't resolve group

    What's wrong?

    Read the article

  • How to enable WMI Provider MSCluster on MS Server 2008 R2

    - by Tobias Hertkorn
    I have successfully set up a failover cluster on Microsoft Server 2008 R2 Enterprise Edition. Now I want to talk to the MSCluster WMI provider on said server. WMI queries to e.g. CIMV2 succeed. But queries like

        select * from MSCluster_ResourceGroup where MSCluster_ResourceGroup.Name="testserver"

    fail with "Access denied". I am using a domain admin account. Do I have to enable the MSCluster WMI provider? What am I missing?
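
    For reference, a hedged sketch (hypothetical host name) of issuing that query from Ruby via win32ole; the packet-privacy authentication level is included as an assumption about what the MSCluster namespace may require, not as a confirmed fix:

        require 'win32ole'

        host = 'testserver'   # hypothetical; use the cluster node's name

        locator = WIN32OLE.new('WbemScripting.SWbemLocator')
        service = locator.ConnectServer(host, 'root\\MSCluster')
        service.Security_.AuthenticationLevel = 6   # 6 = packet privacy (assumption)

        query = 'SELECT * FROM MSCluster_ResourceGroup ' \
                'WHERE Name = "testserver"'
        service.ExecQuery(query).each do |group|
          puts group.Name
        end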

    Read the article

  • Hardware firewall vs VMWare firewall appliance

    - by Luke
    We have a debate going on in our office about whether it's necessary to get a hardware firewall or to set up a virtual one on our VMware cluster. Our environment consists of 3 server nodes (16 cores w/ 64 GB RAM each) connected over 2x 1 Gb switches, with an iSCSI shared storage array. Assuming that we would be dedicating resources to the VMware appliances, would we gain any benefit from choosing a hardware firewall over a virtual one? If we choose a hardware firewall, how would a dedicated firewall server running something like ClearOS compare to a Cisco firewall?

    Read the article

  • How much effort is SQL Server 2008 Administration?

    - by Adrian Grigore
    Hi, I am looking for a suitable hosting environment for an ASP.NET MVC application. One of the options I have is renting a Hyper-V server and installing my license of SQL Server 2008 on it. I'm a bit wary of shared hosting, since the one I have tried so far did not seem to have very consistent performance. One potential problem is that I do not know much about SQL Server administration, so I am not sure if this is a good option. I've been running a failover cluster of two Linux dedicated servers for over 5 years now and MySQL never gave me any trouble. But that was Linux, and it might be different with a Windows system. Is running a halfway efficient MS SQL Server 2008 difficult? Does it require any in-depth administration knowledge? Or perhaps recurring administration effort (such as keeping the server up to date with the latest patches)? Or is it rather an "install and forget" experience similar to MySQL?

    Read the article

  • NFS caching on Ubuntu

    - by stream
    We run a bunch of Ubuntu servers (mostly 8.04 LTS) which all mount an NFS share at /nfs. We use the NFS share primarily for two purposes:

      • symlinking config files (such as apache vhosts)
      • reading & writing uploaded files

    This all works great, except it makes us fully dependent on the central NFS server (which is a DRBD cluster with heartbeat failover from primary to secondary, but we've still seen issues). What we'd like is to mount the NFS share through some local caching layer which would keep any file that had previously been read available even if /nfs isn't. Writes could be disabled for this period. Searching around, it looks like cachefilesd may be an option. Unfortunately, it looks like it's only packaged for Ubuntu 9.10 & 10.04. I was also looking for a FUSE-based solution which might fit the bill, but haven't found anything yet. Any suggestions would be greatly appreciated!

    Read the article

  • What happens when the USB key or SD card I've installed VMware ESXi on fails?

    - by ewwhite
    An SD (SDHC) card installed in an HP ProLiant DL380p Gen8 server running VMware ESXi just failed. I encountered some rather ominous-looking messages on the vCenter console and in the HP ProLiant iLO event log:

        Lost connectivity to the device ... backing the boot filesystem. As a result, host configuration changes will not be saved to persistent storage.

        Embedded Flash/SD-CARD: Error writing media 0, physical block 848880: Stack Exception.

    VMware advocates the use of USB and SD (SDHC) boot devices for ESXi. It was one of the main reasons the smaller-footprint ESXi was developed (versus the older ESX). I've spent much time highlighting the differences between ESXi's installable and embedded modes to coworkers and clients. However, these failures do seem to happen. In this case, this is my third instance. Luckily, this is a vSphere cluster with SAN storage. What steps should be taken to remediate this failure?

    Read the article

  • Backing up Information Store - Recovering to Different Information Store / RSG

    - by Kip
    Hi All, I have a question about a situation that hasn't yet arisen, but I wondered about the possibilities and how we would go about it. Currently we back up our Exchange 2003 cluster with Backup Exec. It is set to back up the Microsoft Information Store on that server and all of the mailbox stores beneath it. We have previously used this in conjunction with a recovery storage group on the same server to recover lost mailboxes. However, due to space constraints on that server (a separate issue that is being addressed in the very near future, but outside the scope of this question) we now don't have enough space on that server to do a recovery-storage-group type restore. Is it possible to restore an information store to a different server in the same administrative group (i.e. First)? By that I mean we have the following:

        Server1 | First Storage Group | Mailbox Store1/2/3

    Could Mailbox Store 1 be restored to:

        Server2 | First Storage Group | Recovery Storage Group

    Both servers are under the same administrative group. Currently, for whatever reason (mainly time), the mailboxes are not being backed up individually. Regards, Kip

    Read the article

  • Connection lost to iSCSI SAN after problem with SAN

    - by marceldejager
    Hi there, We have a cluster running with 3 vSphere ESX4 servers that are all connected to a LeftHand SAN through iSCSI. This all worked well, but a week ago there was a problem with the SAN. After fixing that problem the VMware servers continued working, but this weekend I saw that one server didn't have a connection with the SAN. The other two work perfectly. I have looked at all the settings and cannot find a problem; I can't find any errors anywhere. Does anybody have an idea? Thanks, Marcel

    Read the article

  • NetApp and SQL Server?

    - by Edinor
    Do you have any good or bad experiences to share running SQL Server OLTP Systems on NetApp appliances? I have been working with a small, relatively low-volume cluster with a lower-end NetApp device, and I have found the environment to be generally unstable, at least compared to my experiences with other SANs, iSCSI arrays, and DAS setups. I struggle to believe that RAID DP and WAFL are more than fairy-dust technologies. A solution has been proposed to me that I just need a bigger, better NetApp, with PAM cards and other cool technology I've not heard of, and I feel like I would be better off spending a quarter of that on good direct-attached drives and a beefy server. At the same time, I feel that an Enterprise-class SAN should be something I can count on to be consistently a more stable, better performer than the less expensive solution I might propose. Are you a SQL Server DBA in an OLTP environment and love your NetApp? If you don't like them, why not?

    Read the article

  • NLB 2 Windows Server 2003 Servers

    - by Paul Hinett
    I need to configure Windows NLB on 2 dedicated servers I have. My main machine has been running for some time, with several domain names pointing to the server's primary IP address. Both servers have 2 NICs installed, and both have several secondary public IP addresses available if needed. What IP address would I use for the cluster IP, and does this IP need to be added to both public NICs' IP address lists? What IP addresses do I use for each host's dedicated IP? Please help, this is driving me nuts... I've taken down the server twice by accident today! Thank you in advance!

    Read the article

  • How can I expire non-active sessions on my Netscreen SSG140?

    - by David Mackintosh
    I have a Juniper Netscreen SSG-140. While experimenting with a VoIP service, I defined a custom policy that was to be used to permit the possible ports in use to be sent back to the VoIP server from systems connecting across the internet. Because I'd had problems in the past with VoIP systems getting broken when their UDP sessions were expired out faster than their keep-alives were generated, I set the timeout on this custom service to be 'never'. After much experimentation, I happened to notice that my session count on the firewall has grown from a couple thousand to over 36000. After discussion with the VoIP "expert", I set the timeout to be 30 minutes; however, all the sessions set up during the experimentation process are still there, more than 3 days later. Is there a way I can force these old sessions to get expired and removed from the session table, or am I looking at resetting my firewall? (Both firewalls, actually -- they are in a cluster.)

    Read the article

  • Exchange 2007 CCR: Logs not replicating to passive node partition

    - by yum_tacos4u
    In my environment I have set up Exchange 2007 in a CCR cluster, mirroring our main servers to a set of servers in passive mode. One of the partitions on the passive node that I set up for the Exchange 2007 logs has faulted, leaving the partition unreadable. I have replaced the partition on the passive node and set up the drive to mirror the one in active mode, but the logs have not been replicating since the change. Is there any way to force replication of the logs to the new partition? Any idea why the logs are not replicating? Any help or comments are appreciated, and thanks in advance.

    Read the article

  • Citrix XenServer iSCSI shared disk?

    - by chsguy
    I am running Citrix XenServer Essentials 5.5, with VMs stored on an EqualLogic iSCSI shelf, using XenServer's StorageLink. I would like to create a "virtual disk" that can be attached to multiple VMs. This would be used for a cluster file system like OCFS2 or GFS. This doesn't seem possible using the XenCenter GUI and I can't find anything online about how to do it. I realize I could simply expose the iSCSI network to the VM and have the VM initiate its own iSCSI, but that creates a lot of security challenges. This was pretty easy to do on Oracle VM Server, which is Xen based, so I know it's not a limitation of Xen itself. Maybe there is an "xe-" command for this? Thanks for any suggestions you can provide.

    Read the article

  • Find and free disk space that is unused but unavailable (due to file system error, etc.)

    - by Voyagerfan5761
    Sometimes I get the feeling that if an app such as μTorrent allocates files on my FAT32-formatted flash drive, but then is killed or crashes (as happens more than a few times a month), that space just disappears from my file system. Whether or not that is the case, sometimes I do get a chill from wondering if I've lost hundreds of MB in available storage due to carelessness or malfunctions. Checking my disk with WinDirStat just makes it worse, because I see the huge "<Unknown>" item at the disk root staring at me, eating up well over a gigabyte. It might be FS inefficiency (due to 32 or 64kb sector/cluster size and a lot of tiny files) or it might be a glitch... Is there a tool I can download and run to check my file system and make sure that there aren't any unused allocated blocks on the disk? I want to make sure I'm not losing any disk space to I/O errors, etc.

    Read the article

  • Windows FAT/NTFS Low-Level Disk Viewer (Norton DiskEdit alternative)

    - by Synetech inc.
    Hi, One of my most valuable software tools has always been Norton DiskEdit from Norton Utilities/Symantec SystemWorks. I have used it for years for so many things, including, but not limited to, learning file systems. I am now in search of a similar tool that lets me view FAT and NTFS disks at a low and high level under Windows. I have seen Runtime Software's DiskExplorer, but unfortunately it is limited in a number of ways. Particularly annoying is that it does not really let you view the disk as structured: unlike DiskEdit, it does not let you see the directory entries of a fragmented directory without doing an overly exhaustive scan, which doesn't even work for FAT partitions with cluster sizes under 4 KB. Does anyone know of a Windows alternative to Norton DiskEdit? Thanks a lot.

    Read the article

  • Sun Grid Engine: Automatically Terminating Idle Interactive Jobs

    - by dmcer
    We're considering using Sun Grid Engine on a small compute cluster. Right now, the current setup is pretty crude and just involves having people ssh to an open machine to run their jobs. We'd like to allow interactive jobs, since that should ease the transition from manually starting jobs to starting them using qsub. But there is some concern that, if we do, people might accidentally leave their interactive sessions idle and block other jobs from being run on the machines. The issue isn't just theoretical, since we previously tried using OpenPBS and there was a problem with people opening an interactive job in a screen session and essentially camping on a machine. Is there any way to configure SGE to automatically kill idle interactive jobs? It looks like this was requested as an enhancement (Issue #2447) way back in 2007, but it doesn't seem like the request ever got implemented.
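
    For what it's worth, a hedged stopgap sketch (not an SGE configuration option): a cron job that kills interactive sessions older than a cutoff. It assumes interactive jobs appear in qstat -xml under the usual QLOGIN/QRLOGIN names and that the XML uses the common JB_name/JB_job_number/JAT_start_time fields; it also kills long-running sessions rather than truly idle ones.

        require 'rexml/document'
        require 'time'

        MAX_AGE_HOURS = 12   # assumption: treat sessions older than this as abandoned

        xml = REXML::Document.new(`qstat -u '*' -s r -xml`)

        xml.elements.each('//job_list') do |job|
          name = job.elements['JB_name'].text
          next unless %w[QLOGIN QRLOGIN].include?(name)

          jobid = job.elements['JB_job_number'].text
          start = Time.parse(job.elements['JAT_start_time'].text)

          if Time.now - start > MAX_AGE_HOURS * 3600
            puts "Killing stale interactive job #{jobid} (started #{start})"
            system('qdel', jobid)
          end
        end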

    Read the article

  • Implementing a Linux-HA based clustering setup on Windows

    - by Alex
    I have a (tried and tested) setup involving:

      • 2x load-balancing nodes on a floating IP via Heartbeat, load balancing the 2 Tomcat servers
      • 2x Tomcat servers
      • 2x Galera Cluster MySQL servers replicating synchronously (+1 arbitrator node)

    All are evenly spread across 2 physical nodes. Now I have to somehow get the same functionality on Windows Server (2008, I think) nodes... running under Xen virtualization. There is no possibility of using Linux for any of the nodes. I count two main problems:

      • no Linux-HA Heartbeat daemon for the load balancing
      • no Galera synchronous replication for MySQL

    I freely admit to having nearly no Windows knowledge when it comes to clustering. Is there a way to closely mimic the setup I have described, or is it a total write-off?

    Read the article

  • HP DL580 G5 Hyper-V networking problem

    - by mr-perfect
    Hi, I have installed the Hyper-V role on a DL580 G5 cluster. The host operating system is Windows Server 2008 x64, fully patched. Everything has gone fine, but when I install a guest operating system (also Windows Server 2008 x64), I can't reach the network from it. It sends many packets but doesn't receive any, so I can't ping the external network either. I have installed the latest drivers on the server and uninstalled the network configuration utility from the physical server, but no luck. I added a legacy adapter to the guest bound to the same physical adapter on the host machine, but that didn't help. Any ideas welcome...

    Read the article

  • High Memory Utilization on weblogic

    - by Anup
    My WebLogic app server shows high memory utilization. The application seems to perform well, without any memory issues. Now that my traffic is going to increase, I am worried about the memory and have a feeling things could go bad, so I need to take action now. I am confused as to whether I should increase the JVM memory on the WebLogic instances (which means adding more physical memory) or increase the number of managed instances in the cluster. I would like to understand what having high memory utilization means, and the advantages and disadvantages of adding JVM memory versus adding managed instances. Thanks

    Read the article

< Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >