Search Results

Search found 17852 results on 715 pages for 'load balancer'.

Page 541/715 | < Previous Page | 537 538 539 540 541 542 543 544 545 546 547 548  | Next Page >

  • vyatta Server Reboots by itself

    - by Fernando
    I have an issue regarding some hardware; maybe you can help me. First, I set up a Supermicro SuperServer SYS-5016I-NTF with an Intel Xeon X3470, 4 GB of RAM, and a Hotlava Tambora 64G4 card with the Intel 82599EB chipset and 4x 10G SFP+ ports. I installed Vyatta community edition 6.3 and used it as a router making BGP connections with 2 operators. No load at all, and temperatures are in the normal range. But the issue is that it reboots by itself in a random way. Not very often, once every few days, but that is unacceptable for production purposes. So I tried to test on different hardware and installed Vyatta community edition 6.3 on a Dell PowerEdge 2950 with a Xeon(R) E5345 @ 2.33GHz and 4 GB of RAM, with the same Vyatta configuration as the Supermicro server and the same Hotlava card model (I bought two of them). Well, I get reboots with this equipment as well, at the same frequency as above. I have checked syslog and there are no strange entries until the boot process starts to be logged, so it seems the server reboots suddenly. I have installed the latest driver for the chipset of the Hotlava card. The servers are placed in a datacenter with UPS. So finally, two things are common to both servers: the Hotlava card - has anyone had issues with this card or its chipset? Could it be this card? And Vyatta 6.3 community edition - I don't think that is the problem; it is a regular Debian with packages to glue together different services. Or maybe it is something I am missing. Any ideas or suggestions? Thank you very much... Fernando
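
    A minimal diagnostic sketch follows (assuming the Vyatta shell gives you a Debian userland and that the mcelog/ipmitool packages can be installed; the BMC command only applies if the board has IPMI, which the PowerEdge 2950 does and the Supermicro may, depending on the variant). It helps distinguish a kernel panic or machine-check from a hard power-cycle:

      # Were the reboots clean shutdowns or sudden resets?
      last -x shutdown reboot | head

      # Anything logged just before the reset?
      grep -i -e panic -e "Hardware Error" -e mce /var/log/kern.log* /var/log/messages*

      # Machine-check exceptions (CPU/memory faults) often explain silent resets
      sudo apt-get install mcelog ipmitool   # assumes the Debian repos are reachable
      sudo mcelog

      # The BMC keeps its own event log across reboots
      sudo ipmitool sel list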

    Read the article

  • Implications of disabling the AMD Phenom's TLB patch?

    - by DMA57361
    I'm currently running an AMD Phenom X4 9600 processor (yeah, it's aging a bit, but other recent problems mean it's not getting upgraded in the immediate future), which happens to be one of the chips that suffer from the TLB erratum. I recall that the first time I played with disabling the TLB patch (probably over a year ago, while playing a game that had a severe performance problem, such that it was almost unplayable unless the patch was disabled) I had at least one BSOD, but I can't remember them being particularly frequent. However, because of the decreased stability, I stopped disabling the patch once I was done with the game. Now, after some recent hardware changes, I was experiencing much worse performance than expected from the new hardware under some circumstances, and the TLB patch jumped to mind - after testing I found that disabling the patch would improve performance to the expected levels. I'm now wondering whether it's worthwhile to always have the patch disabled to avoid any potential slowdowns cropping up in the future, or whether that is too dangerous. Everything I read states that the bug, when not patched, can cause a system lock-up in "rare circumstances". So, with the TLB patch disabled: How frequently should system lock-ups be expected? Do we know what circumstances trigger the lock-ups? (Don't worry too much about being highly technical, but essentially I wonder if the chip is more vulnerable under heavy load, or heavy memory usage, etc.) Are there any secondary problems I should be aware of? (Don't include things that are characteristic of all lock-ups, please)

    Read the article

  • Why is my mono/XSP site loading slowly?

    - by acidzombie24
    I have two sites on the same server. One loads perfectly and is incredibly fast. The other is an equally complex site, except with a bit less JavaScript and zero images. It takes several seconds to load and there is a 1 in 3 chance that I get an HTTP 500 error. WTF. I grabbed the latest 2.6.x versions of Mono, mod_mono and XSP (libgdiplus-2.6.7, xsp-2.6.5, mod_mono-2.6.3 and mono 2.6.7). This is what's in Apache's error.log: [Mon Jan 03 19:33:40 2011] [error] (70014)End of file found: read_data failed [Mon Jan 03 19:33:40 2011] [error] Command stream corrupted, last command was 1 [Mon Jan 03 19:34:52 2011] [error] (70014)End of file found: read_data failed [Mon Jan 03 19:34:52 2011] [error] (70014)End of file found: read_data failed [Mon Jan 03 19:34:52 2011] [error] Command stream corrupted, last command was 1 [Mon Jan 03 19:34:52 2011] [error] Command stream corrupted, last command was 1 And this is the page error: Internal Server Error - The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny9 with Suhosin-Patch mod_mono/2.6.3 Server
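
    One way to narrow this down (a sketch only; /srv/www/site2 and port 9000 are placeholders) is to serve the slow site with XSP directly, bypassing Apache and mod_mono. If the standalone server is fast and never returns 500s, the fault is in the Apache-to-mod-mono-server channel (which is what "Command stream corrupted" points at) rather than in the application itself:

      # Run the site standalone with verbose output
      xsp2 --root /srv/www/site2 --port 9000 --verbose

      # In another shell, time a few requests against it
      time curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9000/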

    Read the article

  • Puppet&Hiera: $variable is not an hash or array when accessing it

    - by txworking
    I wrote a puppet module and the content of init.pp was: class install( $common_instanceconfig = hiera_hash('common_instanceconfig'), $common_instances = hiera('common_instances') ) { define instances { common { $title: name => $title, path => $common_instanceconfig[$title]['path'], version => $common_instanceconfig[$title]['version'], files => $common_instanceconfig[$title]['files'], pre => $common_instanceconfig[$title]['pre'], after => $common_instanceconfig[$title]['after'], properties => $common_instanceconfig[$title]['properties'], require => $common_instanceconfig[$title]['require'] , } } instances {$common_instances:} } And the hieradata file was: classes: - install common_instances: - common_instance_1 - common_instance_2 common_instanceconfig: common_instance_1 path : '/opt/common_instance_1' version : 1.0 files : software-1.bin pre : pre_install.sh after : after_install.sh properties: "properties" common_instance_2: path : '/opt/common_instance_2' version : 2.0 files : software-2.bin pre : pre_install.sh after : after_install.sh properties: "properties" I always get an error message when the puppet agent runs: Error: common_instanceconfig String is not an hash or array when accessing it with common_instance_1 at /etc/puppet/modules/install/manifests/init.pp:16 on node puppet.agent1.tmp It seems $common_instances is retrieved correctly, but $common_instanceconfig is always treated as a string. I used YAML.load_file to load the hieradata file and got a correct hash object. Can anybody help?
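
    A quick way to see what hiera itself returns for that key, outside of Puppet (a sketch; the hiera.yaml and hieradata paths are assumptions for a typical setup):

      # Debug output shows which YAML files were consulted and what came back
      hiera -d -c /etc/puppet/hiera.yaml common_instanceconfig

      # Confirm the data file parses into a Hash rather than a String for that key
      ruby -ryaml -e 'p YAML.load_file("/etc/puppet/hieradata/common.yaml")["common_instanceconfig"].class'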

    Read the article

  • Firefox not using Kerberos despite being configured to

    - by Nicolas Raoul
    I am deploying Linux/Firefox on a corporate Kerberos network. I followed this Kerberos-on-Firefox procedure, but Firefox still does not connect via the company's Kerberos. I am using Firefox 3.0.18 on Red Hat EL Server 5.5. Here is what I did: Run kinit on the command line to create a Kerberos ticket. Check with klist: the ticket is valid until tomorrow, service principal is krbtgt/[email protected]. In Firefox, set network.negotiate-auth.trusted-uris and network.negotiate-auth.delegation-uris to .dc.thecompany.com. Load the company's portal page via its full hostname: http://server37.thecompany.com/alfresco. (Note: server37 is actually the machine I am running Firefox on, but that should not be a problem, I guess.) PROBLEM: the company's intranet portal still serves me the login/password page. The same portal correctly uses Kerberos on Internet Explorer/Windows 7 machines with the same settings, and shows the user's personal page. The server does not see any Kerberos request coming in. Did I do something wrong? I enabled NSPR_LOG_MODULES=negotiateauth:5 as explained here, but the log file stays empty.
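
    To check whether the ticket is usable against that URL independently of Firefox, curl's SPNEGO support is handy (the stock RHEL 5 curl is built with GSS-Negotiate, so this should work out of the box):

      # "-u :" tells curl to take the identity from the Kerberos ticket cache
      curl --negotiate -u : -I http://server37.thecompany.com/alfresco

      # Re-test Firefox with negotiateauth logging written to a file
      export NSPR_LOG_MODULES=negotiateauth:5
      export NSPR_LOG_FILE=/tmp/moz-negotiate.log
      firefox http://server37.thecompany.com/alfresco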

    Read the article

  • IIS login / logout interferes with media player

    - by Mark Sowul
    So I'm listening to music with Windows Media Player, going insane because the music randomly stops playback every so often. I eventually noticed that it correlates with instances of csrss, winlogon, and logonui repeatedly starting and quitting, and I tracked that down to IIS repeatedly logging on and off due to WebDAV requests going through my user account (my laptop syncing up with OneNote over WebDAV). I see tons of spam in my security event log for the logins. I am surprised that IIS needs to log in this much. This has only been happening for a couple of months. I'm not sure where the actual problem lies - with IIS, or Media Player, or what - so I figured I'd try to find out whether the IIS login behavior is actually abnormal. Is it normal for IIS to log in this much? And is it normal for that to keep spawning winlogon, csrss, and logonui every second or so? I see a constant stream of logon events in the event log every few seconds, presumably while OneNote is syncing: Logon (id = 1, source = laptop), Special Logon (id = 1, sets up privileges), Special Logon (id = 1, seems to set up the same privileges), Logon (id = 2, same laptop), Logoff (id = 2), Logoff (id = 1). The DefaultAppPool (the only one apparently in use) has its idle timeout set to 20 minutes, and load user profile set to false. Not sure what other settings (if any) might be relevant.

    Read the article

  • Tomcat Custom MBean

    - by Darran
    Does anyone know how to deploy a custom MBean to Tomcat? So far I've found this: http://www.junlu.com/list/3/8871.html. I copied the jar with my MBean to Tomcat's lib directory so the common class loader should pick it up. I then followed the instructions but kept getting the exception below. My MBean definitely has a public constructor. If I remove the jar from the Tomcat lib directory I get the same message, which suggests it is not picking up my jar, or my jar is being loaded after the Apache MBean Modeler runs in Tomcat. 06-Aug-2010 12:14:23 org.apache.tomcat.util.modeler.modules.MbeansSource execute SEVERE: Error creating mbean Bean:type=Bean javax.management.NotCompliantMBeanException: MBean class must have public constructor at com.sun.jmx.mbeanserver.Introspector.testCreation(Introspector.java:127) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:2 at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:1 at com.sun.jmx.mbeanserver.JmxMBeanServer.createMBean(JmxMBeanServer.java:393) at org.apache.tomcat.util.modeler.modules.MbeansSource.execute(MbeansSource.java:207) at org.apache.tomcat.util.modeler.modules.MbeansSource.load(MbeansSource.java:137) at org.apache.catalina.core.StandardEngine.readEngineMbeans(StandardEngine.java:517) at org.apache.catalina.core.StandardEngine.init(StandardEngine.java:321) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:411) at org.apache.catalina.core.StandardService.start(StandardService.java:519) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:581) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
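
    Two quick checks from a shell (a sketch; mymbeans.jar and com.example.Bean are placeholders for the real jar and class names) confirm that the jar in Tomcat's lib directory really contains the class and that the no-argument constructor is public:

      # Is the class inside the jar that sits in $CATALINA_HOME/lib?
      jar tf "$CATALINA_HOME/lib/mymbeans.jar" | grep Bean

      # javap omits private members, so the no-arg constructor must show up here
      javap -classpath "$CATALINA_HOME/lib/mymbeans.jar" com.example.Bean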

    Read the article

  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM. My issue is the following: I would like to have a FreeBSD file server with ZFS managing the disks. However, I need other VMs for e.g. a shell/git server, mail server, etc. I'm wondering how to deal with the following issues: (1) I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough? (2) I want to maximize the reliability of the setup. On what disks should I install the hypervisor and keep the VM system disks? For (2) I have the option of having a RAID setup on the SAS controller and using that as a system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how are SSDs compared to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.
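
    For what it's worth, the ZFS side inside the FreeBSD guest stays short once the controller is passed through; a sketch only (device names da0-da5, the raidz2 layout and the dataset names are assumptions, and lz4 compression requires a pool version that supports it):

      # Main data pool on six passed-through SAS disks
      zpool create tank raidz2 da0 da1 da2 da3 da4 da5

      # A dataset for the mail store, exported to the mail VM over NFS
      zfs create -o compression=lz4 tank/mail
      zfs set sharenfs=on tank/mail     # or exports(5)-style options
      zpool status tank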

    Read the article

  • virtual memory committed

    - by vinu
    After a server bounce, and after a period of around 40-45 days, we receive continuous "Committed Virtual Memory" alerts, which indicate swap space usage on the order of 4GB. This also causes the application to perform very slowly and to experience a number of stalled transactions. Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers; the Apache servers themselves supply static content and routing to these 4 Tomcat servers. Java runtime version: java version "1.6.0_30" Java(TM) SE Runtime Environment (build 1.6.0_30-b12) Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode). Memory startup parameters: MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled". Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings. Note: each of the servers also has two other separate Tomcat domains that run different applications. Investigated areas: There is no heap memory leak and the GC is running fine, without any issues, over any period of time. The current busy thread count corresponds directly to application usage - weekends and nights have fewer threads compared to business hours. ThreadLocal uses a WeakReference internally: if the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal, which is sort of a circular reference. In this case, the value (a SimpleDateFormat) has no backwards reference to the ThreadLocal. There's no memory leak in this code. Can anyone please let me know what could be the cause of this and what should be monitored?
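
    When the alerts fire, it helps to know which processes actually own the swapped pages; a small sketch that reads VmSwap from /proc (an assumption: the kernel is recent enough to expose VmSwap, as RHEL 6 era kernels are):

      # Top 10 processes by swap usage, in kB
      for f in /proc/[0-9]*/status; do
          awk -v pid="${f//[!0-9]/}" '/^Name/{n=$2} /^VmSwap/{print $2, n, pid}' "$f"
      done | sort -rn | head

      # And the JVM's own view of heap/perm occupancy for one Tomcat instance
      jstat -gc <tomcat_pid> 5s 3       # <tomcat_pid> is a placeholder; run as the tomcat user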

    Read the article

  • One USB flash drive to rule them all

    - by Chris
    Yesterday I purchased a 32GB USB flash drive. I have a myriad of systems in my home, and would like to have one flash drive with setup files for all the various systems throughout the house. I kept the FAT32 filesystem on the drive, as I figured that is probably the most universal. I then made the partition bootable using fdisk, copied the Windows 7 setup files to the drive, and installed GRUB 2 (1.98) onto the drive using BackTrack 5. I was then able to load the Windows 7 setup/install from the flash drive on an older BIOS-type motherboard. Now I would like to know how to get this to work on my MacBook Pro 8,2 while still retaining support for legacy computers. Is this possible, or is this just a pipe dream? I plan on getting OS X, GParted, and OSx86 onto the drive when all is said and done. I've done various Google searches but haven't really found a guide on how to set up a Swiss Army USB flash drive.
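
    The legacy-BIOS half of such a drive can be scripted from the BackTrack install mentioned above; a rough sketch only (/dev/sdX and the mount point are placeholders, and the ntldr module needs a reasonably recent GRUB 2 build). The MacBook side additionally needs an EFI boot loader (an EFI build of GRUB or rEFInd) on the same stick, which is a separate step:

      # Put GRUB's BIOS image on the stick's MBR, with its files on the FAT32 partition
      sudo mount /dev/sdX1 /mnt/usb
      sudo grub-install --root-directory=/mnt/usb /dev/sdX

      # Then add an entry like this to /mnt/usb/boot/grub/grub.cfg to chainload
      # the Windows 7 installer already copied onto the stick:
      #   menuentry "Windows 7 setup" {
      #       insmod part_msdos
      #       insmod fat
      #       insmod ntldr
      #       ntldr /bootmgr
      #   }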

    Read the article

  • How can I write directly to my Zune HD hard drive?

    - by iamgoat
    When syncing photos to the Zune HD, it resizes them down to a much lower resolution, which means I cannot load a high-res picture on it (a comic book) and zoom in to read it. This defeats the whole purpose of having a zoom feature. There is a registry hack you can make to get the Zune to display under My Computer. Then, if you killed the Zune process while it was syncing, you'd be able to access it like a hard drive and copy files to it. It seems the more recent firmware and/or Zune software version now prevents this. How can I treat it like an HDD and copy files to it? I simply want to take my original pictures folder and copy it over the low-resolution versions the Zune software resized them to. An alternative option would be to remove the hard drive from it and see if I can connect it to a computer directly, but I just got this and don't want to disassemble it yet. Note to Microsoft: why do you allow me to set the encoding quality of music, but not photos?

    Read the article

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first highlight my requirements: we have one storage array and 2 servers at present. We get data files from remote servers onto our servers, and on both servers we run our application to access that data and produce a final result as per our requirements. In the future, maybe after 3-4 months, we may add more servers to the current cluster pool to handle more data load from the remote data senders. So my requirement is to share the same storage partition across 2-3 servers (it might be 4-5 more servers in the future); my application reads data from the storage partition and writes back to it. Are there any bottlenecks or limitations with RHCS, GFS2, or anything else? We are new to RHCS + GFS and all of this. Is there any other, better approach or a more lightweight way to deal with our requirement? What is the best OS version for this - how is RHEL 6.4 64-bit? Please share a case study or guide/reference based on past experience with such environments. Regards, Ben
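
    For scale, once the RHCS plumbing (cman, clvmd, fencing) is in place, the GFS2 step itself is short; a sketch only, where the cluster name "appcluster", the clustered LVM volume and the journal count are all assumptions - GFS2 needs one journal per server that will mount the filesystem, so create spares for the nodes you plan to add later:

      # On one node, on a clustered LVM volume visible to every server
      mkfs.gfs2 -p lock_dlm -t appcluster:appdata -j 5 /dev/vg_shared/lv_appdata

      # On every node (and in /etc/fstab)
      mount -t gfs2 /dev/vg_shared/lv_appdata /data
      df -h /data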

    Read the article

  • qsub: How can I find out what DRM middleware exactly is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs, so I assumed this means the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!). The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no reference to the variables I rely on, like SGE_TASK_ID etc. Instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also, qsub behaves differently; apparently it is not possible to specify the command-line parameters with bash-script comments, etc. There is documentation for the cluster system, but it does not say what the DRM middleware actually is - it refers to the entire DRM system simply as "qsub". I tried: qsub --version qsub: 1.2 2010/8/17 I am not sure what I am actually running when I invoke qsub on that cluster! My question is: how can I find out whether I am running Grid Engine or Torque (or whatever it is), and which version?
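
    A few probes, safe to run as an ordinary user on a login node, usually settle which DRM is behind qsub (nothing beyond a POSIX shell is assumed here):

      which qsub                                        # where does it live?
      rpm -qf "$(which qsub)" 2>/dev/null || dpkg -S "$(which qsub)" 2>/dev/null

      # Torque/PBS ships these companion commands; Grid Engine does not
      command -v pbsnodes momctl tracejob

      # Grid Engine ships these and sets $SGE_ROOT; Torque does not
      command -v qhost qconf; echo "SGE_ROOT=$SGE_ROOT"

      qstat --version 2>&1 | head -1                    # Torque's qstat answers this; SGE's rejects it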

    Read the article

  • WAMP server won't run with PHP 5.3.4 but will with PHP 5.2.11

    - by Ben Williams
    I have a 64-bit Windows 7 Professional machine. I'm running WampServer Version 2.1 with Apache 2.2.4, installed on a clean machine, and I'm using the default ini/conf files as they come. WAMP is installed in C:\wamp\, with PHP 5.2 at C:\wamp\bin\php\php5.2.11 and PHP 5.3 at C:\wamp\bin\php\php5.3.4. Both folders have the same permissions. When I run WAMP with 5.2.11 selected, it starts fine. When I run it with 5.3.4 selected, there are no errors in the Apache or PHP error logs, but I get the following in my system application error log: The Apache service named reported the following error: httpd.exe: Syntax error on line 115 of C:/wamp/bin/apache/apache2.2.4/conf/httpd.conf: Cannot load C:/wamp/bin/php/php5.3.4/php5apache2_2.dll into server: The Apache service named is not a valid Win32 application. 5.2.11 loads C:/wamp/bin/php/php5.2.11/php5apache2_2.dll and that doesn't throw an error. What am I doing wrong?

    Read the article

  • Building a Web proxy to get around same-origin restrictions for collaborative Webapp based on a MEAN stack

    - by Lew Cohen
    Can anyone point to books, articles, blogs, or even applications - open-source or proprietary - that detail building a Web proxy? This specific proxy will exist to get around the same-origin restrictions that prevent, for instance, loading a given Website into an <iframe> in a Webapp. This Webapp is a collaborative application in which a group of users log in to the app's Website and can then load different Websites into this app's <iframe> and do various collaborative things (e.g., several users simultaneously browsing a Website, in sync). The Webapp itself is built on a MEAN stack (MongoDB, Express, AngularJS, and Node.js). The purpose of this proxy is not to do anonymous browsing or to bypass censorship. Information on how to build such a vehicle does not seem to be readily available from my research. I've come across Glype but am not sure whether it is a feasible solution. I don't want to reinvent the wheel, so if a product is available for purchase, great; otherwise, we'd need to build one. The one that seems to be closest is http://www.corsproxy.com. In effect, we'd like to re-create this, since it evidently does what's needed. I don't care what server-side technology is used. Our app is MEAN-based, if that has any bearing. Also, the proxy obviously has to honor basic security considerations (user cookies, etc.) and eventually be scalable. So, does anyone know of any sources that detail how to build one of these? Is it even worth building if something already exists? If so, what would be a good candidate? Are there any other issues that should be considered with this proxy/application? Thanks a lot!

    Read the article

  • Windows 7 boot problem on a Lenovo Thinkpad Z61m 9450HAG

    - by Matt Taylor
    I recently did a full upgrade to Windows 7 on my ThinkPad. Everything worked fine up until the second reboot (the first reboot, after some updates installed, worked OK). At the second reboot the system would just show a black screen before the Windows logo appears. The disk/wireless/power/battery lights are all lit and the disk light is active (flickering). However, if I remove the battery and boot on mains power alone, it boots fine and quickly, and everything is OK. Any help on why this won't boot with the battery plugged in is greatly appreciated; I need to take this battery out on the road/trains, etc. A little more detail on this story: the battery I had inserted when doing the (failed) boot was a long-life battery. I have not tried inserting this battery while Windows is logged in. I have another (normal-life) battery that I have charged up within Windows. It has just reached 100% and I am about to reboot with it in. I am using the Lenovo power manager to diagnose the battery - all seems OK. I will report back shortly as to the outcome. OK, so I chose the reboot option from within Windows; the machine seemed to shut down okay, but then stalled. It didn't turn off completely and didn't reboot, but just sat, with the fan humming, somewhere in between! I had to hold the power button in for a few seconds until the fan stopped and then hit the power button again to boot the machine from fresh. One good thing: with this battery (the normal one) it booted into Windows 7 the first time with a battery! So, now I have rebooting issues. I have 3 errors in the event log: A timeout was reached (30000 milliseconds) while waiting for the lxdxCATSCustConnectService service to connect. The lxdxCATSCustConnectService service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion. The following boot-start or system-start driver(s) failed to load: cdrom. Any thoughts?

    Read the article

  • Failing Windows Updates with Error Code 800719e4

    - by Kev
    On a number of Vista machines I have now come across the same error: when installing updates everything works fine until after the reboot, when the installation rolls back during step 3. On all occasions (where a simple retry hasn't worked) the error code has been 800719e4. On my own laptop I have so far tried the following: Installed the updates one by one manually - I started with the smallest and worked towards the largest, which has left me with "Security Update for Windows (KB2286198)" refusing to install. Renamed all the files in "C:\Windows\Logs\CBS" to "xxx.old" (where xxx was the original name) with Windows Update turned off - no change. Renamed all the folders in "C:\Windows\SoftwareDistribution" in the same manner - no change. Attempted to install it manually with "Windows6.0-KB2286198-x86.msu" - no change. Tried to uninstall IE8 - doesn't work, rolls back at the end (installing the IE9 beta when it launched was what alerted me to the issue on this laptop). Ran a "Fix It" tool from the Microsoft website - no help (can't find the link now). Tried to recover from the disc - but alas my laptop only has a recovery partition (and originally shipped without a service pack). Ran with nothing running on startup and only MS services - again no change. Google is being useless, with a load of posts trying to get me to call a telephone number with letters in it (presumably an American number). The error code appears to mean "error log full", but no one has any idea which log! The WindowsUpdate log does indicate the following as the error point, though: 2010-10-23 13:54:48:230 1240 738 Handler WARNING: Got extended error: "POQ Operation SetKeyValue OperationData \Registry\machine\Schema\wcm://Microsoft-Windows-shell32?version=6.0.6002.18287&language=neutral&processorArchitecture=x86&publicKeyToken=31bf3856ad364e35&versionScope=nonSxS&scope=allUsers\metadata\elements\HKEY_CLASSES_ROOT_lnkfile_shellex_DropHandler_defaultValue, @default, , ewAwADAAMAAyADEANAAwADEALQAwADAAMAAwAC0AMAAwADAAMAAtAEMAMAAwADAALQAwADAAMAAwADAAMAAwADAAMAAwADQANgB9AAAA" Has anyone any idea how to fix this once and for all? Reinstalling laptop after laptop from scratch is mildly annoying at work, where Office and Firefox are the only extras, but even more annoying at home - I don't fancy going through the palaver of reinstalling everything yet again.

    Read the article

  • What am I doing wrong in my config for MySql?

    - by Knight Hawk3
    When I load my my.cnf with the config at the bottom, MySQL fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; not sure how to check - I only installed it today). I will give you any info you ask for. Thanks for helping! # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /var/run/mysqld/mysqld.sock # Here follows entries for some specific programs # The MySQL server [mysqld] port = 3306 socket = /var/run/mysqld/mysqld.sock skip-locking key_buffer = 16K max_allowed_packet = 1M table_cache = 4 sort_buffer_size = 64K read_buffer_size = 256K read_rnd_buffer_size = 256K net_buffer_length = 2K thread_stack = 64K # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (using the "enable-named-pipe" option) will render mysqld useless! # #skip-networking server-id = 1 # Uncomment the following if you want to log updates #log-bin=mysql-bin # Uncomment the following if you are NOT using BDB tables skip-bdb # Uncomment the following if you are using InnoDB tables #innodb_data_home_dir = /var/lib/mysql/ #innodb_data_file_path = ibdata1:10M:autoextend #innodb_log_group_home_dir = /var/lib/mysql/ #innodb_log_arch_dir = /var/lib/mysql/ # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high #innodb_buffer_pool_size = 16M #innodb_additional_mem_pool_size = 2M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 5M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 1 #innodb_lock_wait_timeout = 50 skip-innodb [mysqldump] quick max_allowed_packet = 16M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [isamchk] key_buffer = 1M sort_buffer_size = 1M [myisamchk] key_buffer = 1M sort_buffer_size = 1M [mysqlhotcopy] interactive-timeout So what is my silly error?
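
    When mysqld dies silently at startup, running it in the foreground usually surfaces the actual complaint (a sketch; the paths are typical Arch defaults and may differ on your install). Note that MySQL 5.5 aborts on options it no longer recognizes, the old skip-bdb and skip-locking directives for instance, and reports that only on stderr or in its error log:

      # Run the server in the foreground as the mysql user and watch stderr
      sudo -u mysql /usr/bin/mysqld --defaults-file=/etc/mysql/my.cnf

      # Or check the error log it writes into the data directory by default
      sudo tail -n 50 /var/lib/mysql/$(hostname).err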

    Read the article

  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a microSD card online. It's a SanDisk 16GB class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it; that doesn't seem to solve the problem. I have tried Windows and Linux (Ubuntu); both have the problem. I've used my USB microSD readers, and even tried putting it in my phone and putting data on it from there. All have this problem. Now the really odd thing is, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any type of error. Now I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of Chinese-looking characters (random UTF-8 symbols, I suppose) for folder and file names, and it is impossible to do anything with those. All the other data (outside of the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the disk up to about 3-4GB. After that I can still access the data. But as soon as I eject/safely remove/unmount it, the bad things happen somehow. Next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote to the time before last) are all gibberish. Does anybody have any clue what might be going on here? EDIT: It seems I can't even put ext3 or ext4 on it; they both complain about a corrupted journal. Gheh, guess something is really broken here.
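
    Corruption that reliably appears once roughly 3-4GB have been written is the classic signature of a counterfeit card whose controller reports more flash than it physically has; the f3 tools test exactly that (H2testw does the same on Windows). A sketch, assuming the card is mounted at /media/sd and that the f3 package is available in your release (otherwise it builds easily from source):

      sudo apt-get install f3
      f3write /media/sd     # fills the free space with test files
      f3read /media/sd      # any overwritten/corrupted sectors reported here mean fake capacity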

    Read the article

  • Get Illegal Instruction error when booting Linux in VirtualBox, works fine when booted directly

    - by rkjnsn
    I have a computer on which I am dual booting Windows 7 and Gentoo Linux (both 64-bit). I want to be able to load up my Linux installation in a VM while I am booted into Windows. I have installed VirtualBox and followed the instructions for creating a raw disk VMDK. When I start the VM, Linux starts booting, but then fails with the following error when unlocking my root partition: truecrypt[441] trap invalid opcode ip:373615538e0 sp:3dd0e0dfb60 error:0 in libpixman-1.so.0[373614d6000+8d000] Everything works fine when I boot into Linux directly. What could cause an illegal instruction to be hit in libpixman only when booting in VirtualBox? Update: As a troubleshooting step, I recompiled pixman without "-march", and no longer get an illegal instruction error in that library. (The boot fails in the same spot with the same error in a different library, however.) How can I determine the specific opcode that isn't working in VirtualBox so I can disable it in my CFLAGS without having to disable all CPU-specific optimizations? I am still confused as to why there would be any user-mode instruction that would fail to work in a VM. Is this a known limitation? My CPU is an Intel Core i7 3720QM, and I have hardware virtualization support enabled.
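
    To see exactly which instruction traps, the offset inside libpixman follows from the fault message: ip minus the load address printed in the brackets after "in libpixman-1.so.0". A sketch (the library path is a guess for a 64-bit Gentoo; adjust to where your build installs it):

      # Offset of the faulting instruction within the library
      printf 'offset=0x%x\n' $(( 0x373615538e0 - 0x373614d6000 ))   # -> 0x7d8e0

      # Disassemble around that offset and look for the unsupported instruction
      objdump -d --start-address=0x7d8a0 --stop-address=0x7d920 /usr/lib64/libpixman-1.so.0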

    Read the article

  • Walkthrough/guide: building an application server for a multi-tenant web app [on hold]

    - by Khalid Adisendjaja
    The web app will detect a subdomain such as tenant1.app.com, tenant2.app.com, etc. to identify the tenant environment; each tenant environment will have different database credentials (port, db name, etc.) but will still connect to the same database server. Each tenant should use app.com as their main domain; using their own domain is prohibited. Each tenant will have their own REST API endpoint such as tenant1.app.com/api/v1/xxxx, tenant2.app.com/api/v1/xxxx, tenant3.app.com/api/v1/xxxx. I've come to a simple solution by setting a wildcard subdomain (*.app.com) in the web server's (Apache/Nginx) vhost configuration file. I have googled many concepts for building a multi-tenant app server but still don't understand how to really do it, what the right way to do it is, and what is actually required for this task. So I've come to these questions: Do I need a proxy server, DNS masking, etc.? How do I monitor each tenant's activity? What about server performance, load balancing, and scalability? How do I set up an SSL certificate for each tenant? What about an application cache for each tenant? Is this setup reliable enough for production? etc... I have very little experience with server infrastructure, so I'm looking for a DIY walkthrough, a step-by-step guide, or a sophisticated solution ready to be implemented for production.

    Read the article

  • JVM disappeared on Mac OS X Snow Leopard 10.6.8

    - by weisjohn
    I'm working in Eclipse one night, (also using Android's DDMS from the commandline). The next morning, I open the lid... attempt to run Eclipse and get an error. me$ sudo /Applications/eclipse/eclipse JavaVM: requested Java version ((null)) not available. Using Java at "" instead. JavaVM: Failed to load JVM: /bundle/Libraries/libserver.dylib So I then attempt to find out where my JDKs are pointed: me$ ls -la /System/Library/Frameworks/JavaVM.framework/Versions/ total 64 drwxr-xr-x 12 root wheel 408 Nov 16 10:44 . drwxr-xr-x 12 root wheel 408 Sep 7 09:39 .. lrwxr-xr-x 1 root wheel 5 Sep 7 17:07 1.3 -> 1.3.1 drwxr-xr-x 3 root wheel 102 Dec 2 2009 1.3.1 lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.4 -> CurrentJDK lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.4.2 -> CurrentJDK lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.5 -> CurrentJDK lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.5.0 -> CurrentJDK lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.6 -> CurrentJDK drwxr-xr-x 9 root wheel 306 Nov 16 10:44 A lrwxr-xr-x 1 root wheel 1 Sep 7 17:07 Current -> A lrwxr-xr-x 1 root wheel 59 Sep 7 17:07 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents Everything looks normal so far... me$ ls -la /System/Library/Java/JavaVirtualMachines/ total 0 drwxr-xr-x 2 root wheel 68 Nov 16 10:44 . drwxr-xr-x 5 root wheel 170 Nov 16 10:44 .. Apparently, my virtual machines have been deleted or moved? I'll probably be able to just re-install Java, but does anyone have any insight into why this may have happened or how to prevent in the future?
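
    A couple of quick sanity checks on 10.6 (a sketch; java_home ships with Apple's Java packages, so it may itself complain if the JVM bundle is gone):

      /usr/libexec/java_home -V      # lists installed JVMs, or errors out if none are left
      java -version
      ls /System/Library/Java/JavaVirtualMachines/
      # Re-applying Apple's "Java for Mac OS X 10.6" update (or Software Update) restores 1.6.0.jdk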

    Read the article

  • FastGate A20 Line And Himem.sys Issue With Updating BIOS

    - by Boris_yo
    I have been meaning to perform my first-ever BIOS update through MS-DOS, but have been postponing the task until today. Despite people telling me any bootable ISO will do it, either through CD-ROM or a RAMDRIVE, I am still having problems. First is the problem with the CD-ROM driver: trying to make it work with 4 driver files (cd1.SYS, cd2.SYS, cd3.SYS, cd4.SYS), as well as starting the RAMDISK, proved to be a failure: CD-ROM XMS Allocation Error, RAMDISK XMS Allocation Error (the X: and R: drives are not working). The A20 line seemed to be the obstacle, which after a couple of searches pointed me to this article on the Microsoft website. It seems that FastGate is the culprit: it takes over the A20 line and conflicts with himem.sys, which should be handling it, causing the driver to be unable to allocate memory resources. Although the article suggests 2 workarounds - disabling the FastGate option or adding a switch - I read that the former workaround could cause problems that involve later tinkering with the BIOS, disabling shadow copy, etc., while the latter workaround can just hang the system, as stated in the link above. I assume it just hangs the boot process from the image file, though. Summing up the above, I am cautious and think it is risky to follow either workaround, because disabling FastGate, or adding a switch by trying the available switches from 1-14 or 16, could crash the BIOS update process by itself. I could do this without the need for himem.sys with a bootable USB thumbdrive by making it be seen as USB-HDD, but some time ago I read that it is never a good idea to update a BIOS from a hard drive, so even though it is a simulation, who knows... Maybe it will deactivate the hard drive in the middle of the BIOS update process, or even the USB thumbdrive per se? In one forum discussion about updating a BIOS, somebody suggested not loading himem.sys for some reason, but now that I think of it, what if the BIOS update needs upper memory?

    Read the article

  • Access Existing Linux Installation from Windows Bootloader - Linux Installed and Then Windows 7

    - by nicorellius
    My hard drive is failing. I installed a new drive and then installed Ubuntu 12.04 on it to try to rescue data. I then realized that I wanted to install Windows and just access the drives without transferring data (there is loads of it), so I installed Windows 7. Now my drive only boots to Windows 7. How do I access Linux and make the Windows boot loader load Linux? I originally thought I would just not use the Ubuntu installation, but now I think I want it to be a dual-boot system. What are good boot loaders to install for this kind of dual-boot situation? Can I accomplish this with BCDEdit? If I had done this the other way around (Windows first, then Linux) everything would be fine, but I didn't, and now I'm trying to fix it. The problem I have is that I don't really know how to boot into Linux to retrieve any files I need. I guess using the disc would work, but I'm not sure how to go about this.
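
    BCDEdit (or a helper like EasyBCD) can be made to chain-load GRUB, but the lower-friction route is usually the reverse: boot the Ubuntu 12.04 live/installer USB once and put GRUB back on the disk's MBR; its os-prober step then adds a Windows 7 entry automatically. A sketch, where /dev/sda is the disk and /dev/sda5 the Ubuntu root partition (both placeholders - check with "sudo fdisk -l" first):

      # From the Ubuntu live session
      sudo mount /dev/sda5 /mnt
      for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
      sudo chroot /mnt grub-install /dev/sda
      sudo chroot /mnt update-grub      # runs os-prober and adds the "Windows 7 (loader)" entry
      sudo reboot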

    Read the article

  • Display with intel integrated graphics, bitcoin mine with Radeon 6950

    - by karategeek6
    I'm on Ubuntu Linux 11.04 64-bit. I have an Intel i5 with integrated graphics and a Radeon 6950, with one monitor. I would like to run my graphics on the integrated card, and run bitcoin mining on the 6950. I have bitcoin mining working when I use the 6950 for both display and mining. Every time I try to use the integrated graphics instead, OpenCL doesn't recognize my 6950. Using aticonfig --initial while using the integrated graphics for display breaks things. So I used the xorg.conf it created as a basis and tried to edit it manually. I really don't know what I'm doing, though. My last attempt is given below. The graphics ran off the integrated card, but the 6950 wasn't recognized. Any help would be greatly appreciated! xorg.conf: #Section "ServerLayout" # Identifier "Intel Layout" # Screen "Default Screen" # Identifier "aticonfig Layout" # Screen "aticonfig-Screen[0]-0" # Screen 0 "aticonfig-Screen[0]-0" 0 0 #EndSection Section "Module" Load "glx" EndSection # Intel Section "Device" Identifier "Intel Integrated Graphics" Driver "intel" BusID "PCI:0:2:0" EndSection Section "Monitor" Identifier "Default Monitor" Option "VendorName" "Monitor Vendor" Option "ModelName" "Monitor Name" Option "DPMS" "true" EndSection Section "Screen" Identifier "Default Screen" Device "Intel Integrated Graphics" Monitor "Default Monitor" DefaultDepth 24 EndSection # ATI Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" Device "aticonfig-Device[0]-0" Monitor "aticonfig-Monitor[0]-0" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection

    Read the article
