Search Results

Search found 1071 results on 43 pages for 'pessimistic locking'.

Page 32/43 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Implementing the Reactive Manifesto with Azure and AWS

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/31/implementing-the-reactive-manifesto-with-azure-and-aws.aspx

    My latest Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, has just been published! I'd planned to do a course on dual-running a messaging-based solution in Azure and AWS for super-high availability and scale, and the Reactive Manifesto encapsulates exactly what I wanted to do. A "reactive" application describes an architecture which is inherently resilient and scalable, being event-driven at the core and using asynchronous communication between components. In the course, I compare that architecture to a classic n-tier approach, and go on to build out an app which exhibits all the reactive traits: responsive, event-driven, scalable and resilient. I use a suite of technologies which are enablers for all those traits:

      - ASP.NET SignalR for presentation, with server push notifications to the user
      - Messaging in the middle layer for asynchronous communication between presentation and compute: Azure Service Bus Queues and Topics, AWS Simple Queue Service, AWS Simple Notification Service
      - MongoDB at the storage layer for easy HA and scale, with minimal locking under load

    Starting with a couple of console apps to demonstrate message sending, I build the solution up over 7 modules, deploying to Azure and AWS and running the app across both clouds concurrently for the whole stack - web servers, messaging infrastructure, message handlers and database servers. I demonstrate failover by killing off bits of infrastructure, and show how a reactive app deployed across two clouds can survive machine failure, data centre failure and even whole cloud failure. The course finishes by configuring auto-scaling in AWS and Azure for the compute and presentation layers, and running a load test with blitz.io. The test pushes masses of load into the app, which is deployed across four data centres in Azure and AWS, and the infrastructure scales up seamlessly to meet the load - the blitz report is pretty impressive: a 99.9% success rate for hits to the website, with the potential to serve over 36,000,000 hits per day, all from a few hours' build time and a fairly limited set of auto-scale configurations. When the load stops, the infrastructure scales back down again to a minimal set of servers for high availability, so the app doesn't cost much to host unless it's getting a lot of traffic. This is my third course for Pluralsight, with Nginx and PHP Fundamentals and Caching in the .NET Stack: Inside-Out released earlier this year. Now that it's out, I'm starting on the fourth one, which is focused on C#, and should be out by the end of the year.
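
    The asynchronous hand-off between layers is the key reactive trait here. As a rough illustration of the pattern on the AWS side only - a minimal sketch using the standard AWS CLI, with the queue URL and topic ARN as hypothetical placeholders - a producer simply fires a message and moves on without waiting for the consumer:

        # send a work item to an SQS queue (queue URL is a placeholder)
        aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/orders --message-body '{"orderId": 42}'

        # publish an event to an SNS topic for fan-out (topic ARN is a placeholder)
        aws sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:order-events --message '{"orderId": 42, "status": "placed"}'

    The course itself builds the equivalent in .NET; the point is simply that the sender never blocks on the compute layer.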

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available:

      - Use an internal OUD capability called the Replication Gateway
      - Use our synchronization tool, Directory Integration Platform, part of Oracle Directory Services Plus
      - Manual export and import

    Let's check the pros and cons of each method. The Replication Gateway is the natural, out-of-the-box solution for the task. We created it as a feature of OUD because it works at the replication protocol level: the gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit is strong consistency between the two types of directories. It fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modification of existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level rather than via the nsyyyy attribute. It does not require any specific tools or installation procedure; it is managed like any other OUD component, with monitoring and configuration via the standard console. It does not, however, perform attribute remapping or transformation between ODSEE and OUD.

    Using Directory Integration Platform as a component external to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay in using DIP for the synchronization task: you will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk usage, CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances), and you will not benefit from conflict resolution management, which might then have to be addressed at the application level - not always possible to implement.

    Using export and import seems very simple, but this method cannot ensure a highly available deployment with up-to-date entries on both sides. It can be used if full HA with up-to-date data is not needed during the synchronization window, and it is often used when data cleaning needs to take place, to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters. https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite below. Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are expected to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as a file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts. However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers enables reflectively accessed data to be optimized from a JNI call into Java bytecode, which is then amenable to hotspot optimization, amongst other things. This optimization is called inflation, and it works by dynamically generating a sun.reflect.DelegatingClassLoader instance to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process. SOA Suite and WebLogic are both very large users of reflection. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try to keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the way IBM does it. This results in different behaviour characteristics on IBM vs. Oracle JVMs.
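
    For a WebLogic-hosted SOA Suite domain, the usual place to add such a flag is the domain environment script - a minimal sketch, assuming a standard domain layout (your domain path may differ):

        # in $DOMAIN_HOME/bin/setDomainEnv.sh, append the flag to the existing Java options
        JAVA_OPTIONS="${JAVA_OPTIONS} -Dsun.reflect.inflationThreshold=0"
        export JAVA_OPTIONS

    Restart the managed servers afterwards so the new JVM parameter takes effect.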

    Read the article

  • Setting up Tomcat6 properly in Ubuntu 10.04

    - by aasukisuki
    We have a Tomcat6 instance running on Ubuntu 10.04 LTS. Our test box was just a Windows machine running Tomcat6. Both machines (Linux and Windows) have 1 GB of RAM. Via the Tomcat configuration tool in Windows, I was able to set the min/max/permgen sizes of the JVM; those were set to 256/512/128 respectively. On the Ubuntu box, I've tried setting the JVM options in several different places, including:

      - Adding JAVA_OPTS and CATALINA_OPTS in /etc/environment
      - Adding JAVA_OPTS in $CATALINA_HOME/bin/catalina.sh
      - Creating setenv.sh and adding JAVA_OPTS in $CATALINA_HOME/bin
      - Adding JAVA_OPTS directly to /etc/init.d/tomcat6
      - Un-commenting and modifying JAVA_OPTS in /etc/default/tomcat6

    Nearly all of those methods did not work, except for modifying /etc/init.d/tomcat6 directly (and possibly the /etc/default/tomcat6 change, but I only just made that). However, my understanding is that when you change these settings, only one JVM should be used for the entire Tomcat6 instance, and that memory is shared among the applications. On our Windows box, Tomcat6 runs as a service and appears to behave this way. However, when I look at htop on the Linux box, there are 20+ tomcat6 entries (I have an app that triggers internal jobs every X seconds using cron, so maybe these are threads? Or are they actual instances?), all with those memory settings. The app runs fine for a bit, but eventually ends up locking up. I'm guessing each of these apps thinks it has 512m to work with, never GCs, and then locks Tomcat up completely. What is the proper way to set all of this up?
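
    On Ubuntu's packaged Tomcat6, /etc/default/tomcat6 is the supported place for these settings, since the init script sources it on startup - a minimal sketch, assuming the heap sizes from the Windows box are what you want:

        # /etc/default/tomcat6
        JAVA_OPTS="-Xms256m -Xmx512m -XX:MaxPermSize=128m"

    Then restart with "sudo service tomcat6 restart". The 20+ entries in htop are almost certainly threads of a single JVM process, which htop shows by default; pressing H in htop hides them, and "ps aux | grep tomcat6" should show just one java process.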

    Read the article

  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lock out accounts after 3 failed logins per policy; however, the connecting user does not receive the error indicating pam_tally2's action (via SSH). I expect to see, on the 4th attempt:

        Account locked due to 3 failed logins

    No combination of required or requisite, or the order in the file, seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected, but the user does not receive the error described above. This causes a lot of confusion and frustration, as they have no way of knowing why authentication fails when they are sure they are using the correct password. The implementation follows NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5 (pg. 45). It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth. The guide says: if locking out accounts after a number of incorrect login attempts is required by your security policy, implement use of pam_tally2.so. To enforce password lockout, add the following to /etc/pam.d/system-auth. First, add to the top of the auth lines:

        auth required pam_tally2.so deny=5 onerr=fail unlock_time=900

    Second, add to the top of the account lines:

        account required pam_tally2.so

    EDIT: I can get the error message to appear by resetting pam_tally2 during one of the login attempts:

        user@localhost's password: (bad password)
        Permission denied, please try again.
        user@localhost's password: (bad password)
        Permission denied, please try again.
        (reset pam_tally2 from another shell)
        user@localhost's password: (good password)
        Account locked due to ...
        Account locked due to ...
        Last login: ...
        [user@localhost ~]$
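
    For reference, the placement described above would look roughly like this in /etc/pam.d/password-auth - a sketch only, with the deny count set to 3 to match the policy in the question (the guide's example uses 5) and the surrounding lines abbreviated:

        auth        required      pam_tally2.so deny=3 onerr=fail unlock_time=900
        auth        sufficient    pam_unix.so nullok try_first_pass
        ...
        account     required      pam_tally2.so
        account     required      pam_unix.so
        ...

    Whether sshd actually relays the module's message to the client also depends on how the SSH authentication is performed, so the PAM stack alone may not be the whole story.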

    Read the article

  • ActiveSync devices causing accounts to lockout

    - by Abdullah
    When a user changes his account password for whatever reason (read: expired), the old password may still be stored in his mobile device connected through EAS. This will cause his account to lock out almost immediately - as it should, according to the lockout policy defined in AD. It was easy to figure out that part. The hard part is keeping it from happening. I looked everywhere. Nothing. Basically there are four parts to the puzzle: the EAS device, the TMG (ISA) server, the EAS protocol, and finally AD. None of them has a way to stop the EAS device from failing to authenticate. So I figured I'd have to come up with a clever workaround, and the only things I could come up with are to create a group for all EAS users and exclude them from the lockout policy, which obviously defeats the whole purpose of the policy, or to educate the users to update their devices with new passwords, which is impossible. The question: can you think of any other way to prevent EAS from locking out the accounts? Environment: mostly iOS devices, all through EAS. TMG 2010. Exchange 2007. AD 2008 R2.

    Read the article

  • SSTP BPDU with bad TLV and macflap -- info please

    - by Adeodatus
    Hi All, I'm slowly locking down the network I've inherited, and MAC flapping has been a problem in the past, with customers doing all kinds of crazy things. That's changing, but I am now encountering this error:

        Dec 30 18:31:31 10.50.1.50 1565: 001567: Dec 30 18:31:30: %SW_MATM-4-MACFLAP_NOTIF: Host xxxx.xxxx.f681 in vlan 1 is flapping between port Gi0/5 and port Gi0/48
        Dec 30 18:43:28 10.50.1.50 1566: 001568: Dec 30 18:43:26: %SPANTREE-2-RECV_BAD_TLV: Received SSTP BPDU with bad TLV on GigabitEthernet0/5 VLAN1.
        Dec 30 18:48:18 10.50.1.50 1567: 001569: .Dec 30 18:48:17: %SPANTREE-2-RECV_BAD_TLV: Received SSTP BPDU with bad TLV on GigabitEthernet0/5 VLAN1.

    Unfortunately, that MAC address is the MAC of our core router, the only link to the internet, on port Gi0/48. On the other end of Gi0/5, I have about 50 bridged customer machines connected through a series of managed and unmanaged L2 switches. Yes, on VLAN1 too... like I said, working on changing this slowly. In the meantime, it has me quite baffled as to how to deal with this and track down the customer or switch that is the problem. What else could be going on with these messages? The bad TLV is a new one for me. Any ideas? Thank you, and Happy New Year to you all!
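
    A first diagnostic pass on the switch might look like this - a sketch using standard IOS show commands, with the MAC taken from the log above:

        show mac address-table address xxxx.xxxx.f681
        show spanning-tree interface gigabitEthernet 0/5 detail
        show interfaces gigabitEthernet 0/5 counters errors

    The flapping suggests something behind Gi0/5 is reflecting frames from the core router back into the switch (a loop or a misbehaving bridge), which would also fit a downstream device mangling SSTP BPDUs.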

    Read the article

  • APC + FPM memory fragmentation without a reason

    - by palmic
    My setup on Debian 6 (testing) is: nginx 1.1.8 + PHP 5.3.8 + php-fpm + APC 3.1.9. Because I use automatic deployment with apc_clear_cache() after deploy, my goal is to set up APC to cache my whole project (up to 612 PHP files, each smaller than 1 MB) without checking files for changes. My configs:

    FPM:

        pm.max_children = 25
        pm.start_servers = 4
        pm.min_spare_servers = 2
        pm.max_spare_servers = 10
        pm.max_requests = 500

    APC:

        pc.enabled="1"
        apc.shm_segments = 1
        apc.shm_size="64M"
        apc.num_files_hint = 640
        apc.user_entries_hint = 0
        apc.ttl="7200"
        apc.user_ttl="7200"
        apc.gc_ttl="600"
        apc.cache_by_default="1"
        apc.filters = "apc\.php$,apc_clear.php$"
        apc.canonicalize="0"
        apc.slam_defense="1"
        apc.localcache="1"
        apc.localcache.size="512M"
        apc.mmap_file_mask="/tmp/apc-php5.XXXXXX"
        apc.enable_cli="0"
        apc.max_file_size = 2M
        apc.write_lock="1"
        apc.localcache = 1
        apc.localcache.size = 512
        apc.report_autofilter="0"
        apc.include_once_override="0"
        apc.coredump_unmap="0"
        apc.stat="0"
        apc.stat_ctime="0"

    The problem is that my APC memory is fragmented into 5 pieces, with 14.5 MB loaded into a segment whose capacity is 64 MB. My system memory is 640 MB, of which about 270 MB is used at most. HTTP responses take about 300 ms - 5 s. When I switch on apc.stat="1", responses are about 50-80 ms, which is quite good, but I cannot understand why apc.stat="0" is so much worse. The APC diagnostic tool shows "1 Segment(s) with 64.0 MBytes (mmap memory, pthread mutex Locks locking)" in the general cache information window, so I hope I am right that it's an mmap setup, which allows tweaking APC shm to higher values than the system limit shown in /proc/sys/kernel/shmmax (mine shows 33 MB).

    Read the article

  • mysqladmin - Unknown MySQL server host

    - by ert
    I'm trying to connect to a MySQL server over a local network. The server is running and listening on port 41322:

        dylan~$ netstat -ln | grep mysql
        unix 2 [ ACC ] STREAM LISTENING 41322 /var/run/mysqld/mysqld.sock

    My user is granted all rights from all addresses, and I can log in locally.

        dylan~$ mysqladmin -P 41322 -h [email protected] create database test
        mysqladmin: connect to server at '[email protected]' failed
        error: 'Unknown MySQL server host '[email protected]' (1)'
        Check that mysqld is running on [email protected] and that the port is 41322.
        You can check this by doing 'telnet [email protected] 41322'

    Adding a --verbose flag gives no additional output. I've commented out bind-address=127.0.0.1 in /etc/mysql/my.cnf on the server. I can ssh into the server without a problem.

        dylan~$ ps a | grep mysql
        11131 pts/3 S 0:00 /bin/sh /usr/bin/mysqld_safe
        11170 pts/3 Sl 0:03 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
        11171 pts/3 S 0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
        13710 pts/1 S+ 0:00 grep mysq

    Any help or thoughts are appreciated.
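
    Two things stand out in the output above: the netstat line is a Unix-domain socket listing, where 41322 is the socket's inode number, not a TCP port, and the mysqld command line shows --port=3306. Also, mysqladmin takes the user via -u and the host via -h separately, not a combined user@host. A sketch of what to check, with the server's hostname as a placeholder:

        # does mysqld listen on TCP at all?
        netstat -lnt | grep 3306

        # user and host passed separately (SERVER_HOST is a placeholder)
        mysqladmin -u dylan -p -h SERVER_HOST -P 3306 create test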

    Read the article

  • mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect

    - by photon
    While trying to install MySQL 5.5 Community Edition on my Ubuntu 10.04 by compiling the source code, I ran into the following problem:

        $ fg % 1
        sudo ../bin/mysqld_safe --basedir=/usr/local/mysql_community_5.5/data --user=mysql --defaults-file=/etc/my.cnf
        [sudo] password for linnan:
        Sorry, try again.
        [sudo] password for linnan:
        121023 09:26:21 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:21 mysqld_safe Logging to '/var/log/mysql/error.log'.
        Internal program error (non-fatal): unknown logging method '/usr/local/mysql_community_5.5/log/mysql.log'
        121023 09:26:22 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121023 09:26:23 mysqld_safe mysqld from pid file /var/lib/mysql/ubuntu.pid ended

    It seems the problem is related to the log configuration. I've noticed a bugfix related to this problem (http://bugs.mysql.com/bug.php?id=50083), but I still have no idea how to solve it. The relevant content in /etc/my.cnf:

        [mysqld]
        port = 3306
        socket = /tmp/mysql.sock
        skip-external-locking
        key_buffer_size = 384M
        max_allowed_packet = 1M
        table_open_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 8
        character-set-server=utf8

        [mysqld-safe]
        basedir=/usr/local/mysql_community_5.5
        datadir=/usr/local/mysql_community_5.5/data

    And /etc/mysql/conf.d/mysqld_safe_syslog.cnf:

        [mysqld_safe]
        syslog
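
    Two details worth noting, as a hedged reading of the above: the Ubuntu packaging ships /etc/mysql/conf.d/mysqld_safe_syslog.cnf, which turns on syslog logging for mysqld_safe and conflicts with any error-log file setting, producing exactly this message; separately, the section header in /etc/my.cnf reads [mysqld-safe], while mysqld_safe actually reads the [mysqld_safe] section (underscore). A minimal sketch of a fix:

        # stop the packaged include from forcing syslog
        sudo mv /etc/mysql/conf.d/mysqld_safe_syslog.cnf /etc/mysql/conf.d/mysqld_safe_syslog.cnf.disabled

        # and in /etc/my.cnf, rename the section so mysqld_safe actually picks it up:
        [mysqld_safe]
        basedir=/usr/local/mysql_community_5.5
        datadir=/usr/local/mysql_community_5.5/data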

    Read the article

  • MySQL binlogs seems incomplete?

    - by warl0ck
    I created a database and a table and inserted some data, and found binlog.000001 in my log folder. But when I run mysqlbinlog binlog.000001, it only shows the stuff below, which seems incomplete. (There are only two files in the log dir: binlog.000001 and binlog.index.)

        /*!40019 SET @@session.max_insert_delayed_threads=0*/;
        /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
        DELIMITER /*!*/;
        # at 4
        #120924 21:12:56 server id 1 end_log_pos 107 Start: binlog v 4, server v 5.5.24-0ubuntu0.12.04.1-log created 120924 21:12:56 at startup
        # Warning: this binlog is either in use or was not closed properly.
        ROLLBACK/*!*/;
        BINLOG '
        GAVhUA8BAAAAZwAAAGsAAAABAAQANS41LjI0LTB1YnVudHUwLjEyLjA0LjEtbG9nAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAYBWFQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
        '/*!*/;
        DELIMITER ;
        # End of log file
        ROLLBACK /* added by mysqlbinlog */;
        /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;

    If this warning was the cause - "Warning: this binlog is either in use or was not closed properly" - how do I force-close the log?

    EDIT: After a FLUSH LOGS command, I see "0 rows" affected and a few new files - binlog.000001 binlog.000002 binlog.000003 binlog.000004 binlog.index - whose contents are nearly the same as binlog.000001. I then dropped the database and tried to restore it with mysqlbinlog binlog.0* | mysql -u root -p, but the database wasn't recovered.

    EDIT 2: My config:

        [mysqld]
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        log-bin=/var/log/mysql/binlog
        binlog-do-db=mydb
        bind-address = 127.0.0.1
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        expire_logs_days = 10
        max_binlog_size = 100M

    P.S. /var/log/mysql{.err,.log} are both empty.
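
    One hedged observation on the config above: with binlog-do-db=mydb and statement-based logging, the server only logs statements issued while the default database is mydb (i.e. after USE mydb); statements run with a different or no default database are skipped even if they qualify table names, which can leave a binlog that looks almost empty. To verify what was actually logged, and to replay only that database:

        mysqlbinlog --verbose /var/log/mysql/binlog.000001 | less
        mysqlbinlog --database=mydb /var/log/mysql/binlog.0* | mysql -u root -p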

    Read the article

  • Can't connect to MySQL server from Java application

    - by RN
    This is on a VPS/CentOS server. The MySQL server came pre-configured. I am running the Java application on Tomcat. My Java web application is not able to connect to the MySQL server; I get the error "Caused by: java.net.ConnectException: Connection refused". I suspect this is a configuration problem rather than a coding problem - hence I have posted this on Server Fault. And yes, the same web app is able to connect to MySQL on a different Linux box. This is the URL that I provided to my Java application (note: it assumes the default port):

        url = "jdbc:mysql://localhost/pickupgames"

    My first suspicion was that MySQL is running on a non-default port, so I tried to find the port where the MySQL server is listening. I tried every trick mentioned in http://serverfault.com/questions/116100/how-to-check-what-port-mysql-is-running-on, but no luck!

        SHOW GLOBAL VARIABLES LIKE 'PORT';   -- shows port 0
        netstat -tlnp                        -- doesn't show mysql at all
        /etc/my.cnf                          -- has no port entry
        telnet localhost 3306                -- doesn't connect

    And in case you are wondering whether the mysql server is running at all: it is, and I know for sure, because I have been able to log in using the mysql command. Also:

        # ps -ef|grep 'mysql'
        root 31839 27662 0 00:49 pts/3 00:00:00 grep mysql
        root 32452 1 0 Apr02 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --skip-grant-tables --skip-networking
        mysql 32504 32452 0 Apr02 ? 00:00:06 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock --skip-grant-tables --skip-networking

    Please note the --skip-networking parameter. Does this have something to do with the issue? Any explanation why I can't connect to the MySQL server on port 3306 by telnet? Or why it doesn't show up under netstat? Any suggestion on what I should try next?
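
    It almost certainly does: --skip-networking tells mysqld not to listen on TCP at all, which is consistent with port 0, the empty netstat, and the refused telnet; local logins still work because they go over the Unix socket. A sketch of what to look at, assuming the flags come from a config file or a manual mysqld_safe invocation:

        # find where the flags are set
        grep -R "skip-networking\|skip-grant-tables" /etc/my.cnf /etc/my.cnf.d 2>/dev/null

        # if they were passed by hand, restart the service normally
        sudo service mysqld restart

    Note that --skip-grant-tables is also on that command line, meaning the server is running with no privilege checks at all - worth fixing at the same time.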

    Read the article

  • Problem running mysql client, cannot connect to mysql server

    - by ehsanul
    Edit 3: Thanks for the help, everyone. Sorry for wasting anybody's time, but it seems a simple reboot solved it. I should've known better, but I just assumed the "restart it" solution was mostly valid for MS Windows (no offense). I'll keep this in mind before I ask a question here again.

    I installed the mysql-client-5.0 and mysql-server-5.0 packages on Ubuntu 8.04 using sudo apt-get install. When I try to run the "mysql" command, I get the following error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    To verify that the MySQL server is running, I tried this, and it does seem to be running, with the correct socket too:

        $ ps aux | grep mysql
        root 13388 0.0 0.0 1772 528 ? S 06:24 0:00 /bin/sh /usr/bin/mysqld_safe
        mysql 13553 0.0 1.4 127012 15332 ? Sl 06:25 0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --skip-external-locking --port=3306 --socket=/var/run/mysqld/mysqld.sock
        root 13555 0.0 0.0 3008 696 ? S 06:25 0:00 logger -p daemon.err -t mysqld_safe -i -t mysqld
        ehsanul 16910 0.0 0.0 3092 772 pts/4 R+ 07:17 0:00 grep mysql

    So I don't understand why I'm getting an error trying to connect to the MySQL server. Note that I'm completely new to MySQL.

    Edit: As requested in the comments, the exact command returning the error is simply "sudo mysql". And when I check netstat for active network services, I do see an entry for port 3306, with Protocol: tcp, IP Source: 127.0.0.1, State: LISTEN.

    Edit 2: It appears the /var/run/mysqld/mysqld.sock socket doesn't exist (if I'm interpreting the following output correctly):

        $ ls -al /var/run/mysqld/
        total 0
        drwxr-xr-x 2 mysql root 40 2009-08-06 06:36 .
        drwxr-xr-x 20 root root 860 2009-08-06 06:25 ..

    Read the article

  • nginx+mysql5 loadtesting configuration strangeness

    - by genseric
    I am trying to set up a new server running on Debian 6 and to make it work smoothly under load. I've used a WordPress site as a test object and tried the configurations on http://blitz.io. When I increase the MySQL max_connections from 50 to 200, lots of timeouts start to occur; at 50, there are no timeouts and the response times are pretty good. The nginx configuration is fine - I tuned it so I don't see errors - so I presume it's related to the other configuration options in my.cnf. I've read about the options but still can't find what the max_connections problem is all about. BTW, the server has 16 GB of RAM and a fine i7 CPU. Here is the current my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        wait_timeout=60
        connect_timeout=10
        interactive_timeout=120
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 384M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 20
        myisam-recover = BACKUP
        max_connections = 50
        table_cache = 1024
        thread_concurrency = 8
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

    Thanks in advance. I asked this question on SO, but it was closed as off-topic, so I believe this is an SF question.

    Read the article

  • Why can I not access the internet when Windows 7 finds no issue with the ethernet connection and the network can see my device?

    - by WannabeCoder
    So I just moved from a house to an apartment. In both the house and the apartment I had Uverse set up, and in both I had my desktop connected via a ~40-foot-long Cat5 cable. However, upon moving to the apartment I found that my ethernet connection no longer provides internet. This would seem like a mundane problem if not for the following:

      - The router can see the computer on the network.
      - Windows 7 (the desktop's OS) detects no problems with the ethernet connection.
      - Connections over the internet (i.e. browser windows, Pandora, etc.) do not immediately fail. Instead they load for 2 minutes and then finally give up.
      - Devices connected over the WiFi (PS4, laptop) access the internet just fine.

    While removing the Cat5 cable from my house, I accidentally damaged the locking tab but managed to bend it back into the appropriate position. I would suspect a bad Cat5 cable if not for the above issues (though I've heard bad Cat5 cables cause the most nonsensical problems) and the fact that I tested the cable by using it to share internet from my laptop (working internet) to my desktop, and it functioned just fine and provided the desktop with internet. My ipconfig /all successfully finds a default gateway, DHCP server, and DNS server. What could possibly be causing the problem?
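
    Given that addresses are assigned but connections stall and then die, one quick way to narrow it down from the desktop - a sketch, with the gateway address as a placeholder taken from your own ipconfig /all output:

        ping 192.168.1.254        :: the default gateway address (placeholder)
        nslookup example.com      :: does name resolution work?
        ping -f -l 1472 8.8.8.8   :: does a full-size, don't-fragment packet get through?

    Slow-then-failing page loads while small pings work is a common signature of an MTU or duplex mismatch on the wired segment, so the last test is worth a look.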

    Read the article

  • High Load mysql on Debian server stops every day. Why?

    - by Oleg Abrazhaev
    I have a Debian server with 32 GB of memory running apache2, memcached and nginx. Memory usage is always at the maximum, with only 500 MB free, and the biggest memory consumer is MySQL: Apache is configured for only 70 clients, and the other services use little memory. When MySQL uses up all the memory it stops, nothing works, and MySQL needs a restart. MySQL is configured to use at most 24 GB of memory. I have heavyweight InnoDB databases (400,000 rows, 30 GB), and a multithreaded daemon on the server makes many inserts into these tables - that's why InnoDB. Here is my MySQL config:

        [mysqld]
        #
        # * Basic Settings
        #
        default-time-zone = "+04:00"
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        default-time-zone='Europe/Moscow'
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #
        # * Fine Tuning
        #
        #low_priority_updates = 1
        concurrent_insert = ALWAYS
        wait_timeout = 600
        interactive_timeout = 600
        #normal
        key_buffer_size = 2024M
        #key_buffer_size = 1512M
        #70% hot cache
        key_cache_division_limit= 70
        #16-32
        max_allowed_packet = 32M
        #1-16M
        thread_stack = 8M
        #40-50
        thread_cache_size = 50
        #orderby groupby sort
        sort_buffer_size = 64M
        #same
        myisam_sort_buffer_size = 400M
        #temp table creates when group_by
        tmp_table_size = 3000M
        #tables in memory
        max_heap_table_size = 3000M
        #on disk
        open_files_limit = 10000
        table_cache = 10000
        join_buffer_size = 5M
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        #myisam_use_mmap = 1
        max_connections = 200
        thread_concurrency = 8
        #
        # * Query Cache Configuration
        #
        #more ignored
        query_cache_limit = 50M
        query_cache_size = 210M
        #on query cache
        query_cache_type = 1
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        server-id = 1
        log-bin = /var/lib/mysql/mysql-bin
        #replicate-do-db = gate
        log-bin-index = /var/lib/mysql/mysql-bin.index
        log-error = /var/lib/mysql/mysql-bin.err
        relay-log = /var/lib/mysql/relay-bin
        relay-log-info-file = /var/lib/mysql/relay-bin.info
        relay-log-index = /var/lib/mysql/relay-bin.index
        binlog_do_db = 24avia
        expire_logs_days = 10
        max_binlog_size = 100M
        read_buffer_size = 4024288
        innodb_buffer_pool_size = 5000M
        innodb_flush_log_at_trx_commit = 2
        innodb_thread_concurrency = 8
        table_definition_cache = 2000
        group_concat_max_len = 16M
        #binlog_do_db = gate
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        #skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 500M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 32M
        key_buffer_size = 512M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/

    Please help me make it stable. Memory used:

        /etc/mysql# free
                     total       used       free     shared    buffers     cached
        Mem:      32930800   32766424     164376          0     139208   23829196
        -/+ buffers/cache:    8798020   24132780
        Swap:     33553328      44660   33508668

    Maybe my problem is not memory, but MySQL stops every day. As you can see, about 24 GB is free as cache. (Thanks to Michael Hampton for the correction.) The load average on the server is 3.5. Maybe it's the HDD or another problem? Maybe my config is not optimal for 30 GB of InnoDB? I've already tried mysqltuner and tuning-primer.sh, but they mark everything green. mysqltuner output:

        >> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.5.24-9-log
        [OK] Operating on 64-bit architecture

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 112G (Tables: 1528)
        [--] Data in InnoDB tables: 39G (Tables: 340)
        [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
        [!!] Total fragmented tables: 344

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 8h 18m 33s (14M q [478.333 qps], 259K conn, TX: 9B, RX: 5B)
        [--] Reads / Writes: 84% / 16%
        [--] Total buffers: 10.5G global + 81.1M per thread (200 max threads)
        [OK] Maximum possible memory usage: 26.3G (83% of installed RAM)
        [OK] Slow queries: 1% (259K/14M)
        [!!] Highest connection usage: 100% (201/200)
        [OK] Key buffer size / total MyISAM indexes: 1.5G/5.6G
        [OK] Key buffer hit rate: 100.0% (6B cached / 1M reads)
        [OK] Query cache efficiency: 74.3% (8M cached / 11M selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 247K sorts)
        [!!] Joins performed without indexes: 106025
        [!!] Temporary tables created on disk: 49% (351K on disk / 715K total)
        [OK] Thread cache hit rate: 99% (249 created / 259K connections)
        [!!] Table cache hit rate: 15% (2K open / 13K opened)
        [OK] Open file limit used: 15% (3K/20K)
        [OK] Table locks acquired immediately: 99% (4M immediate / 4M locks)
        [!!] InnoDB data size / buffer pool: 39.4G/5.9G

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Reduce or eliminate persistent connections to reduce connection usage
            Adjust your join queries to always utilize indexes
            Temporary table size is already large - reduce result set size
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            max_connections (> 200)
            wait_timeout (< 600)
            interactive_timeout (< 600)
            join_buffer_size (> 5.0M, or always use indexes with joins)
            table_cache (> 10000)
            innodb_buffer_pool_size (>= 39G)

    tuning-primer.sh output:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.5.24-9-log x86_64

        Uptime = 0 days 8 hrs 20 min 50 sec
        Avg. qps = 478
        Total Questions = 14369568
        Threads Connected = 16

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these
        runtime variables effects performance visit:
        http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1.000000 sec.
        You have 260626 out of 14369701 that take longer than 1.000000 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is enabled
        Binlog sync is not enabled, you could loose binlog records during a server crash

        WORKER THREADS
        Current thread_cache_size = 50
        Current threads_cached = 45
        Current threads_per_sec = 0
        Historic threads_per_sec = 0
        Your thread_cache_size is fine

        MAX CONNECTIONS
        Current max_connections = 200
        Current threads_connected = 11
        Historic max_used_connections = 201
        The number of used connections is 100% of the configured maximum.
        You should raise max_connections

        INNODB STATUS
        Current InnoDB index space = 214 M
        Current InnoDB data space = 39.40 G
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 5.85 G
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 23.46 G
        Configured Max Per-thread Buffers : 15.84 G
        Configured Max Global Buffers : 7.54 G
        Configured Max Memory Limit : 23.39 G
        Physical Memory : 31.40 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 5.61 G
        Current key_buffer_size = 1.47 G
        Key cache miss rate is 1 : 5578
        Key buffer free ratio = 77 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is enabled
        Current query_cache_size = 200 M
        Current query_cache_used = 101 M
        Current query_cache_limit = 50 M
        Current Query cache Memory fill ratio = 50.59 %
        Current query_cache_min_res_unit = 4 K
        MySQL won't cache query results that are larger than query_cache_limit in size

        SORT OPERATIONS
        Current sort_buffer_size = 64 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 5.00 M
        You have had 106606 queries where a join could not use an index properly
        You have had 8 joins without keys that check for key usage after each row
        join_buffer_size >= 4 M
        This is not advised
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.

        OPEN FILES LIMIT
        Current open_files_limit = 20210 files
        The open_files_limit should typically be set to at least 2x-3x
        that of table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_open_cache = 10000 tables
        Current table_definition_cache = 2000 tables
        You have a total of 1910 tables
        You have 2151 open tables.
        The table_cache value seems to be fine

        TEMP TABLES
        Current max_heap_table_size = 2.92 G
        Current tmp_table_size = 2.92 G
        Of 366426 temp tables, 49% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables
        Note! BLOB and TEXT columns are not allow in memory tables.
        If you are using these columns raising these values might not impact your
        ratio of on disk temp tables.

        TABLE SCANS
        Current read_buffer_size = 3 M
        Current table scan ratio = 2846 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 185
        You may benefit from selective use of InnoDB.
        If you have long running SELECT's against MyISAM tables and perform
        frequent updates consider setting 'low_priority_updates=1'
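
    For what it's worth, the two reports above already show where the memory goes - a back-of-the-envelope check using only the numbers they print (arithmetic from the quoted output, not a tuning prescription):

        global buffers       ~10.5 G
        per-thread buffers    81.1 M x 200 connections = ~16.2 G
        worst-case total     ~26.3 G   (the "83% of installed RAM" figure)

    On top of that, tmp_table_size / max_heap_table_size of ~3 G allows each in-memory temporary table to grow to 3 GB, and 39.4 G of InnoDB data sits in a 5.9 G buffer pool - so the server is simultaneously over-committed on connection memory and starved on InnoDB caching.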

    Read the article

  • cannot log into mysql locally

    - by Lostsoul
    When I try to log into MySQL locally using the command mysql -u root -p, I get this error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    I can access the server remotely (not as root), and my web pages are using MySQL fine, but locally I cannot log on - which I need to do, because I need to create some users. The only change I made was to attach another drive to the server and move the SQL data there. Here's my.cnf:

        [mysqld]
        datadir=/media/ephemeral0/data/mysql
        socket=/media/ephemeral0/data/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        # adding more config
        skip-external-locking
        long_query_time=1
        slow_query_log
        slow_query_log_file=/var/log/log-slow-queries.log
        log-bin=mysql-bin
        server-id= 1

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
        myisam_recover_options

    I read that I need to edit the socket info in my.cnf to make sure it points to the right socket file. I double-checked, and the file exists (although it starts with an s when I do ls -l: "srwxrwxrwx 1 mysql mysql 0 Jun 21 03:43 mysql.sock"). I'm not really sure how to resolve this. I have tried rebooting and ran yum update to make sure I was running the latest packages. Please help!
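
    Note the mismatch above: the error shows the client looking for /var/lib/mysql/mysql.sock, while the [mysqld] section puts the socket at /media/ephemeral0/data/mysql/mysql.sock. The socket setting under [mysqld] moves the server's socket but doesn't tell the client where to find it, and the leading "s" in ls -l just marks the file as a Unix socket, which is expected. A minimal sketch of a fix:

        # either point the client at the new socket per-invocation:
        mysql --socket=/media/ephemeral0/data/mysql/mysql.sock -u root -p

        # or add a client section to my.cnf so every local tool finds it:
        [client]
        socket=/media/ephemeral0/data/mysql/mysql.sock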

    Read the article

  • Find out the type of an automounted device

    - by Steve Bennett
    I'm working on a system (Ubuntu Precise) with a mount defined in /etc/fstab as follows:

        /dev/vdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2

    Originally I just wanted to find out if it's NFS (due to potential MySQL locking issues). Judging from man mount, it's not:

        If no -t option is given, or if the auto type is specified, mount will try to guess
        the desired type. Mount uses the blkid library for guessing the filesystem type; if
        that does not turn up anything that looks familiar, mount will try to read the file
        /etc/filesystems, or, if that does not exist, /proc/filesystems. All of the
        filesystem types listed there will be tried, except for those that are labeled
        "nodev" (e.g., devpts, proc and nfs). If /etc/filesystems ends in a line with a
        single * only, mount will read /proc/filesystems afterwards.

    But, out of curiosity, how can I find out more about what type of device it actually is? (For context, this is a VM running on OpenStack. The device is a 60 GB allocation mounted from somewhere - but I don't know how.)

    EDIT - including answers here:

        $ mount
        /dev/vdb on /mnt type ext3 (rw,_netdev)

        $ df -T
        /dev/vdb ext3 61927420 2936068 55845624 5% /mnt
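
    A couple more ways to interrogate the block device directly - a sketch using standard tools (run as root where needed):

        blkid /dev/vdb          # filesystem type, UUID and label as seen by libblkid
        file -s /dev/vdb        # reads the superblock directly
        lsblk /dev/vdb          # size and device topology

    On OpenStack, /dev/vdb is a virtio block device, so from inside the guest it looks like a plain local disk regardless of what backs it on the host.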

    Read the article

  • FastCGI and Apache 500 error intermittently

    - by benkorn1
    Hello, I have a FastCGI (mod_fastcgi) problem. It happens every once in a while and does not cause a complete server meltdown, just 500 errors. Here are a couple of things. First, I am using APC, so PHP is in control of its own processes, not FastCGI. Also, I have the webroot set as /var/www/html and the fcgi-bin inside /var/www/html/fcgi-bin. First off, here is the apache error_log:

        [Fri Jan 07 10:22:39 2011] [error] [client 50.16.222.82] (4)Interrupted system call: FastCGI: comm with server "/var/www/html/fcgi-bin/php.fcgi" aborted: select() failed, referer: http://www.domain.com/

    I also ran strace on the 'fcgi-pm' process. Here is a snip from the trace around the time it bombs out:

        21725 gettimeofday({1294420603, 14360}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6503 38*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 96595}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6154 23*C /var/www/html/fcgi-bin/php.fcgi - - 6483 28*", 16384) = 92
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 270744}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 5741 38*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 311502}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6064 32*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 365598}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6179 33*C /var/www/html/fcgi-bin/php.fcgi - - 5906 59*", 16384) = 92
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 454405}, NULL) = 0

    I noticed that the select() stays the same regardless; however, the read() changes its return from 46 to some other number while it is bombing out. Has anyone seen anything like this? Could this be some sort of file locking? Thanks, Ben
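
    One hedged reading of the trace: the "(4)" in the error is errno 4, EINTR ("Interrupted system call"), and the strace shows mod_fastcgi wrapping its select() calls in alarm() timers - so a select() interrupted by the alarm signal and not retried would produce exactly this log line. The varying read() return values are just the byte counts of each status message, not necessarily the fault. To see whether the 500s line up with process-manager activity, correlating timestamps may help:

        grep -n "FastCGI" /var/log/httpd/error_log | less
        # compare against the strace gettimeofday values (seconds, microseconds)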

    Read the article

  • APC not caching many files

    - by tetranz
    Hello. I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory, yet Drupal has hundreds of PHP files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only differences with that site are that it runs Ubuntu 10.04 and PHP 5.3.2; the config settings are the same. What could be wrong? I'll paste the config from apc.php below. This is after hitting multiple parts of Drupal. Thanks.

        APC Version 3.1.6
        PHP Version 5.2.10-2ubuntu6.5
        APC Host xxx.example.com
        Server Software Apache/2.2.12 (Ubuntu)
        Shared Memory 1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
        Start Time 2010/12/02 11:32:17
        Uptime 3 minutes
        File Upload Support 1

        File Cache Information
        Cached Files 21 ( 1.4 MBytes)
        Hits 169
        Misses 21
        Request Rate (hits, misses) 1.00 cache requests/second
        Hit Rate 0.89 cache requests/second
        Miss Rate 0.11 cache requests/second
        Insert Rate 0.17 cache requests/second
        Cache full count 0

        User Cache Information
        Cached Variables 0 ( 0.0 Bytes)
        Hits 0
        Misses 0
        Request Rate (hits, misses) 0.00 cache requests/second
        Hit Rate 0.00 cache requests/second
        Miss Rate 0.00 cache requests/second
        Insert Rate 0.00 cache requests/second
        Cache full count 0

        Runtime Settings
        apc.cache_by_default 1
        apc.canonicalize 1
        apc.coredump_unmap 0
        apc.enable_cli 0
        apc.enabled 1
        apc.file_md5 0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl 3600
        apc.include_once_override 0
        apc.lazy_classes 0
        apc.lazy_functions 0
        apc.max_file_size 1M
        apc.mmap_file_mask
        apc.num_files_hint 1000
        apc.preload_path
        apc.report_autofilter 0
        apc.rfc1867 0
        apc.rfc1867_freq 0
        apc.rfc1867_name APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix upload_
        apc.rfc1867_ttl 3600
        apc.shm_segments 1
        apc.shm_size 32M
        apc.slam_defense 1
        apc.stat 1
        apc.stat_ctime 0
        apc.ttl 0
        apc.use_request_time 1
        apc.user_entries_hint 4096
        apc.user_ttl 0
        apc.write_lock 1

    Read the article

  • Diagnosing SAN connectivity issues (RHEL5)

    - by Matthew
    We are currently utilizing GFS2 to share a SAN LUN between 3 servers. However, due to a feature problem with vendor software we are using, we currently have the volume unmounted on two of the boxes and are instead exporting the GFS2 filesystem via NFS from the first one (the software requires some weird locking mechanics that GFS2 doesn't support). As of this morning, NFS was no longer able to read/write to the volume from any of the servers, including the NFS server itself. I then tried checking the normal mount (the directory that is exported on the NFS server) and received a weird input/output error just trying to cd into it. When I tried running multipath, I got a DM error; however, multipath -l worked just fine. I tried unmounting the GFS2 volume, and the CLI hung. I ran init 0, which killed most services, but the shutdown then appeared to hang. I logged in via out-of-band access (HP iLO) and saw that the shutdown was stuck trying to unmount GFS2 volumes. My main priority was getting the box back online, so after about 5 minutes of waiting I did a hard reset. I am now trying to figure out what went wrong. What are the correct logs to investigate? I've never run into SAN issues like this before. The SAN is connected via 2 fibre connections. Any help would really be appreciated. Everything appears to be up and functional now.
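
    On RHEL 5, the places to start looking after the fact would be the following - a sketch, assuming default logging locations:

        grep -i -e scsi -e "i/o error" -e multipath -e gfs2 /var/log/messages
        dmesg | grep -i -e "device-mapper" -e fc
        multipath -ll        # current path state for each LUN

    Fibre path loss typically shows up in /var/log/messages as SCSI errors and device-mapper path failure events, with GFS2 withdrawing or blocking once its journal writes start failing.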

    Read the article

  • Hubs/switches taking out switches?

    - by Bart Silverstrim
    Here's the issue... we have a network with a lot of Cisco switches. Someone plugged in a hub on the network, and then we started seeing "weird" behavior: errors in communication between clients and servers, network timeouts, dropped network connections, etc. It seemed that somehow that hub (or SOHO switch) was particularly freaking out our Cisco 3700 series switches. Disconnect that hub or Netgear-type SOHO switch, and things settled down again. We're in the process of trying to get a centralized logging server for SNMP and management, etc., to see if we can trap errors or narrow down when someone does this sort of thing without our knowledge, because things seem to work, for the most part, without issue; we just get freaky oddball incidents on particular switches that don't seem to have any explanation, until we find out someone decided to take matters into their own hands to expand the available ports in their room. Without getting into procedure changes, or locking down ports, or "in our organization they'd be fired" answers, can someone explain why adding a small switch or hub - not necessarily a SOHO router (even a dumb hub apparently caused the 3700s to freak out), sending DHCP requests out - will cause issues? The boss says it's because the Ciscos get confused because that rogue hub/switch is bridging multiple MACs/IPs into one port on the Cisco switches and they just choke on that, but I thought their MAC address tables should be able to handle multiple machines coming in on one port. Has anyone seen that behavior before, and does anyone have a clearer explanation of what's happening? I'd like to know for future troubleshooting and better understanding, rather than just waving my hand and saying "you just can't".

    Read the article

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution, but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however, it's acting pretty strange with a few files (note: the company name has been changed in paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL

        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%

        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company name data\, but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace these files were even attempted to be copied, as there are no errors in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.
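
    One detail in the options line is worth flagging: /NFL and /NDL suppress file and directory names in the log, so "no trace in the log" is partly self-inflicted. A sketch of a more talkative run for diagnosis (all standard Robocopy switches):

        SET options=/R:0 /W:0 /LOG+:%log_fname% /V /TS /FP

    /V includes skipped files in the output, and /TS and /FP add timestamps and full paths. Also, names like SONICP~1.DOC are 8.3 short names; if the real long names differ, it may be worth checking what the source directory actually contains with dir /x.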

    Read the article

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS, I just had this weird and potentially dangerous idea come into my head, whereby, just possibly, I might be able to use HA-Proxy to load balance a file share between servers. I've done some rudimentary packet traces, and it does appear that TCP port 445 is the only thing involved in using Windows file sharing. I've always thought, for many years, that UDP 139, 135, etc. were also involved in at least establishing the connection - but apparently not! So I set up a basic test:

        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445

    And you'll never guess what... it works! (?) Now obviously there is the whole concern about synchronisation between the file servers (of course); that could easily be taken care of with a little bit of Robocopy scripting. And considering I only need an HA read-only file share, there wouldn't be any issues with regard to file locking, etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all, and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?
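
    If this were taken further, health checks would be the first thing to add, so HA-Proxy stops sending clients to a dead server - a minimal sketch using standard HA-Proxy options (the balance algorithm chosen here is an assumption):

        listen SMBTest *:445
            mode tcp
            balance roundrobin
            option tcplog
            server Smb1 172.16.61.201:445 check
            server Smb2 172.16.61.202:445 check

    The check keyword enables periodic TCP connect checks against port 445. Note that SMB sessions are stateful, so a failover mid-session still breaks connected clients; this only helps new connections.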

    Read the article
