Search Results

Search found 6441 results on 258 pages for 'schema compare'.

  • PHP scripts randomly becoming really slow to respond - Database lockup?

    - by webnoob
    Hi all, I wasn't sure whether to post this here or on Stack Overflow, so apologies if it's in the wrong place. I have about 7 PHP scripts running on a CentOS VPS. Each of these scripts contacts a game server and processes its logs; based on the logs it either runs some database queries or sends info back to the game server. I am having an issue where some of the scripts will randomly become REALLY slow to respond and I don't know where to start with my debugging. Each script connects to its own database schema, but on the same MySQL server. Each script does about 4 inserts per second and twice as many select statements on its respective database. I thought a database lockup might be causing the issue, but some console messages that are read from the database are still sent to the game server's console without issue every 30 seconds, even when the script is slow to respond to other commands. None of the scripts use much memory or CPU power, about 0.1% each. I know this information is really vague, but I don't know Linux very well at all (in fact, top is about my limit) and I really don't know where to start debugging this. Thanks.
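
    Since phpMyAdmin is available even without shell access, one low-effort check while a script is stalled is to look for blocked connections. These are standard MySQL statements (a starting point for diagnosis, not a full answer):

        -- Lists every connection, what it is running and how long it has been in that
        -- state; connections sitting in "Locked" or very long-running statements
        -- point to lock contention rather than slow PHP.
        SHOW FULL PROCESSLIST;

        -- For InnoDB tables, the TRANSACTIONS and LATEST DETECTED DEADLOCK sections
        -- show which statements are waiting on which locks.
        SHOW ENGINE INNODB STATUS;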

    Read the article

  • NAT ports - how do they work?

    - by Davidoper
    I have the following network schema:

        Computer A: three NICs:
            NIC 1 (eth0): DHCP, public internet
            NIC 2 (eth1): static 192.168.1.1, gateway for Computer B
            NIC 3 (eth2): static 192.168.2.1, gateway for Computer C
        Computer B: static 192.168.1.2, using gateway 192.168.1.1 (NIC 2).
        Computer C: static 192.168.2.2, using gateway 192.168.2.1 (NIC 3).

    So I applied this to get NAT working:

        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Every computer can connect to the internet now. I have been applying rules to the main computer (Computer A), like dropping connections to some ports, e.g. SSH:

        iptables -A INPUT -p tcp --dport 22 -j DROP

    But now, for instance, I would like to allow only connections on ports 20, 21, 22, 53 and 80 for Computer C, and ignore outside traffic that is not related to those ports. The allowed connections should be FROM Computer C to the outside, but not from the outside to Computer C (I mean, Computer C is not hosting any HTTP or SSH service, but it is going to use them as a client). I guessed this should be done like this:

        iptables -A OUTPUT -i eth2 -o eth0 -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT -i eth2 -o eth0 -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT

    The last rule (dropping any other traffic different from those) is at the end of the configuration, so -A should be appending correctly. The thing is... it is not working. If I write the last rule like this:

        iptables -A FORWARD -i eth2 -o eth0 -j DROP

    it just drops everything and, for instance, port 21 (previously opened as you can see above) is not working either. Can you tell me what I could have done wrong? I have been struggling with this problem for some time and I am unable to solve it. Thanks!
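
    One thing worth checking, assuming the standard Linux router setup described above: traffic that Computer C sends through Computer A traverses the FORWARD chain, not INPUT or OUTPUT (and -i is not valid in the OUTPUT chain), so per-port rules for Computer C need to live in FORWARD. A minimal sketch of that idea, with the interface names taken from the question and everything else as illustrative defaults:

        # New and established connections from Computer C (eth2) out to the internet (eth0),
        # limited to the ports mentioned in the question (UDP 53 added for normal DNS lookups)
        iptables -A FORWARD -i eth2 -o eth0 -p tcp -m multiport --dports 20,21,22,53,80 \
                 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -i eth2 -o eth0 -p udp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT

        # Only reply/related traffic is allowed back in towards Computer C
        # (loading the ip_conntrack_ftp / nf_conntrack_ftp module lets RELATED cover FTP data connections)
        iptables -A FORWARD -i eth0 -o eth2 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Everything else forwarded for Computer C is dropped
        iptables -A FORWARD -i eth2 -o eth0 -j DROP
        iptables -A FORWARD -i eth0 -o eth2 -j DROP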

    Read the article

  • RAID 5 Install on Ubuntu Server 12.04 [closed]

    - by tarabyte
    Environment: Ubuntu Server 12.04, installing from a bootable flash drive.
    Error: "No root file system is defined. Please correct this from the partitioning menu."
    I'm trying to set up a personal file server with software RAID 5. I just got three hard drives for this, but haven't found any solid documentation, and I'm unsure of the basic way to partition the drives. Can someone upload a screenshot of their "partition disks" screen so that I can compare it with mine (attached)? Should I set the bootable flag? Do I need a /home partition? A /boot partition? Should I "Use [my partition] as: Ext4 journaling file system", or make that field "physical volume for RAID"? I am an engineer, but I have only a cursory knowledge of all-things-Linux. If you know of any good learning resources I'd be happy to hear about those too (that way I don't have to blindly follow deprecated tutorials online). Well, an image would be here, but I don't have a high enough reputation yet (please vote up :)). Thank you.
    References I've looked into:
        https://help.ubuntu.com/community/Installation/SoftwareRAID
        https://help.ubuntu.com/12.04/serverguide/advanced-installation.html
        http://forevergeeks.com/setup-ubuntu-server-with-raid-5/
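
    For orientation, the installer's "physical volume for RAID" option is the point-and-click equivalent of what mdadm does from the command line, which can make the menu choices easier to map. A rough sketch of the manual version, assuming the three new disks are /dev/sda, /dev/sdb and /dev/sdc, each with a small first partition commonly kept outside the RAID 5 for /boot (installers of that era are much happier booting from a plain or RAID 1 /boot) and a second "Linux raid autodetect" partition for the array:

        # Build a 3-disk RAID 5 array from the second partition of each drive
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

        # Put a filesystem on the array; this becomes / (or a large /home), while /boot
        # stays on a plain partition outside the array
        mkfs.ext4 /dev/md0

        # Record the array so it is assembled on every boot
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u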

    Read the article

  • Read access to Active Directory property (uSNCreated)

    - by Tom Ligda
    I have an issue with read access to the uSNCreated property when doing LDAP searches. If I do an LDAP search with a user that is a member of the Domain Admins group (UserA), I can see the uSNCreated property for every user. The problem is that if I do an LDAP search with a user (UserB) that is not a member of the Domain Admins group, I can see the uSNCreated property for some users (UserGroupA) and not for others (UserGroupB). When I look at the users in UserGroupA and compare them to the users in UserGroupB, I see a crucial difference in the "Security" tab: the users in UserGroupA have "Include inheritable permissions from this object's parent" unchecked, while the users in UserGroupB have that option checked. I also noticed that the users in UserGroupA were created earlier, and the users in UserGroupB were created recently. It's difficult to quantify, but I estimate the boundary in creation time between the users in UserGroupA and UserGroupB is about 6 months ago. What can cause user creation to default to having that security property checked as opposed to unchecked? A while back (maybe around 6 months ago?) I changed the domain functional level from Windows Server 2003 to Windows Server 2008 R2. Would that have had this effect? (I can't exactly downgrade the domain functional level to test it out.) Is this security property actually the cause of the issue with read access to the uSNCreated property on LDAP searches? It seems correlated, but I'm not sure about causation. What I want in the end is for all authenticated users to have read access to the uSNCreated property for all users when doing an LDAP search. I would also be OK if I could grant read access for that property to an AD group; then I can control access by adding members to the group.

    Read the article

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, I wasn't really sure if this should go here or on Stack Overflow - admins, please move it if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of it to accelerate the site and reduce calls to the backend. However, we have some concerns and I was wondering how most people approach them:
        A/B testing - how do you test two "versions" of each page and compare them? I mean, how does Varnish know which page to serve up? If and how do you store separate versions of each page?
        Feature rollout - how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and then later increase that to 20%.
        Code deployments - how do you handle them? Do you purge your entire Varnish cache on every deployment? (We deploy on a daily basis.) Or do you just let it slowly expire (using TTLs)?
    Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.

    Read the article

  • PHP/MySQL Performance Testing with Just PHP

    - by Mike Gifford
    I'm trying to diagnose a server where the website is loading very slowly, but unfortunately my client has only provided me with FTP access. Since I have FTP access I can upload PHP scripts, but I can't set up any other server-side tools. I have access to phpMyAdmin, but not direct access to the MySQL server. It is also, unfortunately, a Windows server (and we've been a Linux shop for over a decade now). So, if I want to evaluate MySQL and disk speed performance through PHP on a generic server, what is the best way to do this? There are already tools like:
        https://github.com/raphaelm/php-benchmark
        https://github.com/InfinitySoft/php-benchmark
    But I'm surprised there isn't something that someone has already set up and configured to just run through and do some basic testing of a server's responsiveness. Every time we evaluate a new server environment it's handy to be able to compare it to an existing one quickly to see if there are any anomalies. I guess I'd just hoped that someone else had written a script to do this already. I know I have, but that was before GitHub, when there wasn't a handy place to post scraps of code like this. Originally posted in http://stackoverflow.com/questions/12321498/php-mysql-performance-testing-with-just-php but it was recommended that I re-post it here.
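
    A micro-benchmark along these lines only has to time a few representative operations and needs no special server access. Below is a minimal sketch of the disk part, written in Python purely for illustration (the same structure ports almost line for line to a PHP script you could upload over FTP); a database check would wrap the same timing pattern around a loop of INSERT and SELECT statements against a throwaway table:

        import os, tempfile, time

        # Time writing and re-reading a test file. The absolute numbers matter less
        # than comparing them between the servers you are evaluating.
        def disk_benchmark(size_mb=64, block=1024 * 1024):
            payload = os.urandom(block)
            fd, path = tempfile.mkstemp()
            try:
                start = time.time()
                with os.fdopen(fd, "wb") as fh:
                    for _ in range(size_mb):
                        fh.write(payload)
                    fh.flush()
                    os.fsync(fh.fileno())
                write_s = time.time() - start

                start = time.time()
                with open(path, "rb") as fh:
                    while fh.read(block):
                        pass
                read_s = time.time() - start
            finally:
                os.remove(path)
            return {"write_MB_s": size_mb / write_s, "read_MB_s": size_mb / read_s}

        print(disk_benchmark())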

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle Database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate out code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you do a REVOKE of a GRANT that is installed by default. That may be version dependent, as 11g seems more locked down in its default install. These are the ones I used in a recent setup. I'd like to know whether I missed anything or where you disagree (and why).
    Database parameters:
        Auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES)
        DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
        GLOBAL_NAMES to true
        OPEN_LINKS to 0 (did not expect them to be used in this environment)
        Character set: AL32UTF8
    Profiles: I created an amended password verify function that used the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.
    Developer logins:
        CREATE PROFILE profile_dev LIMIT
            FAILED_LOGIN_ATTEMPTS 8
            PASSWORD_LIFE_TIME 32
            PASSWORD_REUSE_TIME 366
            PASSWORD_REUSE_MAX 12
            PASSWORD_LOCK_TIME 6
            PASSWORD_GRACE_TIME 8
            PASSWORD_VERIFY_FUNCTION verify_function_11g
            SESSIONS_PER_USER unlimited
            CPU_PER_SESSION unlimited
            CPU_PER_CALL unlimited
            PRIVATE_SGA unlimited
            CONNECT_TIME 1080
            IDLE_TIME 180
            LOGICAL_READS_PER_SESSION unlimited
            LOGICAL_READS_PER_CALL unlimited;
    Application login:
        CREATE PROFILE profile_app LIMIT
            FAILED_LOGIN_ATTEMPTS 3
            PASSWORD_LIFE_TIME 999
            PASSWORD_REUSE_TIME 999
            PASSWORD_REUSE_MAX 1
            PASSWORD_LOCK_TIME 999
            PASSWORD_GRACE_TIME 999
            PASSWORD_VERIFY_FUNCTION verify_function_11g
            SESSIONS_PER_USER unlimited
            CPU_PER_SESSION unlimited
            CPU_PER_CALL unlimited
            PRIVATE_SGA unlimited
            CONNECT_TIME unlimited
            IDLE_TIME unlimited
            LOGICAL_READS_PER_SESSION unlimited
            LOGICAL_READS_PER_CALL unlimited;
    Privileges for a standard schema owner account:
        CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, CREATE JOB, CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER

    Read the article

  • Perl EPIC Not recognising installed CPAN modules

    - by Recc
    Eclipse on a Mac. Adding new modules was working fine until I installed Text::CSV_XS, which Eclipse doesn't recognise as being in @INC. For instance:

        use strict;
        use SOAP::Transport::HTTP;
        SOAP::Transport::HTTP::CGI->dispatch_to('C2FService')->handle;

        BEGIN {
            package C2FService;
            use vars qw(@ISA);
            @ISA = qw(Exporter SOAP::Server::Parameters);
            use SOAP::Lite;

            sub c2f {
                my $self     = shift;
                my $envelope = pop;
                my $temp     = $envelope->dataof("//c2f/temperature");
                return SOAP::Data->name(
                    'convertedTemp' => ( ( ( 9 / 5 ) * ( $temp->value ) ) + 32 ) );
            }
        }

    Here "use SOAP::Transport::HTTP;" is marked as an error; if I comment it out, "use SOAP::Lite;" is in turn marked as an error - not found, etc., the usual messages when a module is not installed. Both are installed with CPAN, and:

        $ perl -c soap-test.pl
        post-code-check.pl syntax OK

    Perl is fine, the CPAN tests all pass and the code works; only EPIC lags behind.

        $ pwd && ls
        /opt/local/lib/perl5/site_perl/5.12.4/SOAP
        Client.pod Lite Server.pod Constants.pm Lite.pm Test.pm Data.pod Packager.pm Trace.pod
        Deserializer.pod SOM.pod Transport Fault.pod Schema.pod Transport.pod Header.pod Serializer.pod Utils.pod

    And if I have "use" errors at the start of my files, the rest of the source is not error-checked.

    Read the article

  • LDAP Authentication woes

    - by Marcelo de Moraes Serpa
    Hello list, I have a local OpenLDAP server with a couple of users. I'm using it for development purposes; here's the LDIF:

        # Top level - the organization
        dn: dc=site, dc=com
        dc: site
        description: My Organization
        objectClass: dcObject
        objectClass: organization
        o: Organization

        # Top level - manager
        dn: cn=Manager, dc=site, dc=com
        objectClass: organizationalRole
        cn: Manager

        # Second level - organizational units
        dn: ou=people, dc=site, dc=com
        ou: people
        description: All people in the organization
        objectClass: organizationalunit

        dn: ou=groups, dc=site, dc=com
        ou: groups
        description: All groups in the organization
        objectClass: organizationalunit

        # Third level - people
        dn: uid=celoserpa, ou=people, dc=site, dc=com
        objectclass: pilotPerson
        objectclass: uidObject
        uid: celoserpa
        cn: Marcelo de Moraes Serpa
        sn: de Moraes Serpa
        userPassword: secret_12345
        mail: [email protected]

    So far, so good. I can bind with "cn=Manager,dc=site,dc=com" and the 12345678 password (the local server password, set up in slapd.conf). However, I would like to bind with any user under the people OU. In this case, I'd like to bind with:

        dn: uid=celoserpa, ou=people, dc=site, dc=com
        userPassword: secret_12345

    But I'm getting a "(49) - Invalid Credentials" error every time. I have tried through CLI tools (such as ldapadd, ldapwhoami, etc.) and also ruby/ldap; the bind with these credentials fails with an invalid credentials error. I thought that it could be an ACL issue, however the ACLs in slapd.conf seem to be right:

        access to attrs=userPassword
            by self write
            by dn.sub="ou=people,dc=site,dc=com" read
            by anonymous auth

        access to *
            by * read

    I was suspecting that maybe OpenLDAP doesn't compare against userPassword? Or maybe there is some ACL configuration I am missing that is somehow affecting read access to userPassword for this specific DN. I'm really lost here, any suggestion appreciated! Cheers, Marcelo.
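
    To take Ruby and application code out of the picture, it can help to reproduce the bind with OpenLDAP's own clients and to look at what is actually stored in userPassword. A sketch, with host and port as placeholders:

        # Simple bind as the user entry itself; on success this prints the bound DN,
        # on failure it reproduces the (49) Invalid credentials error outside the app.
        ldapwhoami -x -H ldap://localhost -D "uid=celoserpa,ou=people,dc=site,dc=com" -w secret_12345

        # Bind as the rootdn and inspect the stored userPassword value (e.g. stray
        # whitespace or an unexpected hash format would show up here).
        ldapsearch -x -H ldap://localhost -D "cn=Manager,dc=site,dc=com" -W \
            -b "ou=people,dc=site,dc=com" "(uid=celoserpa)" userPassword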

    Read the article

  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indices). There are several schemata: one for the (read-only, with irregular changes/updates) 'core model' and one for every user (about 20 people). The users can access the core and store data in their own schema, so everything is located in one database. The server runs CentOS and PostgreSQL 8.4, is used for scientific studies, exploration etc., and is running quite well. These days an upgrade of the DB storage hard disks is arriving - all with the same performance as the old ones. I'm looking for the best way to distribute the data across these disks. It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure if this is really worth the effort. It seems to be a much better idea to move the WAL files (the pg_xlog directory) to their own partition. http://www.postgresql.org/docs/8.4/static/wal-internals.html What are your opinions? Are there any tablespace- or partitioning-related performance documents / benchmarks?
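
    For reference, the usual 8.4-era mechanics are a symlink for pg_xlog and tablespaces for per-schema placement. A rough sketch, run as the postgres user, with the data directory and mount points purely as placeholders:

        # Move the WAL directory onto its own disk (the server must be stopped)
        pg_ctl -D /var/lib/pgsql/data stop
        mv /var/lib/pgsql/data/pg_xlog /mnt/waldisk/pg_xlog
        ln -s /mnt/waldisk/pg_xlog /var/lib/pgsql/data/pg_xlog
        pg_ctl -D /var/lib/pgsql/data start

        # Optionally give the user schemata their own tablespace on another disk
        psql -c "CREATE TABLESPACE user_space LOCATION '/mnt/userdisk/pg_user_space';"
        psql -c "ALTER TABLE someuser.big_table SET TABLESPACE user_space;"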

    Read the article

  • Format & Fresh Install Mac OS X Snow Leopard on Mac mini.

    - by sagar
    Hello everyone. I have purchased a DVD of Snow Leopard (Mac OS X 10.6.2). My Mac mini came with Leopard (Mac OS X 10.5.7). I tried to install Mac OS X 10.6.2 and everything went perfectly; the system was installed successfully. But the problem I faced is as follows: the system was installed, but my older data remained as it was (meaning the installation didn't format everything - it was done as an upgrade). Now my system runs very slowly; the Mac mini previously performed about twice as well as the current upgraded version. So my questions are as follows. Does an upgrade installation cause performance issues in Mac OS X? Or is Snow Leopard too demanding for the Mac mini (2 GHz Intel Core 2 Duo, 1 GB RAM - is this configuration OK for Snow Leopard)? Does a fresh install work better than an upgrade?

    Read the article

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS/Apps&Data/Bulk data), all of which should be mirrored. Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or if Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID1 are rather different than RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1 (for example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in-flight, as long as there's no data corruption). Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring -- not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.

    Read the article

  • Nginx Slower than Apache??

    - by ichilton
    Hi, I've just set up two identical Rackspace Cloud instances and am doing some comparisons and benchmarks between Apache and Nginx. I'm testing with a 3.4k PNG file, initially on 512MB server instances but now on 1024MB server instances. I'm very surprised to see that whatever I try, Apache seems to consistently outperform Nginx... what am I doing wrong?
    Nginx:
        Server Software:        nginx/0.8.54
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   2.320 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3612000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    431.01 [#/sec] (mean)
        Time per request:       232.014 [ms] (mean)
        Time per request:       2.320 [ms] (mean, across all concurrent requests)
        Transfer rate:          1520.31 [Kbytes/sec] received

        Connection Times (ms)
                     min  mean[+/-sd] median   max
        Connect:       0   11   15.7      3     120
        Processing:    1   35   76.9     20    1674
        Waiting:       1   31   73.0     19    1674
        Total:         1   46   79.1     21    1693

        Percentage of the requests served within a certain time (ms)
          50%     21
          66%     39
          75%     40
          80%     40
          90%     98
          95%    136
          98%    269
          99%    334
         100%   1693 (longest request)

    And Apache:
        Server Software:        Apache/2.2.16
        Server Port:            80
        Document Length:        3400 bytes
        Concurrency Level:      100
        Time taken for tests:   1.346 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      3647000 bytes
        HTML transferred:       3400000 bytes
        Requests per second:    742.90 [#/sec] (mean)
        Time per request:       134.608 [ms] (mean)
        Time per request:       1.346 [ms] (mean, across all concurrent requests)
        Transfer rate:          2645.85 [Kbytes/sec] received

        Connection Times (ms)
                     min  mean[+/-sd] median   max
        Connect:       0    1    3.7      0      27
        Processing:    0    3    6.2      1      29
        Waiting:       0    2    5.0      1      29
        Total:         1    4    7.0      1      29

        Percentage of the requests served within a certain time (ms)
          50%      1
          66%      1
          75%      1
          80%      1
          90%     17
          95%     19
          98%     26
          99%     27
         100%     29 (longest request)

    I'm currently using worker_processes 4; and worker_connections 1024; but I've tried and benchmarked different values and see the same behaviour with all of them - I just can't get it to perform as well as Apache, and from what I've read previously I'm shocked by this! Can anyone give any advice? Thanks, Ian

    Read the article

  • The bottlenecks of any computer, what to look for?

    - by WebDevHobo
    Whether it is a laptop or a desktop, any computer is made up of several pieces of hardware that communicate with each other, sending data back and forth to ensure that the user gets the desired results. I have seen some theoretical material on computers and hardware, but I wonder how it all comes together:
        CPU
        RAM
        Graphics card
        L1 cache
        L2 cache
        L3 cache
        FSB
        ... and all other things.
    Which is the biggest bottleneck? Why would a person not want or need a big value in one of those categories in certain situations? P.S.: when reading the specs of the i5 750 processor, I came across this description: "In place of the FSB, one or more high speed, point-to-point buses called Quick Path Interconnect (QPI) are used, formerly known as Common Serial Interconnect Bus or CSI. QPI features higher bandwidth than the traditional FSB and is better suited to system scaling." What is this, and how does it compare to the FSB? EDIT: I am not planning to buy a computer at all. The goal of this question is to understand the internal relations of the various hardware pieces, their specific functions and how they work together. For instance, I have heard that a somewhat higher-than-usual amount of L2/L3 cache can help speed up your computer. What is behind that claim? Also, I forgot to mention hard-disk RPM.

    Read the article

  • Choosing a new laptop

    - by chiongms
    I'm looking for a new laptop. I have seen a few:
        i) HP ProBook 4321s
        ii) HP ProBook 6440b
        iii) Dell Latitude E6410
    I found that these laptops are still very new - is that right? There aren't many comments about them. Can anyone help? I'm unsure how their graphics cards compare to each other:
        ProBook 4321s - Radeon HD4350
        ProBook 6440b - Radeon HD4550
        Latitude E6410 - NVS 3100M
    Most of the time I'll be running 3D CAD software and doing C++ programming. I have seen my friend's laptop with a Radeon HD4350 and it perfectly fulfils my needs, but I wonder how the other two compare. Another thing I'm unsure about is screen resolution: my current laptop is 1280x800 and I find it comfortable to use, but these two HP models only offer 1366x768 - will that make any large difference? Lastly, is there any way to estimate power consumption from the spec sheet? I would prefer one with a longer battery life; my current laptop is terrible and only lasted 1 hour even when it was new, and I'm not going to get another like that. Can anyone help me please? Thanks!

    Read the article

  • Why is Microsoft Windows Update taking so long to install?

    - by Mathieu Pagé
    Hi, I have a question that is not related to a problem I have - just something I'd like to understand. Why are Windows updates so slow? First, Windows Update needs to find which updates you need, and this takes about 5 minutes. What is happening behind the scenes during those 5 minutes? I would have thought that it would be enough to compare the updates you already have to the complete list of updates, or to check the version numbers of a couple of files. Then, when it comes time to install the updates, they also take a long time. Some 1 MB updates take 2, 3 or 5 minutes to install. What is taking so long? I would have thought that it was simply a matter of backing up the old file, uncompressing the new files and replacing the old file. This should be really fast. Is Windows doing something else? For comparison, under Linux you can find which updates you need in about 20 seconds, and installing them is usually pretty fast (the time to uncompress the files). I can do a complete upgrade of my Linux machine in about 25 minutes (download 600-800 MB of updates, hundreds of them, and install them), while under Windows 25 minutes is the time it needs just to find which updates are needed and install about 5-10 of them. I just updated a Windows XP Home machine from SP1a to SP3 plus all other updates and it took me more than 3 hours. Doing something like that in the Linux world takes about 30 minutes. I don't want to bash Microsoft here; I genuinely want to know what they do differently that makes it so slow.

    Read the article

  • Identify differences between MP3 files

    - by Thingomy
    I have 2 old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or are identical, and I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides and they both seem to play fine (but at nearly 10 minutes per song, listening to each twice, I haven't done many). I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in the ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10kb of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree. I found "How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?" -- but I can't run AllDup since I'm Linux-only, and from the sounds of it, it would only partially solve my issues anyway.
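
    One way to compare only the audio streams, assuming ffmpeg is available, is to hash the decoded audio so that ID3/tag differences are invisible; identical hashes mean the audio frames decode identically and only the metadata differs. A sketch (the file-list name is just a placeholder):

        # Hash the decoded audio of a single pair of files
        ffmpeg -v error -i dirA/song.mp3 -map 0:a -f md5 -
        ffmpeg -v error -i dirB/song.mp3 -map 0:a -f md5 -

        # Loop over a list of byte-wise different files and report real audio mismatches
        while read -r f; do
            a=$(ffmpeg -v error -i "dirA/$f" -map 0:a -f md5 -)
            b=$(ffmpeg -v error -i "dirB/$f" -map 0:a -f md5 -)
            [ "$a" = "$b" ] || echo "audio differs: $f"
        done < differing-files.txt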

    Read the article

  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. The formula for the probability of success of a rebuild, ignoring mechanical problems, is simple:
        error_probability = 1 - (1 - per_bit_error_rate)^bit_read
    and with 3 TB drives I get:
        38% probability of experiencing a URE (unrecoverable read error) for a 2+1-disk RAID5 (4.7% for enterprise drives)
        21% for a RAID1 (2.4% for enterprise drives)
        51% probability of error during recovery for the 3+1 RAID5 often used in SOHO products like Synology's. Most people don't know about this.
    Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant of multiple disk failures (RAID6/Z2, RAIDZ3 and RAID1 with multiple disks). If only the first disk is used for the rebuild, and the second one is read again from the beginning in case of a URE, then the error probability is the one calculated above squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are needed: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
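
    For anyone who wants to reproduce the single-disk-tolerance numbers above, here is a small sketch of the same formula in Python, assuming the commonly quoted error rates of 1e-14 per bit read for consumer drives and 1e-15 for enterprise drives:

        # P(at least one URE) while reading bits_read bits at a given per-bit error rate
        def ure_probability(bits_read, per_bit_error_rate=1e-14):
            return 1 - (1 - per_bit_error_rate) ** bits_read

        TB = 8e12            # bits per terabyte (10^12 bytes)
        disk = 3 * TB        # one 3 TB drive

        print(ure_probability(2 * disk))         # RAID5 2+1 rebuild reads 2 disks -> ~0.38
        print(ure_probability(1 * disk))         # RAID1 rebuild reads 1 disk      -> ~0.21
        print(ure_probability(3 * disk))         # RAID5 3+1 rebuild reads 3 disks -> ~0.51
        print(ure_probability(2 * disk, 1e-15))  # enterprise RAID5 2+1            -> ~0.047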

    Read the article

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers:
        1 central Linux server, a VPS
        8 satellite Linux servers, "crappy shared hostings"
    I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; no need to try to find "new" files on the satellite servers). There are a couple of catches though:
        I can't have any custom software on the satellite servers, or do strange command-line things that will auto-connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything.
        I need to send the files over FTP.
        I also need to have, on my central server, a list of the files that are available on each of the satellite servers, to make sure they are ready before I send traffic to them.
    If I were to do this manually, the steps would be:
        get the list of files on a satellite server
        compare it to my own, and send the files that are missing
        get the list of files again, and store it in my central database.
    I'd like to know what tools are out there that can alleviate as much of this as possible: first the syncing, and then the "getting the list of files available on the other server" part. I'm going to be doing everything from PHP; I'm not sure if there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel
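
    The whole push-and-list flow fits in a few lines of client code. Here is a minimal sketch using Python's standard ftplib purely to show the shape of it (hostnames, credentials and paths are placeholders); the same steps map directly onto PHP's ftp_connect / ftp_login / ftp_nlist / ftp_put functions:

        import os
        from ftplib import FTP

        # One-way push: upload any local file missing on a satellite server, then
        # return the remote listing so the central DB can record what is available.
        def push_missing(host, user, password, local_dir, remote_dir="."):
            ftp = FTP(host)
            ftp.login(user, password)
            ftp.cwd(remote_dir)
            remote_files = set(ftp.nlst())            # what the satellite already has
            for name in os.listdir(local_dir):
                path = os.path.join(local_dir, name)
                if os.path.isfile(path) and name not in remote_files:
                    with open(path, "rb") as fh:
                        ftp.storbinary("STOR " + name, fh)
            available = ftp.nlst()                    # re-list for the central database
            ftp.quit()
            return available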

    Read the article

  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
    Last week we decided to add some SunOS machines (uname -a = SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) to our running munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without the supervision of the service provider. Adding them to munin was fairly easy by writing a small socket service (if anyone is interested, I put it up on GitHub: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn). Yesterday I implemented/adapted the required plugins for our machines, and here the questions start.
    First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a munin plugin from these gives me a graph that is pretty much uninformative, compared to the default plugin for Linux nodes, which has a lot more detail; most importantly, the Linux graph shows how much memory is actually used by applications. So, first question: is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)?
    On to the next puzzle: looking at the graphs, I noticed activity in the "Paging in/out" graphs, even though the memory graph still shows unused memory. Upon further investigation, I found out that df reports that /tmp is mounted on swap. Drilling around on the web, I understood that df will display swap, but in fact it's mounted as a tmpfs; now I don't know if this explains the swap activity. The default munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values, and I already find it strange that it uses the cpu_stat module. So maybe I am simply misinterpreting the "paging" graphs? Second question: do the paging graphs indicate that parts of memory are paged to disk, or is the activity caused by file operations in /tmp?
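
    A few stock Solaris 10 commands that may help here, since they need no extra software (the first needs root because mdb -k reads the live kernel):

        # Kernel / anon (application) / page-cache / free breakdown of physical memory
        echo ::memstat | mdb -k

        # Raw page counters (physmem, pagesfree, pp_kernel, ...) usable from a plugin
        kstat -p unix:0:system_pages

        # Paging activity broken down by page type: executable, anonymous and file pages.
        # High fpi/fpo with low api/apo would point at file I/O (e.g. tmpfs /tmp) rather
        # than real memory pressure.
        vmstat -p 5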

    Read the article

  • Why is my global security group being filtered out of my logon token?

    - by Jay Michaud
    While investigating the effects of filtered tokens on my file permissions, I noticed that one of my global security groups is being filtered in addition to the regular system-defined filtered groups. My Active Directory environment is a single-domain forest on the Windows Server 2003 functional level. I'll call the domain "mydomain.example.com". I am logged onto a Windows Server 2008 Enterprise Edition machine (not a domain controller) as a member of the "MYDOMAIN\Domain Admins" group and the "MYDOMAIN\MySecurityGroup" global security group (among others). When I run "whoami /groups" from an elevated command prompt, I see the full list of groups to which my account belongs as expected. When I run "whoami /groups" from a regular, non-elevated command prompt, I see the same list of groups, but the following groups are described as "Group used for deny only":
        1. BUILTIN\Administrators
        2. MYDOMAIN\Schema Admins
        3. MYDOMAIN\Offer Remote Assistance Helpers
        4. MYDOMAIN\MySecurityGroup
    Numbers 1 through 3 above are expected based on Microsoft documentation; number 4 is not. The "MYDOMAIN\MySecurityGroup" global security group is a group that I created. It contains three non-built-in global security groups, and these security groups contain only non-built-in user accounts. (That is, I created all of the accounts and groups that are members of the "MYDOMAIN\MySecurityGroup" global security group.) There are other, similar groups of which my account is a member that are not being filtered out of my logon token, and this group is not granted any specific user rights in the security settings of this computer or in Group Policy. What would cause this one group to be filtered out of my logon token?

    Read the article

  • Error importing large MySQL dump file which includes binary BLOBs in Windows

    - by Daniel Magliola
    I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing this from the command line, and I'm getting a very weird error:
        ERROR 2005 (HY000) at line 3118: Unknown MySQL server host '+?*á±dÆ-N+Æ·h^ye"p-i+ Z+-$?P+Y.8+|?+l8/l¦¦î7æ¦X¦XE.ºG[ ;-ï?éµ?º+¦¦].?+f9d릦'+ÿG?-0à¡úè?-?ù??¥'+NÑ' (11004)
    I'm attaching the screenshot because I'm assuming the binary data will get lost... I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 GB), which is not insanely large but not trivially small either, and the fact that many of these tables have JPG images in them (which is why the file is 2 GB, for the most part). Also, the dump was taken on a Linux machine and I'm importing it into Windows; I understand that shouldn't matter, but I'm not sure whether it could add to the problems. Now, that binary garbage is why I think the images in the file might be a problem, but I've been able to import similar dumps from the same hosting company in the past, so I'm not sure what the issue might be. Also, trying to look into this file (and line 3118 in particular) is kind of impossible given its size (I'm not really handy with Linux command-line tools like grep, sed, etc.). The file might be corrupted, but I'm not exactly sure how to check it. What I downloaded was a .gz file, which I "tested" with WinRAR and it says it looks OK (I'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that. Any ideas what could be going on / how to get past this error? I'm not very attached to the data in particular, since I just want this as a copy for dev, so if I have to lose a few records, I'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel
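
    Two quick checks that may narrow this down, assuming the gzip and mysql client tools are available (on the Linux source box, or via Cygwin/GnuWin32 on Windows); the database name is a placeholder. Verifying the archive uses gzip's built-in CRC, and piping the dump straight into the client with a larger packet size avoids problems with very large BLOB rows:

        # Verify the download is intact (gzip stores a CRC per member)
        gunzip -t dump.sql.gz

        # Import without unpacking to disk, allowing big rows with embedded JPGs
        gunzip -c dump.sql.gz | mysql -u root -p --max_allowed_packet=512M mydatabase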

    Read the article

  • What to look for in a reliable backup hard disk?

    - by Senthil
    I want to buy an internal hard disk and use a docking station along with it for backing up important data. The size will be around 500 GB to 1 TB. I have a budget and several models fit into it. So far, they only seem to vary in size, speed and brand; these are the only things I can compare from the specs. I guess asking which brand is best is completely subjective, so I won't do that. I want my disk to have a long life and be reliable. It doesn't matter if it is somewhat slow.
        Size: Should I go for the largest one within my budget, or will higher density cause problems? Or should I go for a moderately sized one? Does the number of platters have an impact?
        Speed: I do not want high performance, I want it to be reliable and last long. I am definitely not going to choose the expensive 10,000 rpm ones. Should I go for 5400 or 7200 rpm? Do these numbers affect longevity and reliability?
    Are there any other technical and objective factors that I should look for?

    Read the article

  • Replicated MongoDB server slower than simple shards

    - by displayName
    I tried to compare the performance of a sharded configuration against a sharded and replicated configuration. The sharded configuration consists of 8 shards on each of three different machines, for a total of 24 shards. All 8 of these shards run in the same partition on each machine. The sharded and replicated version is again 8 shards per machine, just like plain sharding, and all 8 mongods run on the same partition on each machine; but apart from this, each of these three machines now runs an additional 16 mongod processes on another partition, which serve as secondaries for the 8 mongods running on the other machines. This is the way I prepared a sharded and replicated configuration with data chunks having a replication factor of 3. An important point to note is that once the data has been loaded it is not modified, so after the primaries and secondaries have synchronized it doesn't matter which one I read from. To run the queries, I use an entirely different machine (let's call it config) which runs mongos, and this machine's only purpose is to receive queries and run them on the cluster. Contrary to my expectations, plain sharding with 8 shards on each machine (total = 3 * 8 = 24) is performing better for queries than the sharded + replicated configuration. I have a script written to perform the queries, so to time them I run time ./testScript and look at the result. I tried changing the read preference for the replicated cluster by logging into mongo on config, running db.getMongo().setReadPref('secondary'), then exiting the shell and running the queries with time ./testScript. The questions are:
        Where am I going wrong with the replication? Why is it slower than the plain sharded version?
        Does the db.getMongo().setReadPref('secondary') persist when I leave the shell and then perform the query?
    All four machines are running Linux, and I have already increased ulimit -n from the initial value of 1024 to 2048 to allow more connections. The collections are properly distributed and all the mongods have an equal number of chunks. It goes without saying that the indices in both configurations are the same.
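
    On the read-preference point: a preference set interactively in one mongo shell does not carry over to a separate process, so the test script has to request secondary reads through its own connection. As an illustration only (the test script's language and driver are not stated in the question), this is how it would look with a Python driver such as pymongo, with host, database and collection names as placeholders:

        from pymongo import MongoClient

        # readPreference travels with the connection the script itself opens via mongos;
        # secondaryPreferred falls back to the primary if no secondary is available.
        client = MongoClient("mongodb://config-host:27017/?readPreference=secondaryPreferred")
        coll = client["testdb"]["testcoll"]
        print(coll.count_documents({}))   # this query may now be served by a secondary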

    Read the article
