Search Results

Search found 30279 results on 1212 pages for 'database drift'.


  • MySql Data Loss - post mortem analysis - RackSpace Cloud Server

    - by marfarma
    After a recent 'emergency migration' of an RS cloud server, the MySQL databases on our server snapshot image proved to be days out of date relative to the backup date. And yet files that were uploaded through the impacted webapp had been written to the file system. Related metadata that was written to the database was lost, but the files themselves were backed up. Once I was able to manually access the MySQL data files before the MySQL server started (the server was configured to start MySQL on boot), I was able to see that the update time for ib_logfile1, ib_logfile0 and ibdata1 was days old. As with this poster (mysql data loss after server crash), it's as if some caching controller had told the OS / MySQL server that it had committed data that was still in cache, and it was lost instead of flushed. I can't quite wrap my head around how the uploaded files got written but the database data did not. I would have thought that any cache would have flushed system-wide, rather than process by process. Any suggestions as to how this might have happened?
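
    The symptoms (uploaded files on disk but InnoDB days behind) match the poster's own hypothesis of writes stuck in a volatile cache. A hedged my.cnf sketch of the durability settings worth confirming on the rebuilt server; the option names are standard MySQL/InnoDB settings, the values are simply the conservative choices, and whether the cloud host's storage actually honours fsync is outside MySQL's control:

        [mysqld]
        # flush and fsync the InnoDB log at every commit (most durable setting)
        innodb_flush_log_at_trx_commit = 1
        # fsync the binary log on every commit as well
        sync_binlog = 1
        # bypass the OS page cache for data files where the platform supports it
        innodb_flush_method = O_DIRECT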

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (~2-3k requests per second, running on gigenet cloud). The database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and it is slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs: Debian Lenny 64bit, ~12Ghz (6 cores) CPU, 7520gb RAM, 160gb disk. Kernel: 2.6.32-4-amd64. mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)).

    Other software: vBulletin 3.8.4, memcached 1.2.2, PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27), lighttpd/1.4.28 (ssl) - a light and fast webserver. PHP and vBulletin are configured to use memcached.

    MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96
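
    If the tables are MyISAM (typical for vBulletin 3.x), a 128M key buffer is small for a ~10 GB database and forces constant index reads from disk. A hedged my.cnf sketch of the buffers worth revisiting; the values are illustrative assumptions for a box with plenty of free RAM, not a tuned recommendation:

        [mysqld]
        # MyISAM index cache -- size it to hold the hot indexes of the forum DB
        key_buffer_size         = 2G
        # if any tables are InnoDB, give InnoDB its own cache as well
        innodb_buffer_pool_size = 4G
        # complex vBulletin queries spill temporary tables to disk above this size
        tmp_table_size          = 64M
        max_heap_table_size     = 64M

    Comparing SHOW GLOBAL STATUS counters such as Key_reads vs. Key_read_requests and Created_tmp_disk_tables before and after a change is a cheap way to confirm whether the buffers, rather than the disk itself, are the bottleneck.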

    Read the article

  • Need help making an ODBC MySQL Connection

    - by Andy Moore
    Short version: How do I connect from PowerShell to the MySQL ODBC 5.1 driver? I can't seem to find any connection strings that actually have a "Provider" field for this particular setup. (See the bottom of this question for examples/errors.)

    Long version: I'm not a server guy, and I've been handed the task of setting up PowerGadgets on our network. I have a MySQL server running on a Linux box that is configured for remote access and has a user defined for remote access as well. On my Windows desktop PC, I have PowerGadgets installed. I installed the MySQL ODBC 5.1 connector and went to Control Panel > Data Sources and set up a User DSN connection to the database. The connection, user, and password seem to be correct because it lists the tables of the database in my Windows control panel. Where I'm running into trouble is in 3 places in PowerGadgets:

    1. When selecting a data source, I can select "SQL Server". Inputting the server's IP address does not work and I can't get this option to work at all.
    2. When selecting a data source, I can select "OleDB". This screen has a wizard on it that appears to populate all the correct information (including database table names!) for me. "Test Connection" runs great. But if I try to complete the wizard, I get the error "The .NET Framework data provider for OLEDB does not support the MS Ole DB provider for ODBC Drivers."
    3. When selecting a data source, I can select "ODBC". This screen does not have a wizard and I cannot figure out a connection string that works. Typically it will respond with the error "The field 'Provider' is missing". Googling ODBC connection strings doesn't reveal any examples with a "Provider" field and I have no idea what to put in here. The connection string for #2 above contains "SQLOLEDB" as a provider, and upon inputting that value into this connection string I get the same connection error that #2 gets.

    I believe I can solve my problems by figuring out a connection string for #3 but don't know where to get started. (PowerGadgets also allows for PowerShell support but I believe I will run into the same problem there.)

    Here's my current PowerShell connection that doesn't work:

        invoke-sql -connection "Driver={MySQL ODBC 5.1 Driver};Initial Catalog=hq_live;Data Source=HQDB" -sql "Select * FROM accounts"

    It spits back the error: "Invoke-Sql : An OLE DB Provider was not specified in the ConnectionString. An example would be, 'Provider=SQLOLEDB;'."

    Another string that doesn't work:

        invoke-sql -connection "Provider=MSDASQL.1;Persist Security Info=False;Data Source=HQDB;Initial Catalog=hq_live" -sql "select * from accounts"

    And the error: "The .Net Framework Data Provider for OLEDB (System.Data.OleDb) does not support the Microsoft OLE DB Provider for ODBC Drivers (MSDASQL). Use the .Net Framework Data Provider for ODBC (System.Data.Odbc)."
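
    If PowerGadgets can consume PowerShell output, one workaround is to skip OLE DB entirely and query through the .NET ODBC provider from PowerShell. A hedged sketch follows; the DSN name, database and table come from the post, the user name and password are placeholders, and whether PowerGadgets' own invoke-sql will accept a plain ODBC string is not assumed here:

        # Query MySQL through the ODBC 5.1 driver using System.Data.Odbc -- no "Provider=" needed
        $connString = "DSN=HQDB;Uid=pg_user;Pwd=secret;"
        # DSN-less alternative:
        # $connString = "Driver={MySQL ODBC 5.1 Driver};Server=db.example.com;Database=hq_live;Uid=pg_user;Pwd=secret;"

        $conn = New-Object System.Data.Odbc.OdbcConnection($connString)
        $conn.Open()
        $adapter = New-Object System.Data.Odbc.OdbcDataAdapter("SELECT * FROM accounts", $conn)
        $table = New-Object System.Data.DataTable
        $adapter.Fill($table) | Out-Null
        $conn.Close()

        $table | Format-Table -AutoSize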

    Read the article

  • SQL Server Restore from Backup, Just primary File Group

    - by bladefist
    Thankfully, this question is just a what-if, and I am not in an emergency right now. But I have created a filegroup in my database (SQL Server 2008) and moved some massive data tables over to it, leaving my website's central tables in the PRIMARY filegroup. In the event of a restore, can I restore just the primary filegroup and have a working database? Or do I have to restore both filegroups? I don't want my site down for ages while it restores the 2nd filegroup.
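
    For reference, a hedged T-SQL sketch of the piecemeal-restore pattern this scenario points at (database, filegroup and path names are placeholders). Whether the primary filegroup alone can be recovered first depends on the recovery model: under SIMPLE the initial stage must also include every read-write secondary filegroup, while under FULL the remaining filegroups can be restored later, with log backups applied before recovery:

        -- Bring the PRIMARY filegroup back first (piecemeal restore sketch)
        RESTORE DATABASE MySite
            FILEGROUP = 'PRIMARY'
            FROM DISK = N'D:\Backups\MySite_full.bak'
            WITH PARTIAL, NORECOVERY;

        -- Under the FULL recovery model, apply log backups here, then:
        RESTORE DATABASE MySite WITH RECOVERY;

        -- The central tables are now usable; queries touching the big filegroup fail
        -- until it is restored in its own step later:
        -- RESTORE DATABASE MySite FILEGROUP = 'BigData'
        --     FROM DISK = N'D:\Backups\MySite_full.bak' WITH RECOVERY;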

    Read the article

  • Website Upgrade - Avoid Downtime

    - by nolan.sipos
    I have been requested to investigate how I can reduce the downtime of our website upgrades. We maintain a DNN site with both public-facing pages and member-only pages. The member-only pages are directly linked to our core application database, while the public pages are not. Our current process is to redirect website users as soon as the upgrade process begins, which includes:

    1. Backup of the prod DB
    2. Update prod DB
    3. Update executables (application)
    4. Upgrade website application (if this requires an update)
    5. Install dependencies
    6. Upgrade sub-systems like the communication engine and payment broker
    7. Update various configuration files
    8. Perform testing of systems
    9. Restart all services
    10. Allow access to site

    This process can take from 2 to 8 hours depending on the upgrade required, scripts to be run, size of the database and number of portals. My initial thoughts are to restrict users to read-only pages and make any update pages unavailable. Could anyone please offer suggestions as to best practices for what I would think is a common problem, so that we can reduce this downtime? If we need infrastructure changes, I can put this to our technical department.

    Read the article

  • SQL log shipping for reporting

    - by Patrick J Collins
    I would like to create a read-only copy of my SQL Server 2008 database on a secondary server for reporting and analysis. I've been testing log shipping, configured to run every 5 minutes or so. Alas, there appears to be a stumbling block, for exclusive access is required on the target database during the restore, which in turn requires killing all active connections. This is far from ideal, especially if a user is in the middle of running a report. Any better suggestions? Edit : I'm doing this on the Express edition.
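
    For context, a hedged T-SQL sketch of what the restore step usually has to look like when the secondary is used for reporting (database name and paths are placeholders); the forced disconnect at the top is exactly the part the poster wants to avoid:

        -- Force off any report sessions, apply the next log, then reopen
        ALTER DATABASE ReportCopy SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

        RESTORE LOG ReportCopy
            FROM DISK = N'E:\LogShip\ReportCopy_tlog.trn'
            WITH STANDBY = N'E:\LogShip\ReportCopy_undo.dat';  -- STANDBY keeps the copy readable between restores

        ALTER DATABASE ReportCopy SET MULTI_USER;

    On the Express edition there is no SQL Agent, so a sequence like this is typically scheduled with Windows Task Scheduler and sqlcmd; if the disconnects are unacceptable, the usual alternatives discussed are transactional replication or a less frequent restore window rather than log shipping every few minutes.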

    Read the article

  • Postfix/Procmail mailing list software

    - by Jason Antman
    I'm looking for suggestions on mailing list software to use on an existing server running Postfix/Procmail. Something relatively simple. Requirements:

    • 1 list, < 50 subscribers
    • list members dumped into a certain file by a script (being pulled from LDAP or MySQL on another box)
    • handles MIME, images, etc.
    • moderation features
    • no subscription/unsubscription - just goes by the file or database

    Mailman is far too heavy-weight, and doesn't seem to play (easily) with Postfix/Procmail. I'm currently using a PHP script that just receives mail as a user, reads a list of members from a serialized array (a file dumped on the box via cron from the machine with the MySQL database containing members) and re-mails it to everyone. Unfortunately, we now need moderation capabilities, and I don't quite feel like adding that to the PHP script if there's already something out there that does it. Thanks for any tips. -Jason

    Read the article

  • How to correctly use DERIVE or COUNTER in munin plugins

    - by Johan
    I'm using munin to monitor my server. I've been able to write plugins for it, but only if the graph type is GAUGE. When I try COUNTER or DERIVE, no data is logged or graphed. The plugin I'm currently stuck on is for monitoring bandwidth usage, and is as follows:

    /etc/munin/plugins/bandwidth2

        #!/bin/sh
        if [ "$1" = "config" ]; then
            echo 'graph_title Bandwidth Usage 2'
            echo 'graph_vlabel Bandwidth'
            echo 'graph_scale no'
            echo 'graph_category network'
            echo 'graph_info Bandwidth usage.'
            echo 'used.label Used'
            echo 'used.info Bandwidth used so far this month.'
            echo 'used.type DERIVE'
            echo 'used.min 0'
            echo 'remain.label Remaining'
            echo 'remain.info Bandwidth remaining this month.'
            echo 'remain.type DERIVE'
            echo 'remain.min 0'
            exit 0
        fi
        cat /var/log/zen.log

    The contents of /var/log/zen.log are:

        used.value 61.3251953125
        remain.value 20.0146484375

    And the resulting database is:

        <!-- Round Robin Database Dump --><rrd>
          <version> 0003 </version>
          <step> 300 </step> <!-- Seconds -->
          <lastupdate> 1269936605 </lastupdate> <!-- 2010-03-30 09:10:05 BST -->
          <ds>
            <name> 42 </name>
            <type> DERIVE </type>
            <minimal_heartbeat> 600 </minimal_heartbeat>
            <min> 0.0000000000e+00 </min>
            <max> NaN </max>
            <!-- PDP Status -->
            <last_ds> 61.3251953125 </last_ds>
            <value> NaN </value>
            <unknown_sec> 5 </unknown_sec>
          </ds>
          <!-- Round Robin Archives -->
          <rra>
            <cf> AVERAGE </cf>
            <pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds -->
            <params>
              <xff> 5.0000000000e-01 </xff>
            </params>
            <cdp_prep>
              <ds>
                <primary_value> NaN </primary_value>
                <secondary_value> NaN </secondary_value>
                <value> NaN </value>
                <unknown_datapoints> 0 </unknown_datapoints>
              </ds>
            </cdp_prep>
            <database>
              <!-- 2010-03-28 09:15:00 BST / 1269764100 --> <row><v> NaN </v></row>
              <!-- 2010-03-28 09:20:00 BST / 1269764400 --> <row><v> NaN </v></row>
              <!-- 2010-03-28 09:25:00 BST / 1269764700 --> <row><v> NaN </v></row>
              <snip>

    The value for last_ds is correct; it just doesn't seem to make it into the actual database. If I change DERIVE to GAUGE, it works as expected. munin-run bandwidth2 outputs the contents of /var/log/zen.log. I've been all over the (sparse) docs for munin plugins, and can't find my mistake. Modifying an existing plugin didn't work for me either.
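
    One likely culprit: RRDtool's COUNTER and DERIVE data-source types accept only whole-number input, so fractional values like 61.3251953125 are rejected and the sample is stored as unknown, which matches the NaN rows above. A hedged sketch of an integer-emitting fetch line; whether the log values are gigabytes is an assumption, so the unit conversion is illustrative:

        # replace "cat /var/log/zen.log" with something that emits integers,
        # e.g. treat the values as GB and convert to whole bytes
        awk '{ printf "%s %.0f\n", $1, $2 * 1024 * 1024 * 1024 }' /var/log/zen.log

    Even then, DERIVE graphs the per-second rate of change of the value, so for a "bandwidth used so far this month" figure GAUGE may actually be the type that matches the intent.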

    Read the article

  • Unable to setup ssh tunnel on mac

    - by prashant
    On my office Windows XP laptop I use a program called Bitvise Tunnelier to establish an SSH tunnel to an in-house MySQL database. In the Tunnelier program I also need to provide the address of the corporate HTTP proxy server in order to establish the tunnel. On my personal Mac laptop, I use the Cisco Anywhere client to establish a VPN connection to my corporate network, but I'm unable to establish an SSH tunnel to the MySQL database using ssh. How do I specify the proxy server address in the ssh command? As additional info, when I'm using the office laptop (whether at home or in the office) I can successfully ping the server address specified in the Tunnelier program, but I cannot ping the same server from my Mac (even after connecting via VPN). So basically I'm unable to understand what's going on and what steps I can take to debug this problem.
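
    A hedged sketch of how an HTTP proxy is usually specified for OpenSSH on a Mac, using the BSD netcat that ships with OS X as a ProxyCommand; the proxy address, ports, host names and user names below are placeholders, and this assumes the corporate proxy allows CONNECT to the SSH port:

        # Tunnel local port 3306 to the MySQL box, going out through the corporate HTTP proxy
        ssh -o ProxyCommand="nc -X connect -x proxy.corp.example.com:8080 %h %p" \
            -L 3306:127.0.0.1:3306 \
            your_user@mysql-gateway.corp.example.com

        # then point the MySQL client at the local end of the tunnel:
        #   mysql -h 127.0.0.1 -P 3306 -u dbuser -p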

    Read the article

  • Simple SQL Server 2005 Replication - "D-1" server used for heavy queries/reports

    - by Ricardo Pardini
    Hello. We have two SQL 2005 machines. One is used for production data, and the other is used for running queries/reports. Every night, the production machine dumps (backs up) its database to disk, and the other one restores it. This is called the D-1 process. I think there must be a more efficient way of doing this, since SQL 2005 has many forms of replication. Some requirements:

    1) No need for instant replication; there can be (some) delay
    2) All changes (including schemas, data, constraints, indexes) need to be replicated without manual intervention
    3) It is used for a single database only
    4) There is a third server available if needed
    5) There is high bandwidth (gigabit ethernet) available between the servers
    6) There isn't shared storage (a SAN) available

    What would be a good alternative to this daily backup/restore routine? Thanks!
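
    One low-friction option that fits requirements 1 and 2 is plain log shipping: because it restores the physical transaction log, schema changes, constraints and indexes all come across without per-object configuration, and the cycle can run far more often than nightly. A hedged T-SQL sketch of the per-cycle pair of steps; names, the share and the schedule are placeholders:

        -- On the production server, every N minutes:
        BACKUP LOG ProdDb TO DISK = N'\\reportsrv\logship\ProdDb_1200.trn';

        -- On the reporting server, after copying the file locally:
        RESTORE LOG ProdDb
            FROM DISK = N'D:\LogShip\ProdDb_1200.trn'
            WITH STANDBY = N'D:\LogShip\ProdDb_undo.dat';  -- keeps the copy readable between restores

    The trade-off is that each RESTORE LOG needs exclusive access, so report sessions are briefly disconnected at every cycle; if that is unacceptable, transactional replication is the other commonly suggested route, at the cost of more per-object administration.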

    Read the article

  • getting PHP PDO flavors to work on Mac OS X

    - by Jason S
    I'm running OS X 10.5; it looks like it came with Apache and PHP installed (minus some minor configuration which I turned on per this page; I've used Apache before so I know the basics of how httpd.conf works). I've got a pre-existing script which uses PDO. I've got a MySQL database and can easily configure my script to access the database via PDO MySQL or PDO ODBC. The problem is that even though I enabled the PDO MySQL and PDO ODBC extensions in php.ini, phpinfo() reports that the only PDO drivers are sqlite2 and sqlite. I'm guessing the relevant extension .dll or .so files are not present? How do I get them? Note: I'm using the built-in install of PHP. (See Apple's page on enabling PHP, which doesn't say anything about configure or adding additional .so files.)
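
    A hedged sketch of how the missing driver is usually confirmed and then built against the bundled PHP; the pecl package name is real, but whether it compiles cleanly against the stock 10.5 PHP (it needs Xcode and the MySQL client headers) is an assumption, and many people instead switch to a third-party PHP build that already ships pdo_mysql:

        # Confirm which PDO drivers the bundled PHP actually has
        php -r 'print_r(PDO::getAvailableDrivers());'

        # Build the MySQL PDO driver for the system PHP (requires Xcode + MySQL dev headers)
        sudo pecl install pdo_mysql

        # Then enable it in /etc/php.ini and reload Apache:
        #   extension=pdo_mysql.so
        sudo apachectl graceful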

    Read the article

  • Why, after deleting a 110+ GB collection, does my /var/lib/mongodb directory still have the same size?

    - by tunnuz
    I am having some trouble with MongoDB and disk space usage. In particular, I used to have a large collection of about 600 million records totaling 110+ GB on disk. Recently I decided to drop it because the data was outdated; to do so, I dropped the collection through rockmongo's web interface. Accordingly, rockmongo no longer shows me the collection; however, my disk usage hasn't changed at all. Is there some clean-up operation, which I am not aware of, that must be run in order to synchronize the database with the database files on disk? I have tried to perform a "repair" but the system complains that there's not enough space on disk... that's because it is all used by MongoDB.
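
    In that era of MongoDB (mmap-based storage), dropping a collection frees space inside the preallocated data files but never returns it to the filesystem; only dropping the whole database or rewriting the files with a repair shrinks them on disk. A hedged sketch of the two usual options; paths and service names are placeholders, and the repair needs roughly as much free space as the data it keeps, which is why pointing it at a second volume helps:

        # Option 1: if nothing else lives in that database, drop it entirely --
        # this deletes its files on disk immediately
        mongo mydb --eval 'db.dropDatabase()'

        # Option 2: rewrite the data files, using a spare volume for scratch space
        sudo service mongodb stop
        sudo -u mongodb mongod --dbpath /var/lib/mongodb --repair --repairpath /mnt/spare/mongo-repair
        sudo service mongodb start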

    Read the article

  • What SDK should I choose on Amazon Web Services to build an API on a server app?

    - by Nguyen Minh Binh
    I am a newbie on Amazon Web Services. I have been given the task of setting up and then building a web service that provides APIs to Mac OS, iOS and Android clients. Some of the APIs and the database need to be kept secure. I see that AWS supports multiple platforms such as Java, .Net, PHP, etc. It also supports many database management systems, and there are two dedicated SDKs for Android and iOS apps. So, what should I choose (Java, .Net, PHP, ...) to carry out my task? Does AWS support all web service protocols? Does it support secure web services? Thanks a lot.

    Read the article

  • How to grow from single server setup

    - by Jenkz
    I'm looking for resources on how to grow our server setup. We currently have one dedicated server with Rackspace in the UK of the following spec:

    • HPDL385_G2_PrevGen, HP Single Dual Core Opteron 2214 (2.2Ghz)
    • 4GB RAM
    • 2x 10,000 SCSI drives in RAID 1

    Our traffic is up to 550,000 UVs per month. The site runs off a PHP and MySQL setup. The database gets an absolute hammering; we have many complex queries joining multiple tables. We are using APC for PHP caching. I'm getting to the stage where I've done as much DB and query optimisation as I can and wonder what the next step should be. I've looked at memcache, but I've got the impression that this requires a large amount of RAM and ideally a dedicated box. So is the next step to have two boxes, one for the database and one for Apache? Or is there a step I've overlooked? Our load is usually around the 2 mark, but right now it's up at 20!

    Read the article

  • SQL Server Analysis Services 2005 crash when disk is full?

    - by squillman
    One of our SQL boxes ran itself out of disk space last night. This particular server has both the database engine and Analysis Services on it. The database engine was not happy about having no disk space on the volume where all the data files are, but Analysis Services just plain died. At least, the only thing I have to blame is the full volume. Has anyone experienced an SSAS crash that they've been able to tie directly to running out of disk space? I've got nothing else in the SQL or event logs to blame...

    Read the article

  • Web authentication using LDAP and Apache?

    - by Stephen R
    I am working on a project to set up a web-administered inventory database for my work (or if they don't want it, then I'll enjoy learning about it) and have hit the problem of allowing only authorized users to access the website. (In its testing/development phase, I allow anyone to navigate to the website to add entries to the database and query it.) I am trying to make it so only particular users in the domain (Active Directory) are allowed to access the website after they are queried for their credentials. I read that Apache (I am using a LAMP server) has a means of asking visitors to the website to provide LDAP credentials in order to gain access to the site, but I wasn't sure if that was exactly what I was looking for. If anyone has experience with the LDAP configuration for Apache that I mentioned, or any other means of securely authenticating with websites, I would greatly appreciate advice or a direction to go. Thank you!
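
    A hedged sketch of the Apache side of this using mod_authnz_ldap (Apache 2.2-style directives); the host name, DNs and group below are placeholders for the actual Active Directory layout, and a bind service account is assumed because AD rarely permits anonymous searches:

        # Load once in the server config (on Debian/Ubuntu: a2enmod ldap authnz_ldap)
        LoadModule ldap_module        modules/mod_ldap.so
        LoadModule authnz_ldap_module modules/mod_authnz_ldap.so

        <Location /inventory>
            AuthType Basic
            AuthName "Inventory Database"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc1.example.local:389/DC=example,DC=local?sAMAccountName?sub?(objectClass=user)"
            AuthLDAPBindDN "CN=svc-apache,OU=Service Accounts,DC=example,DC=local"
            AuthLDAPBindPassword "change-me"
            # only members of this AD group get in
            Require ldap-group CN=Inventory Users,OU=Groups,DC=example,DC=local
        </Location>

    Basic auth sends credentials with every request, so putting the site behind HTTPS is the usual companion to this setup.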

    Read the article

  • How can I centralise MySQL data between 3 or more geographically separate servers?

    - by Andy Castles
    To explain the background to the question: we have a home-grown PHP application (for running online language-learning courses) running on a Linux server and using MySQL on localhost for saving user data (e.g. results of tests taken, marks of submitted work, time spent on different pages in the courses, etc.). As we have students from different geographic locations, we currently have 3 virtual servers hosted close to those locations (Spain, UK and Hong Kong), and users are added to the server closest to them (they access via different URLs, e.g. europe.domain.com, uk.domain.com and asia.domain.com). This works but is an administrative nightmare, as we have to remember which server a particular user is on, and users can only connect to one server. We would like to somehow centralise the information so that all users are visible on any of the servers and users could connect to any of the 3 servers.

    The question is, what method should we use to implement this? It must be an issue that lots of people have encountered, but I haven't found anything conclusive after a fair bit of Googling around. The closest I have seen to solutions are:

    • something like master-master replication, but I have read so many posts suggesting that this is not a good idea as things like auto_increment fields can break;
    • circular replication, which sounded perfect, but to quote from O'Reilly's High Performance MySQL, "In general, rings are brittle and best avoided".

    We're not against rewriting code in the application to make it work with whatever solution is required, but I am not sure if replication is the correct thing to use. Thanks, Andy

    P.S. I should add that we experimented with writes to a central database and reads from a local database, but the response time between the different servers for writing was pretty bad. It's also important that written data is available immediately for reading, so if replication is too slow this could cause out-of-date data to be returned.

    Edit: I have been thinking about writing my own rudimentary replication script, which would involve giving each user a server ID to say which is his "home server"; e.g. users in Asia would be marked as having the Hong Kong server as their own server. Then the replication scripts (a PHP script set to run as a cron job reasonably frequently, e.g. every 15 minutes or so) would run independently on each of the servers in the system. They would go through the database and distribute any information about users whose "home server" is the server the script is running on to all of the other databases in the system. They would also need to pull in new information added to any of the other databases in the system where the "home server" flag is the server where the script is running. I would need to work out the details and build in the logic to deal with conflicts, but I think it would be possible; however, I wanted to make sure that there is not a correct solution for this already out there, as it seems like it must be a problem that many people have already come across.
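
    For what it's worth, the auto_increment collision problem that makes multi-master setups feel brittle is usually mitigated with per-server increment settings, regardless of whether replication or a home-grown script moves the rows around. A hedged my.cnf sketch for the three servers; the increment and offsets are the standard pattern, the specific numbering per location is just an assumption:

        # Spain (server 1)
        [mysqld]
        auto_increment_increment = 3
        auto_increment_offset    = 1

        # UK (server 2):        auto_increment_offset = 2
        # Hong Kong (server 3): auto_increment_offset = 3

    With that in place, each server generates ids from a disjoint sequence (1,4,7... / 2,5,8... / 3,6,9...), so rows created concurrently in different regions can be merged without key clashes.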

    Read the article

  • SQL Import to update existing records?

    - by Kenundrum
    I've got a table in a database that contains costs for items and gets updated monthly. To update these costs, we have someone export the table, do some magic in Excel, and then import the table back into the database. We're running MSSQL 2005 and using the built-in SQL Management Studio. The problem is that when importing back into the table, we have to delete all the records before we import, or else we get errors. The ideal situation would be for the import to recognize the primary keys and update the existing records instead of trying to create a second record with a duplicate key, which halts the import. The best illustration of the behavior we're trying to get can be found at http://sqlmanager.net/en/products/mssql/dataimport/documentation/hs2180.html (the update-or-insert example). Is something like this possible with the built-in tools, or do we have to get third-party software to make it happen?
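
    A hedged T-SQL sketch of the usual built-in-tools approach: import the spreadsheet into a staging table with the Import/Export wizard, then update and insert from it in two statements. Table and column names below are placeholders, and MERGE only becomes an option from SQL Server 2008 onward:

        -- 1) Import the Excel data into dbo.ItemCosts_Staging with the Import/Export wizard

        -- 2) Update rows that already exist
        UPDATE t
        SET    t.Cost = s.Cost
        FROM   dbo.ItemCosts AS t
        JOIN   dbo.ItemCosts_Staging AS s ON s.ItemID = t.ItemID;

        -- 3) Insert rows that are new
        INSERT INTO dbo.ItemCosts (ItemID, Cost)
        SELECT s.ItemID, s.Cost
        FROM   dbo.ItemCosts_Staging AS s
        WHERE  NOT EXISTS (SELECT 1 FROM dbo.ItemCosts AS t WHERE t.ItemID = s.ItemID);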

    Read the article

  • Selectively disable APC caching

    - by Victor
    I installed APC on my VPS and it works great with the W3 Cache WordPress plugin. My problem is that there is one database in MySQL which is polled by the client end every few seconds to see if there are new updates. This database contains time-sensitive information and hence can't be part of the cached data. How can I disable APC for this database or these files? Or can I set a very short expiry for a certain type of data? Any help is highly appreciated.
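
    If the goal is just to keep particular PHP files out of APC's opcode cache, APC has a filter directive for that; a hedged php.ini sketch, where the path regex is a placeholder. Note that this only affects file/opcode caching: query results that W3 Cache stores in APC are excluded from inside the plugin's database-cache settings, not from php.ini:

        ; php.ini -- exclude matching files from the APC opcode cache
        ; entries are POSIX regexes; a leading "-" means "do not cache"
        apc.filters = "-/var/www/html/wp-content/plugins/live-updates/.*"

        ; alternatively, give user-cache entries a short lifetime so polled data goes stale quickly
        apc.user_ttl = 5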

    Read the article

  • Troubleshooting source of heavy resource-usage on a windows server 2008 running multiple sites

    - by batman_man
    Hi, I am running about 10 ASP.NET websites on a hosted virtual server. The server runs Windows Server 2008; each website is backed by its own database running on SQL Server 2008 on the same box. Lately the box has seemed really slow. The only kind of investigation I could think of doing was looking in Task Manager, where I can see w3wp and sqlserver.exe jumping to 40% CPU usage every 5-10 seconds. What steps can I take to determine which of my websites is taking these resources and/or which database is getting hit the most? I do of course have SSMS installed on the machine as well. As you can tell, my sysadmin skills are very, very limited; any help would be much appreciated.
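
    As a starting point on the SQL side, a hedged sketch of a DMV query that ranks cached queries by CPU and shows which database each ran in (works on SQL Server 2008; the column selection is illustrative):

        SELECT TOP (20)
               DB_NAME(st.dbid)             AS database_name,
               qs.execution_count,
               qs.total_worker_time / 1000  AS total_cpu_ms,
               qs.total_logical_reads,
               SUBSTRING(st.text, 1, 200)   AS query_text
        FROM   sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;

    On the IIS side, giving each site its own application pool makes the individual w3wp.exe processes distinguishable in Task Manager, which narrows down which website is responsible for the spikes.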

    Read the article

  • Secure external connection to SQL Server (from third party software)

    - by Bart
    I have a SQL Express 2008 R2 server running on a server on an internal LAN. A few databases are used by some third-party software to store data, and a SQL Server user is used by this application to connect to the database. Now I need to access this database using a local installation of the software from an external PC. In this particular case a VPN connection is not the solution I am looking for. I have access to an external Linux server, so I tried SSH tunneling from the Windows server to the Linux server and used the external PC to tunnel it back from the Linux server to the client, but this is working very, very slowly. What are my other options to allow this external connection in a safe way?
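
    For reference, a hedged sketch of the double SSH hop written out explicitly (host names, the relay port and user names are placeholders); compression sometimes helps with chatty TDS traffic, but two hops through a distant relay will always add latency, so a VPN endpoint or a TLS wrapper such as stunnel terminating closer to the Windows server may end up being the faster safe option:

        # On the internal Windows server (OpenSSH or plink): publish its SQL port
        # on the Linux relay as port 11433
        ssh -C -N -R 11433:127.0.0.1:1433 tunnel@linux-gateway.example.com

        # On the external PC: pull that port down to localhost
        ssh -C -N -L 1433:127.0.0.1:11433 tunnel@linux-gateway.example.com

        # The third-party software then connects to 127.0.0.1,1433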

    Read the article

  • Powershell and long-running external tools?

    - by leeand00
    I'm trying to compact an MS-Access database using JetComp.exe from a PowerShell script. Here are the operative lines:

        # 4. Run JetComp
        LogWrite("Begin: Running JetComp")
        .\JETCOMP.EXE -src: $srcDB -dest: $dstDB | Out-Null   # Run this command and wait for it to finish...
        IfErrorExit("Error Compacting Database")
        LogWrite("End: Running JetComp")

    The JETCOMP.EXE program seems to complete long before it is actually finished, and the $dstDB ends up being smaller than the compact should even make it. Initially ($srcDB) it's about 1.8 GB, and by the time the command finishes it's about 300,000 KB (about 0.29 GB); that's a pretty long way off from 1.8 GB, which when compacted manually ends up being about 1.6 GB. Is there some sort of timeout I don't know about in PowerShell scripts?

    P.S. I know that when running JETCOMP.EXE manually, the system often detects it as "not responding" even though it's actually getting the job done, and waiting long enough will allow it to complete.
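
    A hedged sketch of forcing the script to block until the external tool really exits, using Start-Process with -Wait and -PassThru so the exit code can be checked afterwards; the exact quoting of JetComp's -src:/-dest: switches is an assumption worth testing, and LogWrite is the script's own helper:

        # Run JetComp synchronously and capture its exit code
        $proc = Start-Process -FilePath ".\JETCOMP.EXE" `
            -ArgumentList "-src:`"$srcDB`"", "-dest:`"$dstDB`"" `
            -NoNewWindow -PassThru -Wait

        if ($proc.ExitCode -ne 0) {
            LogWrite("Error Compacting Database (exit code $($proc.ExitCode))")
            exit 1
        }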

    Read the article
