Search Results

Search found 10433 results on 418 pages for 'session replication'.

Page 19/418 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • How to reset mysql's replication settings completely, without reinstalling it?

    - by user38060
    I set up MySQL replication by adding references to binlogs, relay logs, etc. in my.cnf and restarting mysql, and it worked. Then I wanted to change the setup, so I deleted all binlog-related files (including log-bin.index), removed the binlog statements from my.cnf, restarted the server (works), and ran: set master to '', purge master logs since now(), reset slave, stop slave, stop master.

    Now, to set up replication again, I added the binlog statements back to the server. But when restarting with sudo mysqld (the only way to see mysql's startup errors), I hit this error:

        /usr/sbin/mysqld: File '/etc/mysql/var/log-bin.index' not found (Errcode: 13)

    Because indeed, this file does not exist! (I deleted it while trying to set up a new replication system.) Hmm, if I change the config line to log-bin-index = log-bin.index, I get a different error:

        [ERROR] Can't generate a unique log-filename /etc/mysql/var/bin.(1-999)
        [ERROR] MYSQL_BIN_LOG::open failed to generate new file name.
        [ERROR] Aborting

    The first time I set up replication on this system, I didn't need to create this file. I did the same thing then - added references to a previously non-existing file, and mysql created it; same with the relay logs, etc. I don't know why mysql insists on trying to read the old folder. Should I just reinstall the whole package again? That seems like overkill. My my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = IP
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        table_cache = 64
        sort_buffer = 64K
        net_buffer_length = 2K
        query_cache_limit = 1M
        query_cache_size = 16M
        slow_query_log_file = /etc/mysql/var/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        expire_logs_days = 10
        max_binlog_size = 100M
        server-id = 3
        log-bin = /etc/mysql/var/bin.log
        log-slave-updates
        log-bin-index = /etc/mysql/var/log-bin.index
        log-error = /etc/mysql/var/error.log
        relay-log = /etc/mysql/var/relay.log
        relay-log-info-file = /etc/mysql/var/relay-log.info
        relay-log-index = /etc/mysql/var/relay-log.index
        auto_increment_increment = 10
        auto_increment_offset = 3
        master-host = HOST
        master-user = USER
        master-password = PWD
        replicate-do-db = DBNAME
        collation_server = utf8_unicode_ci
        character_set_server = utf8
        skip-character-set-client-handshake

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash

        [myisamchk]
        key_buffer_size = 16M
        sort_buffer_size = 8M

        [mysqlhotcopy]
        interactive-timeout

        !includedir /etc/mysql/conf.d/

    Update: Changing all the /etc/mysql/var/xxx paths in the binlog and relay log statements to local paths has somehow solved the problem. I thought it was AppArmor causing it at first, but when I added "/etc/mysql/* rw," to AppArmor's config and restarted it, mysql still couldn't read the full path.
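
    For reference, the SQL-level half of a complete replication reset looks roughly like this (a minimal sketch for 5.0/5.1-era MySQL; it clears binlog and relay-log state, but not the my.cnf paths that turned out to be the real problem here). Note also that Errcode 13 is EACCES (permission denied) rather than "file missing", which fits the AppArmor suspicion in the update above.

        -- On the slave: stop both replication threads, then delete the relay
        -- logs and forget the recorded replication position.
        STOP SLAVE;
        RESET SLAVE;

        -- On the master: delete all binary log files and truncate the binlog
        -- index file, starting a fresh log sequence.
        RESET MASTER;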

    Read the article

  • Implementing Cluster Continuous Replication, Part 2

    Cluster continuous replication (CCR) helps to provide a more resilient email system with faster recovery. It was introduced in Microsoft Exchange Server 2007 and uses log shipping and failover. Configuring CCR on Windows Server 2008 requires different techniques from Windows Server 2003. Brien Posey explains all.

    Read the article

  • Is defragging tough on replication?

    - by Jim
    I've been told that defragging causes the log to grow tremendously. Is this true? If so, is there something better to do than defragging that will not impact the log as much? We are running SQL Server 2005, replicating between two sites.
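
    A hedged sketch of the usual answer (object names are hypothetical, not from the question): on SQL Server 2005, reorganizing an index instead of rebuilding it spreads the work over many small transactions, so the log never has to hold one huge operation:

        -- Reorganize: always online and fully logged, but in small batches,
        -- so log growth (and the replication log reader) sees smaller bursts.
        ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

        -- Rebuild: a single large logged operation; under the full recovery
        -- model this is what makes the log grow tremendously.
        -- ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;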

    Read the article

  • Path of Replication

    - by geeko
    I'm currently developing a replication system to keep data in sync between an arbitrary number of servers. Some of these servers exist in one cluster on one LAN; others exist somewhere else in the world. I'm wondering what the pros and cons are of the different paths we could choose to flow replicated data between servers. In other words, what are the different strategies for load balancing the replication process?

    Read the article

  • What is the fastest method to restore MySQL replication?

    - by dwhere
    I have a MySQL (5.1) master-slave replication pair, and replication to the slave has failed. It failed because the master ran out of disk space and the relay logs became corrupt. The master is now back online and working properly. Because of this error, the slave process can't simply be restarted. The server has a single 40GB InnoDB database, and I would like to know the fastest method for getting the slave back in sync, to minimize downtime.
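
    A sketch of the common approach (not necessarily the accepted answer): reseed the slave from a consistent copy of the master, then point it at the matching binlog coordinates. For an all-InnoDB database, mysqldump's --single-transaction and --master-data options can take that copy without a long global lock. The slave side then looks roughly like this, with host, file, and position as placeholders:

        -- On the slave, after loading the consistent copy of the data:
        STOP SLAVE;
        CHANGE MASTER TO
            MASTER_HOST     = 'master.example.com',
            MASTER_LOG_FILE = 'mysql-bin.000123',
            MASTER_LOG_POS  = 4;
        START SLAVE;

        -- Verify that both the I/O and SQL threads report Yes:
        SHOW SLAVE STATUS;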

    Read the article

  • Replicating A Volume Of Large Data via Transactional Replication

    During weekend maintenance, members of the support team executed an UPDATE statement against the database on the OLTP server. This database was part of transactional replication, and once the UPDATE statement was executed, replication came to a halt with an error message. Satnam Singh decided to work on this case and try to find an efficient way to rebuild replication without significant downtime.

    Read the article

  • Designing & Maintaining SQL Server Transactional Replication Environments

    Microsoft IT protects against unplanned transactional replication outages and issues by using best practices and proactive monitoring. This results in increased stability, simplified management, and improved performance of transactional replication environments.

    Read the article

  • Steps to Rename a Subscriber Database for SQL Server Transactional Replication

    I have transactional replication configured in production. The business team has a requirement to rename the subscription database. Is it possible to rename the subscription database and ensure that transactional replication will continue to function as before? If so, how could we achieve this?
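
    The usual shape of the answer, sketched here with hypothetical names (whether the new subscription can avoid a full reinitialization depends on the publication settings, so treat this as an outline rather than the article's exact steps):

        -- 1. On the publisher, drop the existing subscription:
        EXEC sp_dropsubscription
            @publication = N'MyPublication',
            @article     = N'all',
            @subscriber  = N'SubscriberServer';

        -- 2. On the subscriber (requires exclusive access to the database):
        ALTER DATABASE OldSubscriberDB MODIFY NAME = NewSubscriberDB;

        -- 3. Back on the publisher, create a subscription against the new name:
        EXEC sp_addsubscription
            @publication    = N'MyPublication',
            @subscriber     = N'SubscriberServer',
            @destination_db = N'NewSubscriberDB';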

    Read the article

  • Advantage Database Replication

    - by Jon
    I have a client that wants two sites to be able to sync databases, so information at Site A can be synced with Site B and the two sites can look at the same data. I'm not even sure of the infrastructure required. Would a VPN be required to connect the two databases, or would an internet-based database work, i.e. Site A to InternetDatabase and Site B to InternetDatabase? Each site would copy data to it periodically, then the InternetDatabase would sync it and the sites could pull data down. My other thought was something like Dropbox: if Site A and Site B use a Dropbox account to sync the ADT files etc., can the database at each site then sync with those ADT files? Thanks

    Read the article

  • Data Distribution with SQL Server Replication

    This paper provides a foundation for understanding data replication, as well as a discussion of the criteria for selecting an appropriate replication technology.

    Read the article

  • How to implement session timeout on the web server side?

    - by Morgan Cheng
    I came across a web framework that implements in-memory sessions this way: the session object is added to a cache with a timeout, and when the time is out, the session is removed from the cache automatically. To protect against race conditions, each request must acquire a lock on the given session object to proceed, and each request "touches" the session in the cache to refresh the timeout. Everything looks fine, until this scenario is discovered. Say one operation takes a long time, longer than the timeout. Another request comes in and waits on the session lock, which is currently held by the long-running request. Finally, the long-running request finishes and releases the lock. But since it took longer than the timeout, the session object has already been removed from the cache: the only request holding the lock never had a chance to "touch" the session object and refresh it. The second request gets the lock but cannot retrieve the expired session object. Oops... To fix this issue, the second request has to re-create the session object. But that is like digging a buried body out of its tomb and trying to bring it back to life; it leads to buggy code. I'm wondering what the best way is to implement session timeouts that handle this scenario. I know that current platforms must have good session mechanisms; I just want to know how they work under the hood.

    Read the article

  • What kind of storage with two-way replication for a multi-site C# application?

    - by twk
    Hi, I have a web-based system written in ASP.NET, backed by MSSQL. A synchronized replica of this system is to be run at mobile locations and must be available regardless of the state of the connection to the main system (interruptions a few hours long happen). For now I am using a copy of the main web application and a copy of the MSSQL server, with merge replication to the main system. This works unreliably, and setting up the replication is a pain. The amount of data the system contains is not huge, so I can migrate to a different storage type. For the new version of this system I would like to implement a new replication system. I am considering migrating to db4o for storage, with its replication support, and I am thinking about other possible solutions like CouchDB, which has native replication support. I would like to stay with C#. Could you recommend a way to go for such a distributed environment? PS. Master-slave replication is not an option: any side must be allowed to add/update data.

    Read the article

  • Change the Session Variable Output

    - by user567230
    Hello, I am using Dreamweaver CS5 with ColdFusion 9 to build a dynamic website. I have an MS Access database that stores login information, which includes ID, FullName, FirstName, LastName, Username, Password, AccessLevels. My question is this: I currently have a session variable that tracks the Username when it is entered on the login page. However, I would like to use that Username to pull the user's FullName, to display throughout the web pages and to use for querying data. How do I change the session variable to read FullName when users enter only their Username and password on the login page, not their FullName? I have listed my login code below; if any additional information is needed, please let me know. This is the path where the FullName values reside: datasource "Access", table "Logininfo", field "FullName". I want the FullName to be unique, based on the Username submitted from the login page. I apologize in advance for any rookie mistake I may have made; I am new to this but learning fast! Ha.

        <cfif IsDefined("FORM.username")>
            <cfset MM_redirectLoginSuccess="members_page.cfm">
            <cfset MM_redirectLoginFailed="sorry.cfm">
            <cfquery name="MM_rsUser" datasource="Access">
                SELECT FullName, Username, Password, AccessLevels FROM Logininfo
                WHERE Username=<cfqueryparam value="#FORM.username#" cfsqltype="cf_sql_clob" maxlength="50">
                AND Password=<cfqueryparam value="#FORM.password#" cfsqltype="cf_sql_clob" maxlength="50">
            </cfquery>
            <cfif MM_rsUser.RecordCount NEQ 0>
                <cftry>
                    <cflock scope="Session" timeout="30" type="Exclusive">
                        <cfset Session.MM_Username=FORM.username>
                        <cfset Session.MM_UserAuthorization=MM_rsUser.AccessLevels[1]>
                    </cflock>
                    <cfif IsDefined("URL.accessdenied") AND false>
                        <cfset MM_redirectLoginSuccess=URL.accessdenied>
                    </cfif>
                    <cflocation url="#MM_redirectLoginSuccess#" addtoken="no">
                    <cfcatch type="Lock">
                        <!--- code for handling timeout of cflock --->
                    </cfcatch>
                </cftry>
            </cfif>
            <cflocation url="#MM_redirectLoginFailed#" addtoken="no">
        <cfelse>
            <cfset MM_LoginAction=CGI.SCRIPT_NAME>
            <cfif CGI.QUERY_STRING NEQ "">
                <cfset MM_LoginAction=MM_LoginAction & "?" & XMLFormat(CGI.QUERY_STRING)>
            </cfif>
        </cfif>

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi, we have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication.

    The solution has worked well in test. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting DB or any of the system replication databases (e.g. distribution) fail, then it's no big deal - we can clear it all down and re-create the replication. I realise that taking a complete snapshot of the OLTP would be time-consuming (approx. 5 hours), but I'd be more relaxed about that than about trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than mine, e.g. if there were multiple subscribers with two-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.
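
    Under that thinking, the backup policy barely changes: the publisher keeps its usual full-plus-log schedule, and the replication databases are treated as rebuildable. A minimal sketch, with hypothetical database and path names:

        -- Publisher: the existing full and log backups remain the recovery basis.
        BACKUP DATABASE OLTP
            TO DISK = N'E:\Backup\OLTP_full.bak'
            WITH CHECKSUM;

        BACKUP LOG OLTP
            TO DISK = N'E:\Backup\OLTP_log.trn'
            WITH CHECKSUM;

        -- Reporting and distribution databases: no point-in-time recovery is
        -- needed if the plan on failure is to drop and re-create replication.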

    Read the article

  • Apache has many PHP session files

    - by PiTheNumber
        # ls /var/lib/php5 | wc -l
        7488

        # ls -la
        -rw------- 1 wwwrun www  0 Nov  9 15:30 sess_vtuh671rlafdidfjmgjfu6065p4tfieg
        -rw------- 1 wwwrun www  0 Nov 12 02:30 sess_vu9pn476oiqbsd20q4s2brt60b9vg90d
        -rw------- 1 wwwrun www  0 Nov  9 15:07 sess_vuonfs2cqsdiq8ja51ornh6lp5j9mf93
        -rw------- 1 wwwrun www  0 Nov  9 16:02 sess_vuutcad8as55il34db3uqhqrsltd4q6o
        -rw------- 1 wwwrun www  0 Nov  9 23:26 sess_vv2mrv5dnlnts6das4g5jlfldael4l0e
        -rw------- 1 wwwrun www 44 Nov  9 20:35 sess_vvc0cfjuvk3lqb5m97fv6gsmv6bjhsdk
        -rw------- 1 wwwrun www  0 Nov  9 10:33 sess_vvq82fhj9lg29gaejemlb2lrk25mqv7d
        -rw------- 1 wwwrun www  0 Nov  9 20:36 sess_vvtd4ka8rfmcroa34unl06916ubj8sb9

    Most of them are empty. There are not that many users on the server, so I wonder where these files came from. Is this a problem, and how does Apache handle these files? Do they get deleted automatically? Could this be caused by a bad PHP file?

    Read the article

  • Proper way to configure ~/.Xsession with a standalone window manager to gracefully end a session

    - by cYrus
    I'm using xdm, and my ~/.Xsession looks like this:

        # <initialization stuff here>
        exec openbox

    It works, but I've noticed that when I log out, Openbox doesn't gracefully kill all the applications; in particular, Google Chrome complains about that. How can I make sure all processes are waited on to exit (just like in other configurations: GNOME, KDE, Windows...)? The only (ugly) solution that I've found involves sleep and kill in ~/.Xsession.

    Read the article

  • What port should I open for MySQL master-master replication?

    - by Vanddel
    I have two servers running php5-fpm and a load balancer running nginx; the three servers share /var/www/drupal using NFS, and NFS is working correctly. I replicated the two servers' databases using MySQL master-master replication. Everything was working fine until I added my iptables rules. In my iptables script, I first drop all chains and then accept the ones I want; other than that, there are no other drop statements. I opened port 3306 for MySQL replication like this (the rule is on both servers):

        iptables -A INPUT  -p tcp -s $ip_Of_Other_Server --dport 3306 -j ACCEPT
        iptables -A OUTPUT -p tcp -d $ip_Of_Other_Server --sport 3306 -j ACCEPT

    The problem is, when I run both servers and try to log in with my account on Drupal, it doesn't log in, although I find a successful login attempt in Drupal's logs. When I run only one of the servers, I can log in normally, and when I allow everything in my iptables rules it works normally. I believe there's some port I need to open in iptables for the replication to work correctly, but I can't find which one.
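
    One way to check whether it is the replication link itself that the firewall broke (a diagnostic sketch; run in the mysql client on each master):

        -- If the firewall blocks the connection to the other master, the I/O
        -- thread cannot connect: Slave_IO_Running will not be 'Yes', and
        -- Last_IO_Error (on MySQL 5.1 and later) records the connect failure.
        SHOW SLAVE STATUS;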

    Read the article

  • What is the best way to recover from a mysql replication fail?

    - by Itai Ganot
    Today, the replication between our master MySQL DB server and the two replication servers dropped. I have a procedure here that was written a long time ago, and I'm not sure it's the fastest method to recover from this issue. I'd like to share the procedure with you, and I'd appreciate it if you could give your thoughts about it and maybe even tell me how it can be done quicker.

    At the master:

        RESET MASTER;
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;

    Copy the values from the result of the last command somewhere. Without closing the connection to the client (because it would release the read lock), issue the command to get a dump of the master:

        mysqldump mysq

    Now you can release the lock, even if the dump hasn't ended, by performing the following command in the mysql client:

        UNLOCK TABLES;

    Now copy the dump file to the slave using scp or your preferred tool. At the slave, open a connection to mysql and type:

        STOP SLAVE;

    Load the master's data dump with this console command:

        mysql -uroot -p < mysqldump.sql

    Sync the slave and master logs:

        RESET SLAVE;
        CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;

    where the values of the above fields are the ones you copied before. Finally, type:

        START SLAVE;

    To check that everything is working again, type SHOW SLAVE STATUS; you should see:

        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

    That's it! At the moment I'm at the stage of copying the DB from the master to the other two replication servers, and it has taken more than six hours so far. Isn't that too slow? The servers are connected through a 1Gb switch.
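
    One hedged speed-up for the dump step, assuming all tables are InnoDB: mysqldump's --single-transaction and --master-data options take a consistent snapshot without holding the global read lock for the whole dump, and record the matching coordinates in the dump itself, so the manual FLUSH TABLES WITH READ LOCK / SHOW MASTER STATUS / CHANGE MASTER TO steps fall away. With --master-data=1, the dump simply begins with an active statement of this form (file name and position illustrative):

        CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;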

    Read the article

  • ASP.NET MVC Session Expiration

    - by Andrew Flanagan
    We have an internal ASP.NET MVC application that requires a logon. Logging on works great and does what's expected. We have a session expiration of 15 minutes. After sitting on a single page for that period of time, the user loses the session. If they attempt to refresh the current page or browse to another, they get a logon page; we keep their request stored, so once they've logged in they can continue on to the page they requested. This works great. However, my issue is that some pages make AJAX calls. For example, a user may fill out part of a form, wander off, and let their session expire. When they come back, the screen is still displayed, but if they fill in a box (which makes an AJAX call), the AJAX call returns the logon page inside whatever div should have received the actual results. This looks horrible. I think the solution is to make the page itself expire, so that when a session is terminated, users are automatically returned to the logon screen without any action on their part. However, I'm wondering if there are opinions or ideas on how best to implement this, specifically with regard to best practices in ASP.NET MVC.

    Read the article

  • Checking if a session is active

    - by Josh
    I am building a captcha class. I need to store the generated code in a PHP session. This is my code so far:

        <?php
        class captcha {
            private $rndStr;
            private $length;

            function generateCode($length = 5) {
                $this->length = $length;
                $this->rndStr = md5(time() . rand(1, 1000));
                $this->rndStr = substr($rndStr, 0, $this->length);
                return $rndStr;
                if(session_id() != '') {
                    return "session active";
                } else {
                    return "no session active";
                }
            }
        }
        ?>

    And I am using this code to check:

        <?php
        include('captcha.class.php');
        session_start();
        $obj = new captcha();
        echo $obj->generateCode();
        ?>

    But it doesn't output anything to the page, not even a PHP error. Does someone know why this is? And is there a better way I can check whether I've started a session using session_start()? Thanks.

    Read the article

  • User roles - why not store in session?

    - by Phil
    I'm porting an ASP.NET application to MVC and need to store two items relating to an authenticated user: a list of roles and a list of visible item IDs, to determine what the user can or cannot see. We've used WSE with a web service in the past, and this made things unbelievably complex and impossible to debug properly. Now that we're ditching the web service, I was looking forward to drastically simplifying the solution, simply by storing these things in the session. A colleague suggested using the roles and membership providers, but on looking into this I've found a number of problems: a) it suffers from similar but different problems to WSE, in that it has to be used in a very constrained way, making it tricky even to write tests; b) the only caching option for the RolesProvider is based on cookies, which we've rejected on security grounds; c) it introduces no end of complications and extra unwanted baggage. All we want to do, in a nutshell, is store two string variables in a user's session (or something equivalent) in a secure way and refer to them when we need to. What seems like a ten-minute job has so far taken several days of investigation, and to compound the problem we have now discovered that session IDs can apparently be faked; see http://blogs.sans.org/appsecstreetfighter/2009/06/14/session-attacks-and-aspnet-part-1/ I'm left thinking there is no easy way to do this very simple job, but I find that impossible to believe. Could anyone: a) provide simple information on how to make ASP.NET MVC sessions secure, as I always believed they were? b) suggest another simple way to store these two string variables for a logged-in user's roles etc. without having to replace one complex nightmare with another, as described above? Thank you.

    Read the article
