Search Results

Search found 9154 results on 367 pages for 'replication jobs job'.

Page 51 of 367

  • Sober, eye-catching color for a job site

    - by Knowledge Craving
    I will be creating a website related to jobs, which has almost all the features of Monster and Naukri. My client has asked me to use sober yet eye-catching colors for the site. Can anybody suggest which sober, yet eye-catching, colors work for this type of website?

    Read the article

  • Replacing DropBox with Amazon S3 + SSL + GPG/TrueCrypt + mounting on OS X?

    - by Matt Rogish
    So, right now we're using DropBox to share various data files among approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. We'd like to avoid any kind of local storage (sharing a disk on a desktop or something), since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3; the docs don't say, but the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use GPG to encrypt files. This seems like it could work; however, I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store; as we'd be using Amazon S3 as the "repository", they fail #3. Whew. So, I'd love a single tool that does all this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my three requirements; after that I can stitch together the rest. Any thoughts? THANKS!

    Read the article

  • Best practice for removing a DC from a site that no longer connects via VPN in another city

    - by dasko
    Hi, I am looking for a recap of what I have done already, to see if I missed anything. I had two cities connected over the WAN by a persistent IPsec tunnel between gateways. I had one domain controller (DC) in each city, each a global catalog (GC) server; they were set up to replicate, and I had them configured under Sites and Services with their own subnets, etc. About 6 months ago one city was removed, and I was not able to gracefully demote, through dcpromo, the server that was there. It is no longer used and cannot be brought back. The company went from two sites down to a single site. The problem is I had a whole bunch of KCC errors and replication bugs in the event viewer. I wanted to clean up my Active Directory, so I decided to use the ntdsutil metadata cleanup commands. I removed the server from the specified site based on a procedure from the Petri website. I then removed the instances of the old DC and site from Sites and Services. Then I cleaned up DNS by removing the host (A) records and the NS server name from both the local DNS forward lookup zone and _msdcs; I also removed the reverse lookup zone for the subnet that no longer exists. Is there anything I missed? Thanks in advance for any help. gd

    Read the article

  • MySQL is killing the server's IO

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the GigeNET cloud); the database is ~10GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow, according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated. Specs: Debian Lenny 64-bit, ~12GHz (6x2GHz) CPU, 7520MB RAM, 160GB disk. Kernel: 2.6.32-4-amd64. mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)). Other software: vBulletin 3.8.4, memcached 1.2.2, PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27), lighttpd/1.4.28 (ssl). PHP and vBulletin are configured to use memcached. MySQL settings:

        [mysqld]
        key_buffer = 128M
        max_allowed_packet = 16M
        thread_cache_size = 8
        myisam-recover = BACKUP
        max_connections = 1024
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        key_buffer_size = 128M
        join_buffer_size = 8M
        tmp_table_size = 16M
        max_heap_table_size = 16M
        table_cache = 96

    Other: from the cloud's IO chart, we're averaging 100MB/s read.

        > vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free  buff   cache   si  so   bi  bo   in  cs  us sy id wa
         9  0  73140  36336  8968 1859160    0   0   42  15    3   2   6  1 89  5

        > /etc/init.d/mysql status
        Threads: 49  Questions: 252139  Slow queries: 164  Opens: 53573
        Flush tables: 1  Open tables: 337  Queries per second avg: 61.302

    (moved from Superuser)
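
    Before reaching for replication, a hedged first step (my suggestion, not from the post) is to confirm where the IO is actually coming from; these are standard MySQL 5.1 diagnostics:

        -- Sketch: standard MySQL 5.1 status queries (not from the original post).
        -- Enable the slow query log at runtime to catch the offending queries:
        SET GLOBAL slow_query_log = 1;
        SET GLOBAL long_query_time = 1;

        -- Temporary tables spilling to disk are a common cause of heavy read IO;
        -- compare the disk vs in-memory counters:
        SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

        -- For MyISAM (vBulletin's default engine), a high Key_reads to
        -- Key_read_requests ratio means key_buffer_size is too small:
        SHOW GLOBAL STATUS LIKE 'Key_read%';

    With only a 128M key buffer and 16M tmp tables against a ~10GB database, either counter pair could plausibly explain the 100MB/s of reads.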

    Read the article

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about database replication for new-user signup, to which I got an answer, but I want to ask this in a more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly, since it will happen frequently) to determine whether the data is stale on the slave, so I can then route the read to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated on the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) that data will not be evicted within the time frame it takes to replicate it. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!
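
    One common approach, sketched here as my own suggestion rather than anything from the question: ask the slave how far behind it is, or block briefly until it has replayed the master's coordinates, and route to the master on timeout:

        -- Sketch, assuming standard MySQL replication; coordinates are examples.
        -- Option 1: cheap staleness check on the slave before reading;
        -- inspect the Seconds_Behind_Master column of:
        SHOW SLAVE STATUS;

        -- Option 2: after the write, capture the master's position on the master:
        SHOW MASTER STATUS;   -- e.g. File = 'mysql-bin.000123', Position = 456789
        -- ...then on the slave, wait up to 1 second for it to replay to there;
        -- it returns -1 on timeout, in which case send the read to the master:
        SELECT MASTER_POS_WAIT('mysql-bin.000123', 456789, 1);

    Option 2 gives read-your-writes semantics per request at the cost of an extra round trip, which may be acceptable given how frequently the check runs.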

    Read the article

  • Failover strategy for a 4-server scenario

    - by Joao Villa-Lobos
    Hi all, I am trying to figure out how to set up replication and failover in a scenario with 4 servers (2 per location) where any server may assume the master role. My initial scenario is the following: 2 servers in location A (one master, one slave); 2 servers in location B (two slaves). For this I'm thinking of using the master-master active-passive configuration suggested in O'Reilly's "High Performance MySQL" on all of them, so each one can become a master when needed. If the master dies, the other server in location A assumes the master role whenever possible; it will always have a higher priority than the servers in location B. A server in location B will only switch to master if no server in location A is able to do so. Since MySQL can't handle this automatically, I need some other way to implement it. I've already read about Heartbeat and Maatkit. Is this the way to go? Has anyone used this in a similar scenario? Is there some other way to achieve this? Any pointers about failover will be appreciated. I want to keep this as simple as possible, avoiding stuff such as DRBD. I'm not concerned about high availability, just a way to switch roles automatically without too much hassle. I'm using SuSE Enterprise 10 and MySQL 5.1.30-community. Thanks in advance, João
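
    For reference, a hedged sketch of what the role switch itself looks like once an external tool (Heartbeat, or a custom script) decides to promote; the hostnames are placeholders, not from the question:

        -- Sketch with placeholder names; MySQL won't make the decision itself.
        -- On the surviving location-A server being promoted:
        STOP SLAVE;
        SET GLOBAL read_only = 0;

        -- On each location-B slave, repoint to the new master:
        STOP SLAVE;
        CHANGE MASTER TO
          MASTER_HOST = 'dba2.example.com',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = '...',
          MASTER_LOG_FILE = 'mysql-bin.000042',  -- from SHOW MASTER STATUS on the new master
          MASTER_LOG_POS  = 4;
        START SLAVE;

    The priority scheme (location A before location B) then lives entirely in the tool that issues these statements.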

    Read the article

  • Can I replicate data between MySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated MySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure, and I need to replicate the data over to there as well. So that's annoying. I am faced with several options:

    1. Replicate from MySQL to SQL Azure/SQL Server - this seems like it would be lovely; is it possible? I'd consider using a third-party tool and paying $$ if I had to. We're not using anything complicated in the db feature set; it's just data in tables.
    2. Get MySQL working on Microsoft Azure - which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps."
    3. Go non-realtime and do syncs from MySQL to SQL Azure, which may be somewhat expensive and slower.
    4. Rip out all my MySQL on Amazon and use SQL Server there, which would make Baby Jesus cry.

    Has anyone gotten MySQL to SQL Azure/SQL Server replication or syncing working? Or have any other approaches (a NoSQL solution that replicates, might meet our but-we-need-to-join-some-tables needs, and can easily be run on Amazon and Azure)?

    Read the article

  • MySQL master-master not replicating

    - by frankil
    I'm setting up master-master MySQL replication on two servers (db1 and db2). I started by setting up db2 as a slave to db1, and that works fine. But when I set up db1 as a slave to db2, it isn't replicating. On the face of it everything looks fine, but the data isn't replicating. There are no errors in either of the error logs. The slave status is updating the binlog position. I have used mysqlbinlog to examine both the binlog on db2 and the relay log on db1, and all of the queries are going in there, but not being executed on db1. "show slave status" on both servers shows that both the slave IO and SQL threads are "Yes" and that the relay log position is updated by the SQL thread. Also on both servers:

        > echo "show processlist" | mysql | grep "system user"
        166819 system user NULL Connect 3655 Waiting for master to send event NULL
        166820 system user NULL Connect 3507 Has read all relay log; waiting for the slave I/O thread to update it NULL

    Relevant config for db1:

        server-id = 1
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset = 1
        master-host = db2
        master-port = 3306
        master-user = slaveuser
        master-password = ***
        skip-slave-start
        sync_binlog = 1
        binlog-ignore-db = mysql

    Config for db2:

        server-id = 2
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset = 2
        master-host = db1
        master-port = 3306
        master-user = slaveuser
        master-password = ***
        sync_binlog = 1
        relay-log = mysql-relay-bin
        binlog-ignore-db = mysql

    What else can I look for to make sure db1 executes the queries from db2?
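
    One hedged diagnostic angle (my suggestion, not from the post): with log-slave-updates and replicate-same-server-id = 0, db1 silently skips any relay-log event stamped with its own server id, so it's worth proving the test writes really originate on db2:

        -- Sketch: run on db1 (not from the original post).
        SELECT @@server_id;                 -- must differ from db2's value
        SHOW VARIABLES LIKE 'replicate%';   -- confirm no unexpected filters

        -- On db2, make a marker write that unambiguously originates there:
        CREATE DATABASE IF NOT EXISTS repl_test;
        CREATE TABLE repl_test.ping (id INT AUTO_INCREMENT PRIMARY KEY, ts DATETIME);
        INSERT INTO repl_test.ping (ts) VALUES (NOW());

        -- Back on db1: if this row never arrives, the SQL thread is skipping it.
        SELECT * FROM repl_test.ping;

    If the marker row does replicate, the statements that seemed to go missing were probably ones db1 originated itself, or writes made with the filtered mysql database as the default, which db1 is correct to skip.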

    Read the article

  • Question About mk-table-checksum Results

    - by stevenmusumeche
    Hello, I have 1 master and 2 slaves, all running MySQL 5.1.42. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id >= 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id >= 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id >= 265041
        IRC  orders             26     0         1         order_id >= 694472
        IRC  orders_product     6      0         1         op_id >= 935375
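
    For what it's worth, the raw disagreement can be inspected without re-running the tools; this sketch assumes the column layout maatkit's --replicate table normally has (this_cnt/this_crc computed on each server, master_cnt/master_crc replicated from the master):

        -- Sketch: run on the suspect slave; column names assume maatkit's
        -- standard --replicate checksum table.
        SELECT db, tbl, chunk, boundaries,
               this_cnt, master_cnt, this_crc, master_crc
        FROM IRC.mk_checksum
        WHERE master_cnt <> this_cnt
           OR master_crc <> this_crc
           OR ISNULL(master_crc) <> ISNULL(this_crc);

    Note also that a table like mk_heartbeat changes continuously by design, so it can legitimately show a CRC difference on every run.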

    Read the article

  • Good Enough Failover Strategy for DNS / MySQL / Email

    - by IMB
    I've asked and read a lot of questions regarding DNS failover, but the more I read, the more complicated it becomes; some people say it's good enough, some say it isn't. No clear answers from what I read. I was wondering if we can set it straight once and for all, at least for the requirements of most websites out there. Right now let's assume the following: we don't really need load balancing; what we need is a failover solution. We are running a website based on LAMP on a VPS. We need to make sure that the web server, MySQL, and email are always accessible, or at least 99% of the time. Basically here's my idea and my questions about it:

    Web server: We need at least one failover server (another VPS in a separate data center). Is DNS failover via round robin good enough; if not, what's the best approach? And how exactly do you implement it? How do you make sure the files you upload/delete on Server A are also on Server B?

    MySQL: I've only read a brief intro to MySQL replication, and I assume that I can replicate Server A to Server B and vice versa on the fly, right? So just in case Server A fails and Server B is now running, it will continue to work and replicate to Server A when it becomes available. So in essence Server B is now the primary server, and will later fail over to Server A should a failure happen again.

    Email: If we are going to use DNS failover, using webmail or relying on emails stored on the server is probably not a good idea, right? Since some emails might be on Server A while some might be on Server B. I assume a basic email forwarder to a third party (like Gmail, for example) is good enough to ensure all emails are kept in one place.

    Here's a basic diagram for a better picture: http://i.stack.imgur.com/KWSIi.png
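
    On the MySQL point, what's being described is the standard master-master (usually active-passive) setup; a hedged sketch of the moving parts, with invented hostnames:

        -- Sketch: placeholder names. Each server needs in my.cnf:
        --   server-id = 1 (A) / 2 (B), log-bin,
        --   auto_increment_increment = 2,
        --   auto_increment_offset = 1 (A) / 2 (B)   -- avoids key collisions
        -- Then on Server A, point at B (and mirror this on B, pointing at A):
        CHANGE MASTER TO
          MASTER_HOST = 'serverB.example.com',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = '...',
          MASTER_LOG_FILE = 'mysql-bin.000001',
          MASTER_LOG_POS  = 4;
        START SLAVE;

    Writes made on B while A is down do replicate back to A when it returns, as described; it's writing to both sides at once that invites sync problems, which is why most setups keep one side passive.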

    Read the article

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a dump of a live machine, which is now acting as the master. I used the following procedure to create it:

        mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql

    Then I imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases, and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (different location), so I would like to make sure that there is 100% data consistency between the servers. I've found this article where it says:

    "The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database."

    Thank you
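
    For the consistency concern, one hedged addition (not from the post): for InnoDB tables, mysqldump's --single-transaction flag dumps all databases from a single consistent snapshot, avoiding exactly the per-database drift the quoted article describes; it does not help for MyISAM tables. The slave wiring from the dump's recorded coordinates then looks like this sketch (placeholder values):

        -- Sketch: the values come from the commented CHANGE MASTER TO line
        -- that --master-data=2 writes near the top of dump.sql.
        CHANGE MASTER TO
          MASTER_HOST = 'master.example.com',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = '...',
          MASTER_LOG_FILE = 'mysql-bin.000217',
          MASTER_LOG_POS  = 106;
        START SLAVE;

        -- Confirm both threads run and Seconds_Behind_Master shrinks:
        SHOW SLAVE STATUS;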

    Read the article

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with 1 guest VM running directly on its local storage (so they are essentially getting a dedicated box's worth of computing power each). In the event of a host failure I would like the guests replicated to at least one of the other hosts, so I can spin them up there until the failing host is fixed. I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't want to ever have any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?

    1. Set up a DRBD partition between boxes 1 and 2 where VM 1 runs from, so it is replicated between box 1 and box 2; repeat between boxes 2 & 3, and boxes 3 & 1 (this could be insane; I have never used DRBD, only read about it).
    2. Just use the standard KVM CLI clone options to perform live clones (I'm dubious about this because I don't know how long it will take and what the performance impact will be).
    3. Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, where it can import that data (scripting this on the guest).
    4. Some other way? Ideas welcome!

    Side note: these servers have 4x 15k SAS drives in a RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage, no NAS or SAN, etc. That is why I am asking this question about guest replication. Also, this isn't about disaster recovery; guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.

    Read the article

  • What is the best time to set the IP address for a server headed to a colocation facility?

    - by jim_m_somewhere
    What is the best time to set the IP address for a server? I have a server that I am going to install the OS on, and then I am going to send it to a colocation facility. The server is going to have Internet-facing services (www, email, etc.). I can set up a "fake" IP address during install (by fake I mean private, as in RFC 1918) and change the "fake" IPs to the real IPs once I set up the colocation service. The other option is to set up the colocation service first, wait for them to give me the "real" IPs, and use them during the OS install. The ramifications are that if I use "fake" IPs during install, I will have to wait before I set up things like SSL certs. If I wait for IPs from the colocation provider, then I can set up SSL certs that use the "correct" (as in "real") IP addresses, with no changes to the certs until they expire. Do the "gotchas" of changing an IP address on a server outweigh the benefits of a quick install? The other danger with using "fake" IPs is that I could make a mistake when I go through the various files to change the IP address to the "live" IP address. Server OS: CentOS 6.2 or CentOS 6.3, 64-bit. Apps: Apache 2.4.x httpd, MySQL 5.x (will eventually use replication).

    Read the article

  • How to schedule a dynamic function with a cron job?

    - by iBrazilian2
    I want to know how I can schedule a dynamic (auto-populated data) function to run automatically every day at a saved time. Let's say I have a form that, once the button is clicked, sends the data to the function, which then posts the data. I simply want to automate that so that I don't have to press the button.

        <ul>
        <?php
        foreach ($Class->retrieveData as $data) {
            // echo was missing here; the hidden inputs also need a value
            // attribute or the form will not actually submit any data
            echo '<form method="post" action="">
                <li>
                    <input type="hidden" name="name" value="' . $data['name'] . '">' . $data['name'] . '<br/>
                    <input type="hidden" name="description" value="' . $data['description'] . '">' . $data['description'] . '<br/>
                    <input type="submit" name="post_data" value="Post">
                </li>
            </form>';
        }
        ?>
        </ul>

    Now, the form will pass the data to the function:

        if (isset($_POST['post_data'])) { // if the Post button was clicked, run myFunction()
            myFunction();
        }

        function myFunction() {
            $name = $_POST['name'];
            $description = $_POST['description'];
        }

    I tried doing the following, but the problem is that cron can only run a whole .php file, and I am retrieving the saved run time from MySQL:

        foreach ($Class->getTime() as $timeData) {
            $timeHour   = $timeData['timeHour'];
            $timeMinute = $timeData['timeMinute'];
            $hourMin    = date('H:i');
            $timeData   = $timeHour . ':' . $timeMinute;
            if ($hourMin == $timeData) {
                // run myFunction()
            }
        }

    $hourMin is the current hour:minute, which is matched against a saved time from MySQL. So if $hourMin == $timeData, then the function will run. How can I get cron to run myFunction() when $hourMin equals $timeData? So:

        List 1 = to be run at 10am
        List 2 = to be run at 12pm
        List 3 = to be run at 2pm

    The 10am, 12pm, and 2pm are the $timeHour and $timeMinute retrieved from MySQL, based on each list's id.

    EDIT @randomSeed,

    1) I can schedule cron jobs.

    2) $name and $description will all be arrays, so the following is what I am trying to accomplish:

        $name = array(
            'Jon',
            'Steven',
            'Carter'
        );

        $description = array(
            'Jon is a great person.',
            'Steven has an outgoing character.',
            'Carter is a horrible person.'
        );

    I want to parse the first elements from both $name and $description if the scheduled time matches. In the database I have the following postDataTime table:

        +----+---------+----------+------------+--------+
        | iD | timeDay | timeHour | timeMinute | postiD |
        +----+---------+----------+------------+--------+
        | 1  | *       | 9        | 0          | 21     |
        | 2  | *       | 10       | 30         | 22     |
        | 3  | *       | 11       | 0          | 23     |
        +----+---------+----------+------------+--------+

        iD         = auto-incremented on upload
        timeDay    = * is every day (cron-job style)
        timeHour   = hour of the day to run the script
        timeMinute = minute of the hour to run the script
        postiD     = the id of the post located in another table (n+1 relationship)

    If it's difficult to understand:

        if (time() == '10:30' /* the time from MySQL where postiD = 22 */) {
            // run myFunction with the data retrieved for that time, e.g.:
            $postiD = '22';
            $name = 'Steven';
            $description = 'Steven has an outgoing character.';
            // the above is what will be in the $_POST from the form and will be
            // sent to myFunction()
        }

    I simply want to schedule everything according to the times saved in MySQL, as I showed at the very top (postDataTime table). (I'd show what I have tried, but I have searched for countless hours for an example of what I am trying to accomplish and I cannot find anything, and what I tried doesn't work.) I thought I could use the exec() function, but it seems that it does not allow me to run PHP functions; otherwise I would do the following:

        $time = '10:30';
        if ($time == date('H:i')) {
            exec(myFunction());
        }
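
    A hedged sketch of the lookup the cron-driven script needs (the postDataTime names come from the question; the posts table it joins to is assumed for illustration):

        -- Sketch: find the posts due this minute; run the script from cron
        -- every minute and post whatever this returns, instead of comparing
        -- date('H:i') strings inside a long-running loop.
        SELECT t.postiD, p.name, p.description
        FROM postDataTime AS t
        JOIN posts AS p ON p.id = t.postiD     -- 'posts' table is assumed
        WHERE t.timeHour = HOUR(NOW())
          AND t.timeMinute = MINUTE(NOW());

    The crontab entry is then an unconditional run-every-minute line, with all the scheduling logic living in this query.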

    Read the article

  • Are there any Rails Plugins that use JS to poll the server for completion of a background process?

    - by btelles
    I know it might be simple to write some polling JS that asks the Rails application if it has completed a background job. But I wonder if anyone has already packaged up this functionality as a plugin. Basically, I have a few delayed_jobs running, and am looking for a plugin that will poll the server for completion of a particular job, then activate a callback when it finds that the job is complete. Any ideas anyone? Berns

    Read the article

  • How can I sync a database-driven website to a different server?

    - by tbrandao
    I have a website using cPanel on a dedicated account. I would like to be able to automatically sync the website to a second hosting company, or perhaps to a local (in-house) server. Basically this is a type of replication. The website is database driven (MySQL), so ideally it would sync everything (content, database, emails, etc.), but most important is syncing the website files and the database. I'm not so much looking for a failover solution as an automatic replication solution, so if the primary site (server) goes offline, I can manually bring up the replicated site quickly. I'm familiar with tools like unison and rsync, but most of these only sync files and do not do too well with open database connections.

    Read the article

  • Can learning a new language (human, not programming) help your career?

    - by Omar Kooheji
    I was wondering if anyone has experience of learning a new language (a human language, not a programming one) and whether it has helped you get ahead in your career. I'm assuming that the new language is one other than English, which I think is fairly essential to programming, as most programming resources seem to be in English. The reason I ask is that I speak Arabic and was told it would give me an edge when looking for jobs, and yet so far in my current job it's completely irrelevant, and in my last job it was used but I didn't get any special "appreciation" for speaking it.

    Read the article

  • What causes this error: please run "exec sp_register_custom_scripting 'CUSTOM_SCRIPT', your_script"?

    - by larryr
    Configuration: SQL 2005 (Server A) replicates to SQL 2008 (Server B), which replicates to SQL 2008 (Server C). I recently added a column (on Server A) to a replicated table via script, and the DDL change replicated to Server B without a problem. When the DDL change replicated to Server C, I received the error below.

    'DDL replication failed to refresh custom procedures, please run "exec sp_register_custom_scripting 'CUSTOM_SCRIPT', your_script, 'EDI from xx', 'table_name_here'" and try again (Source: MSSQLServer, Error number: 21814)'

    These subscriptions (on Server B to Server C) were created via the script below.

        exec sp_addsubscription
          @publication = N'EDI to XLOCX',
          @subscriber = N'RXLOCXS-SQLA',
          @destination_db = N'EDI',
          @subscription_type = N'Push',
          @sync_type = N'replication support only',
          @article = N'all',
          @update_mode = N'read only',
          @subscriber_type = 0

        exec sp_addpushsubscription_agent
          @publication = N'EDI to XLOCX (Merge)',
          @subscriber = N'RXLOCXS-SQLA',
          @subscriber_db = N'EDI',
          @job_login = N'ROUSES.COM\RXLOCXSQLREPL',
          @job_password = N'XPASSWORDX',
          @subscriber_security_mode = 1,
          @frequency_type = 4,
          @frequency_interval = 1,
          @frequency_relative_interval = 1,
          @frequency_recurrence_factor = 1,
          @frequency_subday = 8,
          @frequency_subday_interval = 1,
          @active_start_time_of_day = 3300,
          @active_end_time_of_day = 235959,
          @active_start_date = 20070923,
          @active_end_date = 99991231,
          @enabled_for_syncmgr = N'False',
          @dts_package_location = N'Distributor'
        GO

    So the million dollar question is: why do I get the error "exec sp_register_custom_scripting 'CUSTOM_SCRIPT', your_script" when I add a column to a table in the EDI to XLOCX publication? AHIA, LarryR...
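
    Going only by the shape of the error message, the call it asks for looks like this sketch; the script value and names here are placeholders, not values I know to be right for this topology:

        -- Sketch: placeholder values; the positional form follows the error text
        -- (type, value, publication, article).
        exec sp_register_custom_scripting
          @type = 'custom_script',
          @value = N'C:\repl\refresh_custom_procs.sql',  -- placeholder script path
          @publication = N'EDI from xx',
          @article = N'table_name_here'

    Presumably this needs to be run where the re-publication to Server C is defined, so the DDL change can regenerate the custom procedures there.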

    Read the article

  • Data validation between SQL replicated tables

    - by Vikram
    Hi all, I am trying to figure out a way to validate the data in my publisher and subscriber in SQL 2005 replication. I thought of using sp_publication_validation, but it needs db_owner permission, and we are not allowed to have that in our company. So I did a bit more reading and found two other SPs that I think will work for me. The first one is sp_article_validation, which I plan to run on the publisher. For each article that I call this SP on, it's going to give the row count and a checksum. With that info, I intend to call sp_table_validation on the subscriber, passing the row count and checksum from the previous SP, thereby validating both tables. What do you guys think? Is this a proven way to validate data in replication? There is very little documentation on these SPs. Here are the links:

    sp_table_validation - http://msdn.microsoft.com/en-us/library/aa239370(v=sql.80).aspx
    sp_article_validation - http://msdn.microsoft.com/en-us/library/ms177511(v=SQL.90).aspx

    Thanks
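
    A hedged sketch of that plan; the parameter names are from the linked MSDN pages as best I recall them, so verify against the docs before relying on this:

        -- On the publisher: request rowcount + checksum validation for one article.
        exec sp_article_validation
          @publication = N'MyPublication',   -- placeholder names
          @article = N'MyArticle',
          @rowcount_only = 2,                -- 2 = rowcount and checksum
          @full_or_fast = 2                  -- 2 = conditional fast validation

        -- On the subscriber: compare the table against the captured values.
        declare @rc int, @cs numeric
        -- (@rc and @cs carried over from the publisher's output)
        exec sp_table_validation
          @table = N'MyArticle',
          @expected_rowcount = @rc,
          @expected_checksum = @cs

    One caveat worth checking: sp_article_validation normally logs its result for the distribution agent to pick up, so capturing the rowcount/checksum to feed sp_table_validation by hand may take some scripting.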

    Read the article

  • Writing an efficient cron job script utilizing Zend_Mail_Storage_Imap.

    - by fireeyedboy
    I'm new to the IMAP protocol and Zend_Mail_Storage, and I'm writing a small PHP script for a cron job that should regularly poll an IMAP account, check for new messages, and send an e-mail if new messages have arrived. As you can imagine, I want to poll the IMAP account only for relevant messages, and I only want to send a new e-mail if new messages have arrived since the last polled message. So I thought of keeping track of the last message I polled with some unique identifier for a message. But I'm a bit uncertain about whether the methods I want to utilize for this actually do what I expect them to do. So my questions are:

    Does the iterator position of Zend_Mail_Storage_Imap actually correspond to some IMAP unique identifier for messages, or is it simply an internal position of Zend_Mail_Storage_Abstract? For instance, if I tell it to seek() to message 5 (which I stored from an earlier session), will it indeed seek to the appropriate message on the IMAP server, even if, for instance, messages have been deleted since the last session?

    Would keeping track of this latest polled message id in a file suffice for a cron job that, say, polls the account every 5 or 10 minutes? Or is this too naive, and should I be using a database, for instance? Or is there maybe a much easier way to keep track of such state with Zend_Mail_Storage_Abstract?

    Also, do I need to poll every IMAP folder, or is everything accumulated when I poll INBOX?

    If you could shed some light on any of these matters, I'd appreciate it. Thanks in advance.

    Read the article

  • Kick off a MapReduce job from my Java/MySQL webapp

    - by Brian
    Hi guys, I need a bit of architecture advice. I have a Java-based webapp, with a JPA-based ORM backed onto a MySQL relational database. Now, as part of the application I have a batch job that compares thousands of database records with each other. This job has become too time-consuming and needs to be parallelized. I'm looking at using MapReduce and Hadoop in order to do this. However, I'm not too sure about how to integrate this into my current architecture. I think the easiest initial solution is to find a way to push data from MySQL into Hadoop jobs. I have done some initial research on this and found the following relevant information and possibilities:

    1) https://issues.apache.org/jira/browse/HADOOP-2536 - this gives an interesting overview of some built-in JDBC support

    2) This article, http://architects.dzone.com/articles/tools-moving-sql-database, describes some third-party tools to move data from MySQL to Hadoop.

    To be honest, I'm just starting out with learning about HBase and Hadoop, but I really don't know how to integrate this into my webapp. Any advice is greatly appreciated. Cheers, Brian

    Read the article
