Search Results

Search found 1114 results on 45 pages for 'flush'.

Page 2 of 45

  • Bind is updating DNS with wrong resolver info after rndc flush expires

    - by RussH
    I'm running BIND 9 on a small office server (CentOS 5) with a local mail server. There's a domain we need to email: the local BIND returns wrong DNS info for that domain's MX servers. An 'rndc flush' corrects it, but then, after some time, it reverts to the wrong records. I would have thought that 'rndc flush' just clears the local cache so all the authoritative info is pulled down fresh - so why is it being overwritten (by what seems like the next update)? Where do I need to look? Presumably there's some named logging to determine where it's getting the [new or cached] answers from?
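
    A hedged first step (assuming a stock BIND 9 setup with rndc configured): turn on query logging and dump the cache, then check where the bad MX records come from and what TTL they carry. example.com below stands in for the problem domain:

        rndc querylog           # toggles query logging (check named's log to confirm it is on)
        rndc dumpdb -cache      # writes the cache to named_dump.db in named's working directory
        grep -A 2 'example.com' named_dump.db   # inspect the cached MX records and their TTLs

    If the stale records reappear with long TTLs, the usual suspects are a 'forwarders' clause in named.conf pointing at a stale upstream resolver, or a stale secondary copy of the zone.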

  • Cannot flush the DNS on Snow Leopard 10.6.8

    - by Andy Woggle
    I have an absolutely bizarre problem and it has stumped me: I cannot seem to flush the DNS on Snow Leopard 10.6.8. I have tried sudo dscacheutil -flushcache and plain dscacheutil -flushcache, and neither worked. I'm building a website and changed the directories yesterday. When I check it on my other machine, it works fine; when I check from a remote location, it works fine. But this machine seems stuck on the old records, as the CSS is not showing. Is there a hard-flush method (if that makes sense)? The two commands above did not work.
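
    A hedged way to tell DNS caching apart from browser caching (the symptom, stale CSS, is often the latter); yoursite.example.com is a placeholder for the real host:

        dig yoursite.example.com                         # asks DNS directly, bypassing local caches
        curl -I http://yoursite.example.com/style.css    # fetches headers fresh, bypassing the browser cache
        sudo killall -HUP mDNSResponder                  # heavier-handed: make the resolver daemon drop its state

    If dig and curl both show the new content, the problem is the browser cache, not DNS; dscacheutil -flushcache is the documented command on 10.6, and the HUP is offered only as a last resort.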

  • Flush all messages in mailbox from Zimbra to another server

    - by Giovanni Lovato
    I have a primary Dovecot + Postfix mail server and a secondary Zimbra 8.0.1 server. The primary server went down for a week and all incoming messages were delivered to the secondary server, which has a "catch-all" account configured. Now that the primary server is back online, I'd like to flush all messages in the "catch-all" mailbox over to the primary server for proper delivery to each corresponding user mailbox (and its own rules). Is that possible?
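
    Zimbra has no built-in "re-deliver elsewhere" action, so a common approach is to copy the mailbox across over IMAP. A sketch with imapsync (hostnames, accounts and passwords below are placeholders):

        imapsync --host1 zimbra.example.com  --user1 catchall@example.com   --password1 'secret1' \
                 --host2 primary.example.com --user2 postmaster@example.com --password2 'secret2'

    This lands everything in one mailbox on the primary; to have each message re-run through Postfix's normal delivery (aliases, per-user rules), you would additionally need to re-inject the raw messages there, e.g. by piping each one to sendmail, which is scriptable but not shown here.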

  • Limit Linux background flush (dirty pages)

    - by korkman
    Background flushing in Linux happens when either too much written data is pending (adjustable via /proc/sys/vm/dirty_background_ratio) or a timeout for pending writes is reached (/proc/sys/vm/dirty_expire_centisecs). Unless another limit is being hit (/proc/sys/vm/dirty_ratio), more written data may be cached; beyond that, further writes block. In theory, this should create a background process that writes out dirty pages without disturbing other processes. In practice, it disturbs any process doing uncached reading or synchronous writing. Badly. This is because the background flush actually writes at 100% device speed, and any other device request at that time is delayed (because all queues and write caches along the way are filled). Is there any way to limit the number of requests per second the flushing process performs, or otherwise effectively prioritize other device I/O?
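
    There is no per-request rate limit for the flusher, but shrinking the dirty window keeps each flush burst small, so it saturates the device for much shorter periods. A hedged starting point (the values are illustrative, not a recommendation):

        sysctl -w vm.dirty_background_bytes=67108864   # start background writeback once 64 MB is dirty
        sysctl -w vm.dirty_bytes=268435456             # block writers once 256 MB is dirty

    The *_bytes knobs (kernel 2.6.29 and later) override their *_ratio counterparts and give much finer control on large-RAM machines, where even dirty_background_ratio=1 can mean hundreds of megabytes.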

  • How to flush the input stream in Python?

    - by jinxed_coder
    I'm writing a simple alarm utility in Python:

        #!/usr/bin/python
        import time
        import subprocess
        import sys

        alarm1 = int(raw_input("How many minutes (alarm1)? "))
        while (1):
            time.sleep(60*alarm1)
            print "Alarm1"
            sys.stdout.flush()
            doit = raw_input("Continue (Y/N)?[Y]: ")
            print "Input", doit
            if doit == 'N' or doit == 'n':
                print "Exiting....."
                break

    I want to flush or discard all the keystrokes that were entered while the script was sleeping, and only accept keystrokes typed after raw_input() starts executing.
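
    One possible fix (a sketch, assuming stdin is a POSIX terminal): drain the terminal's input queue with termios right before prompting, so only keys pressed after the prompt appears are read:

        import sys, termios
        termios.tcflush(sys.stdin, termios.TCIFLUSH)   # discard keystrokes typed during the sleep
        doit = raw_input("Continue (Y/N)?[Y]: ")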

  • MySQL: "FLUSH TABLES WITH READ LOCK" started automatically

    - by ming yeow
    I would like to understand how this happened. I was running a query that would take a long time but should not lock up any table. However, my databases were practically down - they seem to have been locked up by "FLUSH TABLES WITH READ LOCK":

        03:21:31  select type_id, count(*) from guid_target_infos group by type_id
        02:38:11  select type_id, count(*) from guid_infos group by type_id
        02:24:29  FLUSH TABLES WITH READ LOCK

    But I did not start this command. Can someone tell me why it was started automatically?
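
    MySQL never issues FLUSH TABLES WITH READ LOCK by itself; it is almost always a backup job (mysqldump with --lock-all-tables or --master-data, an LVM-snapshot script, or similar) connecting as another user. A first diagnostic step is to catch the connection holding it:

        -- the User, Host and Time columns reveal who issued the lock and when
        SHOW FULL PROCESSLIST;

    Then check root's crontab and any backup tooling scheduled around the 02:24 mark.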

  • How to make gpg2 flush the stream?

    - by Vi
    I want slowly flowing data saved in encrypted form on a device that can be turned off abruptly, but gpg2 does not seem to flush its output frequently, and I get broken files when I try to read such truncated output.

        vi@vi-notebook:~$ cat
        asdkfgmafl
        asdkfgmafl
        ggggg
        ggggg
        2342
        2342

    cat behaves normally: I see the output right after input.

        vi@vi-notebook:~$ gpg2 -er _Vi --batch
        ?pE??x...(more binary data here)....???-??....
        asdfsadf
        22223
        sdfsdfasf
        Still no data... Still no output...
        ^C
        gpg: signal Interrupt caught ... exiting

        vi@vi-notebook:~$ gpg2 -er _Vi --batch > /tmp/qqq
        skdmfasldf
        gkvmdfwwerwer
        zfzdfdsfl
        ^\
        gpg: signal Quit caught ... exiting Quit

        vi@vi-notebook:~$ gpg2 < /tmp/qqq
        You need a passphrase to unlock the secret key for
        user: "Vitaly Shukela (_Vi) "
        2048-bit ELG key, ID 78F446CA, created 2008-01-06 (main key ID 1735A052)
        gpg: [don't know]: 1st length byte missing
        vi@vi-notebook:~$ # Where is my "skdmfasldf"

    How can I make gpg2 handle such a case? I want it to emit enough output to reconstruct each incoming chunk of input. (Fsyncing after each output would also be beneficial as an additional option.) Or should I use another tool? (I need public-key encryption.)
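
    gpg2 buffers internally because it assembles complete OpenPGP packets, so there is no flush switch to pass. One workaround (a sketch) is to encrypt the stream in small self-contained chunks, so an abrupt power-off loses at most the last chunk:

        # each line becomes a complete OpenPGP message appended to the output file
        while IFS= read -r line; do
            printf '%s\n' "$line" | gpg2 -er _Vi --batch >> /var/log/enc.log
        done

    gpg2 should then decrypt the concatenated messages in sequence when fed the whole file; if a given version refuses, split the file on message boundaries first. The per-chunk overhead is significant, so batch a few lines per chunk if volume matters.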

  • Mysqld increases CPU load, which drops after flush-tables

    - by mirage
    Please advise on this issue. Normal CPU load is 20-30% us + sy. After restoring the database files from the slave server (same version), a periodic problem appeared: mysql starts loading the CPU at 100% (us + sy grow proportionally), the queue grows, and everything slows down. But after mysqladmin flush-tables things return to normal for a few hours. Dedicated Linux server running MySQL: 2 x E5506, 24 GB RAM, database size 50 GB, 4000-5500 rps.

        [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics ---------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 33G (Tables: 1474)
        [--] Data in InnoDB tables: 1G (Tables: 4)
        [--] Data in MEMORY tables: 120K (Tables: 3)
        [--] Reads / Writes: 91% / 9%
        [--] Total buffers: 12.8M per thread and 7.1G global
        [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    my.cnf:

        key_buffer = 1536M
        max_allowed_packet = 2M
        table_cache = 4096
        sort_buffer_size = 409584
        read_buffer_size = 128K
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 500
        query_cache_size = 100M
        thread_concurrency = 24
        max_connections = 700
        tmp_table_size = 4096M
        join_buffer_size = 4M
        max_heap_table_size = 4096M
        query_cache_limit = 1M
        low_priority_updates = 1
        concurrent_insert = 2
        wait_timeout = 30
        server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_buffer_pool_size = 1536M
        innodb_log_buffer_size = 4M
        innodb_flush_log_at_trx_commit = 2

    How do I solve this problem?
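
    A hedged place to start: the "fine for a few hours after flush-tables" pattern fits a key cache or table cache that degrades over time, and 33G of MyISAM data sits behind only a 1536M key_buffer. Watch the relevant counters move in real time under load:

        # relative (-r) samples every 10 s; look for runaway growth
        mysqladmin -r -i 10 extended-status | \
            grep -E 'Key_reads|Key_read_requests|Created_tmp_disk_tables|Opened_tables'

    Also note that tmp_table_size and max_heap_table_size of 4096M let a handful of concurrent queries materialize multi-gigabyte in-memory temporary tables, which can push the box into swap and produce exactly this slowdown; watch vmstat while the problem is happening.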

  • Hibernate Session flush behaviour [and Spring @Transactional]

    - by EugeneP
    I use Spring and Hibernate in a web app; a SessionFactory is injected into a DAO bean, and this DAO is then used in a servlet through webservicecontext. The DAO methods are transactional; inside one of them I call getCurrentSession().save(myObject). A servlet calls this method with an object passed in. The update does not seem to be flushed at once: it takes about 5 seconds for the changes to appear in the database, while the servlet method that calls the DAO completes in a fraction of a second. So after the @Transactional DAO method completes, flushing may NOT have happened? It does not seem to follow any rule that I can see. The question, then: what can I do to force the session to flush after every DAO method? It may not be good practice, but speaking of a service layer, some methods must end with an immediate flush, and the Hibernate Session's behavior is not predictable. What guarantees that my @Transactional method persists all changes by the last line of the method body? Is getCurrentSession().flush() the only solution? P.S. I read somewhere that @Transactional is associated with a DB transaction: when the method returns, the transaction must be committed. I do not see this happening.
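
    If @Transactional is really in effect, the commit on method exit flushes the session, so a multi-second delay usually means the transactional proxy is not being applied (for example, the servlet holds a reference to the raw bean instead of the Spring proxy). A minimal sketch of forcing the write-out explicitly, with names mirroring the question (myObject is a placeholder):

        // inside the DAO, still within the @Transactional method
        Session session = sessionFactory.getCurrentSession();
        session.save(myObject);
        session.flush();   // the INSERT is sent to the database here, not at some later point

    Keep in mind that flush() only sends the SQL; other connections see the row at commit. If the delay persists, enable DEBUG logging on org.springframework.transaction to verify a transaction is actually started and committed around the method.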

  • RequestFactoryEditorDriver getting edited data after flush

    - by Deanna
    Let me start with: I have a solution, but I don't think it is elegant, so I'm looking for a cleaner way to do this. I have an EntityProxy displayed in a view panel. The view panel is a RequestFactoryEditorDriver used in display mode only. The user clicks on a data element and opens a popup editor to edit a data element of the EntityProxy, with a few more bits of data than are displayed in the view panel. When the user saves the element, I need the view panel to update its display. I ran into a problem because the RequestFactoryEditorDriver of the popup editor flow doesn't let you get at the edited data. The driver uses the passed-in context and sends it to the server. The context returned out of flush() only allows a Receiver, even if you cast it to the type of context you stored in the editor driver in the edit() call. It doesn't appear to fire an EntityProxyChange event either, so I couldn't listen for that and update the display view. The solution I found was to change my domain object's persist to return the newly saved entity, then create the popup editor like this:

        editor.getSaveButtonClickHandler().addClickHandler(createSaveHandler(driver, editor));
        // initialize the driver and edit the given entity
        driver.initialize(rf, editor);
        PlayerProfileCtx ctx = rf.playerProfile();
        ctx.persist().using(playerProfile).with(driver.getPaths())
            .to(new Receiver<PlayerProfileProxy>() {
                @Override
                public void onSuccess(PlayerProfileProxy profile) {
                    editor.hide();
                    playerProfile = profile;
                    viewDriver.display(playerProfile);
                }
            });
        driver.edit(playerProfile, ctx);
        editor.centerAndShow();

    Then in the save handler I just fire the context I get from flush(). While this approach works, it doesn't seem right. It would seem I should subscribe to the entity-change event in the display view and update the entity and the view from there. Also, this approach saves the complete entity, not just the changed bits, which increases bandwidth usage. What I would expect to happen: when you flush the entity, it should optimistically update the RF-managed version of the entity and fire the entity-proxy-change event, only reverting the entity if something went wrong in the save. The actual save should send only the changed bits; that way there would be no need to re-fetch the whole entity and send the complete data over the wire twice. Is there a better solution?
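
    One cleaner-looking option (a hedged sketch; whether the event fires for your persist depends on the RequestFactory version in use) is to have the display view subscribe to EntityProxyChange events on the event bus, instead of wiring the popup editor back to the view:

        EntityProxyChange.registerForProxyType(eventBus, PlayerProfileProxy.class,
            new EntityProxyChange.Handler<PlayerProfileProxy>() {
                @Override
                public void onProxyChange(EntityProxyChange<PlayerProfileProxy> event) {
                    // re-fetch or re-display the updated profile here
                }
            });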

  • Set Hibernate session's flush mode in Spring

    - by glaz666
    I am writing integration tests, and in one test method I'd like to write some data to the DB and then read it back:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = {"classpath:applicationContext.xml"})
        @TransactionConfiguration()
        @Transactional
        public class SimpleIntegrationTest {

            @Resource
            private DummyDAO dummyDAO;

            /**
             * Tries to store {@link com.example.server.entity.DummyEntity}.
             */
            @Test
            public void testPersistTestEntity() {
                int countBefore = dummyDAO.findAll().size();
                DummyEntity dummyEntity = new DummyEntity();
                dummyDAO.makePersistent(dummyEntity);
                // HERE SHOULD COME SESSION.FLUSH()
                int countAfter = dummyDAO.findAll().size();
                assertEquals(countBefore + 1, countAfter);
            }
        }

    As you can see, between storing and reading the data the session should be flushed, because the default FlushMode is AUTO and thus no data has actually been stored in the DB yet. Question: can I somehow set the flush mode to ALWAYS in the session factory (or somewhere else) to avoid repeating the session.flush() call? All DB calls in the DAO go through a HibernateTemplate instance. Thanks in advance.
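
    Since all DAO calls go through HibernateTemplate, one hedged option is to switch the shared template to eager flushing, so every template operation flushes afterwards:

        // Java side; the XML equivalent is <property name="flushModeName" value="FLUSH_EAGER"/>
        hibernateTemplate.setFlushMode(HibernateTemplate.FLUSH_EAGER);

    Eager flushing only affects operations made through that template and is intended for scenarios like tests; leaving it on in production trades away the write batching that AUTO gives you.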

  • Can Django flush its database(s) between every unit test

    - by mikem
    Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is that the auto_increment counters are reset. Consider a test which pulls data out of the database by primary key:

        class ChangeLogTest(django.test.TestCase):
            def test_one(self):
                do_something_which_creates_two_log_entries()
                log = LogEntry.objects.get(id=1)
                assert_log_entry_correct(log)
                log = LogEntry.objects.get(id=2)
                assert_log_entry_correct(log)

    This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2; they might be 2 and 3. Now test_one fails. This is actually a two-part question: Is it possible to force ./manage.py test to flush the database between each test case? Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
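
    Whatever the flushing answer, the test becomes immune to auto_increment state if it never hardcodes primary keys; a hedged rewrite of the same test (newer Django releases later added TransactionTestCase.reset_sequences for cases that genuinely need fresh counters):

        class ChangeLogTest(django.test.TestCase):
            def test_one(self):
                do_something_which_creates_two_log_entries()
                logs = LogEntry.objects.order_by('id')   # take whatever IDs were assigned
                self.assertEqual(logs.count(), 2)
                for log in logs:
                    assert_log_entry_correct(log)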

  • Clear / Flush cached memory

    - by TheDave
    I have a small VPS with 6 GB RAM hosting a couple of websites. Recently I noticed that my cached memory size is quite high - see below:

        Cpu(s):  0.1%us,  0.1%sy,  0.0%ni, 99.1%id,  0.0%wa,  0.2%hi,  0.4%si,  0.0%st
        Mem:   6113256k total,  5949620k used,   163636k free,   398584k buffers
        Swap:  1048564k total,      104k used,  1048460k free,  3586468k cached

    While investigating whether there is some method to have this flushed or cleared, I stumbled upon this command:

        sync; echo 3 > /proc/sys/vm/drop_caches

    I read it could be useful to add this as a cron job. Is this method recommended, or could it lead to problems? My only concern is that I run one Magento installation on Memcached - could this have any negative effects on it? I am certainly not a pro, so I would very much appreciate some expert advice. PS: My VPS runs CentOS 5 x64 and I have WHM + NGINX installed.
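
    For context: "cached" in that output is the kernel page cache, which is handed back automatically the moment applications need RAM, so dropping it from cron generally just costs you disk performance. Memcached's memory is ordinary process memory and is unaffected by drop_caches either way. To see how much memory is genuinely available:

        free -m   # the "-/+ buffers/cache" row shows memory actually free for applications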

  • How to rename database without first stopping SQL instance to flush connections

    - by John Galt
    Is there a way to force a database into single-user mode so a script can be run to rename databases? I find I have to restart the instance of SQL (to force off any connections from a web app, etc.) and then I can run this script:

        USE master
        go
        sp_dboption MDS, "single user", true
        go
        sp_dboption StagingMDS, "single user", true
        go
        sp_renamedb MDS, LastMonthMDS
        go
        sp_renamedb StagingMDS, MDS
        go
        sp_dboption LastMonthMDS, "single user", false
        go
        sp_dboption MDS, "single user", false
        go

    After this script runs, I can restart IIS for my web app and it can connect to the new production database. All of the above works well and we've been doing it for years, but now we've upgraded to SQL 2008 and the SQL 2008 instance also hosts other databases that support other web apps. So, rather than restarting the whole SQL instance to get single-user mode on two databases, is there a less intrusive way of accomplishing this? Thanks.
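
    On SQL Server 2005 and later, the usual per-database answer (a sketch; run it from a maintenance connection) is ALTER DATABASE ... WITH ROLLBACK IMMEDIATE, which drops existing connections to just those databases and leaves the rest of the instance alone:

        ALTER DATABASE MDS        SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE StagingMDS SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE MDS        MODIFY NAME = LastMonthMDS;  -- replaces the deprecated sp_renamedb
        ALTER DATABASE StagingMDS MODIFY NAME = MDS;
        ALTER DATABASE LastMonthMDS SET MULTI_USER;
        ALTER DATABASE MDS          SET MULTI_USER;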

  • What does "Flush the Firewall" mean?

    - by Qasim
    I know this is a real newbie question, but what does it mean when someone says they "flushed the firewall"? I got locked out of my server a few times due to the enhanced security configuration I had done, and when I contacted my server management company, they said both times that they flushed the firewall and I was allowed back in. I hope "flushing the firewall" doesn't mean they reduced the security settings.
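
    On a typical Linux server, "flushing the firewall" means deleting the loaded packet-filter rules; most likely they ran something equivalent to:

        iptables -F   # delete every rule from every chain; chain policies stay as they are

    So until your rule set is re-applied, the custom restrictions are gone, including whichever rule locked you out; with default-ACCEPT policies the machine is effectively unfirewalled in the meantime. It does not permanently reduce your security settings, but the rules do need to be loaded again.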

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag, which is asserted to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;

            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);

            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }

            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK: enable it whenever necessary and then disable it to flush the buffer. But on the other hand, that requires an additional system call, so using MSG_MORE seems more appropriate. I'd simply change the above send() line to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwn.net, packets are flushed automatically if they are large enough: "If an application sets that option on a socket, the kernel will not send out short packets. Instead, it will wait until enough data has shown up to fill a maximum-size packet, then send it. When TCP_CORK is turned off, any remaining data will go out on the wire." But this section only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities: call send() with an empty buffer and without MSG_MORE set, or re-apply the TCP_CORK option as described on this page. Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected; obviously running the server through `strace` is not an option. So the simplest way would be to use `netcat` and then look at its `strace` output? Or will the kernel handle traffic differently when it is transmitted over a loopback interface?
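
    Your second guess matches how the kernel implements it: MSG_MORE and TCP_CORK share the same hold-back mechanism, so corking and immediately uncorking the socket pushes out whatever MSG_MORE retained. A hedged sketch (called with this->fd in your wrapper):

        #include <netinet/in.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>

        /* flush data held back by MSG_MORE by toggling TCP_CORK */
        static void tcp_flush(int fd) {
            int on = 1, off = 0;
            setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on,  sizeof(on));
            setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off)); /* uncork: send now */
        }

    As for verification, loopback traffic goes through the same TCP stack, so watching recv sizes in `strace` on a local netcat is a reasonable smoke test, though receiver-side coalescing can still merge segments; tcpdump on the interface shows the actual packet boundaries.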

  • Hibernate is persisting entity during flush when the entity has not changed

    - by Preston
    I'm having a problem where the entity manager is persisting an entity that I don't think has changed during the flush. I know the following code is the problem, because if I comment it out the entity isn't persisted. All I'm doing here is loading the entity and calling some getters:

        Query qry = em.createNamedQuery("Clients.findByClientID");
        qry.setParameter("clientID", clientID);
        Clients client = (Clients) qry.getSingleResult();
        results.setFname(client.getFirstName());
        results.setLname(client.getLastName());
        ...
        return results;

    Later, in a different method, I run another named query, which causes the entity manager to flush. For some reason the client loaded above is persisted. The reason this is a problem is that in the middle of all this, some old code makes changes to the client via straight JDBC; when the entity manager flushes, those JDBC changes are lost. Our current theory is that the entity manager compares the entity to the underlying record, sees that it's different, and persists it. Can someone explain or confirm the behavior we're seeing?
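
    Your theory is close: Hibernate's dirty check compares the snapshot taken at load time against the entity's current property values, never the underlying row, so a spurious UPDATE at flush usually means a getter makes an unchanged entity look dirty (classic culprits: returning a fresh Date or byte[] copy, or a type that fails equals against its snapshot). A hedged way to confirm, by excluding the entity from dirty checking (JPA 2.0's unwrap; on older stacks use (Session) em.getDelegate()):

        // if the spurious UPDATE disappears, dirty checking on this entity was the cause
        org.hibernate.Session session = em.unwrap(org.hibernate.Session.class);
        session.setReadOnly(client, true);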

  • Is there a way to flush HTML to the wire in Sinatra

    - by thismatt
    I have a Sinatra app with a long-running process (a web scraper). I'd like the app to flush the crawler's progress to the browser as the crawler runs, instead of at the end. I've considered forking the request and doing something fancy with Ajax, but this is a really basic one-pager app that just needs to stream a log to the browser as it happens. Any suggestions?
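
    If upgrading is an option, Sinatra 1.3+ ships a streaming helper that flushes each chunk to the client as it is written; a hedged sketch (scrape_pages is a hypothetical hook into your crawler):

        get '/scrape' do
          stream do |out|
            out << "starting crawl...\n"
            scrape_pages do |page|
              out << "fetched: #{page}\n"   # pushed to the browser as it happens
            end
          end
        end

    On older Sinatra, the common fallback is returning an object whose each method yields chunks, since Rack calls each and writes every yielded string to the wire.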

  • What does it mean to flush a socket?

    - by User4748402
    I don't really know much about sockets, except how to read and write to them as if they were files, and I know a little about socket selectors. I don't get why you have to flush a socket - what's actually happening there? Do the bits just hang out somewhere in memory until they get pushed off? I've read some things about sockets online, but it's all very abstract and high-level. What's actually happening here?
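
    What gets flushed is not the socket itself but the userspace buffer of the file-like wrapper around it: flush() hands the buffered bytes to the kernel via write()/send(), and the kernel then transmits them from its own send buffer on its own schedule. A small sketch with Python's wrapper (example.com is a stand-in):

        import socket

        s = socket.create_connection(("example.com", 80))
        f = s.makefile("wb")    # buffered, file-like view of the socket
        f.write(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")  # sits in the file object's buffer
        f.flush()               # handed to the kernel now; without this, it may linger until close()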

  • Flush output in Bourne Shell

    - by n-alexander
    I use echo in Upstart scripts to log things:

        script
            echo "main: some data" >> log
        end script

        post-start script
            echo "post-start: another data" >> log
        end script

    Now these two run in parallel, so in the logs I often see:

        main: post-start: some data another data

    This is not critical, so I won't employ proper synchronization, but I thought I'd turn auto-flush ON to at least reduce the effect. Is there an easy way to do that? Update: yes, flushing will not properly fix it, but I've seen it help such situations to some degree, and that is all I need in this case. It's just that I don't know how to do it in shell.
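
    Each echo to a >>-opened file is already a single unbuffered write() system call, so there is no flush setting to turn on in the shell itself. One hedged workaround is to hand the lines to syslog, which serializes concurrent writers (myjob is a placeholder tag):

        logger -t myjob "main: some data"
        logger -t myjob "post-start: another data"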

  • How to flush the console buffer?

    - by DoronS
    Hi all, I have some code that runs repeatedly:

        printf("do you want to continue? Y/N: \n");
        keepplaying = getchar();

    The next time this code runs, it doesn't wait for input. I found out that the second time around, getchar() consumes a '\n' character. I'm guessing this is due to some buffer stdio has, which keeps the last input ("Y\n" or "N\n"). My question is: how do I flush the buffer before using getchar(), so that getchar() waits for my answer?
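
    The leftover '\n' sits in stdin's buffer, and fflush(stdin) is undefined behavior, so avoid that. The portable fix is to read and discard the rest of the line after each answer:

        int c;
        while ((c = getchar()) != '\n' && c != EOF)
            ;   /* discard everything up to and including the newline */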

  • Memcache won't flush or clear memory

    - by pedalpete
    I've been trying to clear my memcache, as I'm noticing its storage taking up almost 30% of server memory when using ps -aux. So I ran the following PHP code:

        $memcache = new Memcache;
        $memcache->connect("localhost", 11211);
        $memcache->flush();
        print_r($memcache->getStats());

    This results in the output:

        (
            [pid] => 4936
            [uptime] => 27318915
            [time] => 1255318611
            [version] => 1.2.2
            [pointer_size] => 64
            [rusage_user] => 9.659531
            [rusage_system] => 49.770433
            [curr_items] => 57864
            [total_items] => 128246
            [bytes] => 1931734247
            [curr_connections] => 1
            [total_connections] => 128488
            [connection_structures] => 17
            [cmd_get] => 170288
            [cmd_set] => 128246
            [get_hits] => 45464
            [get_misses] => 124824
            [evictions] => 1009
            [bytes_read] => 5607431213
            [bytes_written] => 1806543589
            [limit_maxbytes] => 2147483648
            [threads] => 1
        )

    This should be fairly basic, but clearly I'm missing something.
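
    This is expected behavior on two counts: flush() only marks every item as expired (note curr_items is still 57864; entries are evicted lazily when next touched), and memcached never returns slab memory to the OS, so its resident size in ps will not shrink. To actually give memory back, restart the daemon with a smaller cap (a sketch; flags and paths vary by distro):

        memcached -d -m 512 -l 127.0.0.1 -p 11211   # restart with a 512 MB memory ceiling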
