Search Results

Search found 6934 results on 278 pages for 'zero fill'.

Page 90 of 278

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data), and reduced risk of data loss should the master fail before replicating all events in its binary log (binlog). The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This benefits workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: Global Transaction Identifiers coupled with MySQL utilities for automatic failover/switchover and slave promotion, Crash Safe Slaves and Binlog, Optimized Row Based Replication, Replication Event Checksums, and Time Delayed Replication. These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.

    Environment: The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: Hardware: Oracle Sun Fire X4170 M2 Server; CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz; OS: 64-bit Oracle Enterprise Linux 6.1; Memory: 48 GB.

    Test Procedure - Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `k` int(10) unsigned NOT NULL DEFAULT '0',
          `c` char(120) NOT NULL DEFAULT '',
          `pad` char(60) NOT NULL DEFAULT '',
          PRIMARY KEY (`id`),
          KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
    Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest SET k=k+1 WHERE id = <random row>. In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers>. Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: The test-reset cycle was implemented as follows: the slave was stopped, the slave data directory was replaced with the previous backup, and the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for each worker count from 0 to 24, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration: The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT, relay-log-info-repository=TABLE, master-info-repository=TABLE. As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads: current (i.e. single-threaded) sequential mode; 1 x IO thread and 1 x SQL thread; the SQL thread both reads and executes the events. 1 worker thread: sequential mode; 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread; the coordinator reads the event and hands it to the worker, who executes it. 2+ worker threads: parallel execution; 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads; the coordinator reads events and hands them to the workers, who execute them.

    Results: Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves. The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results. As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution, and, as expected, having more workers than schemas adds no visible benefit.
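    Expressed as MySQL statements, one measurement cycle described above looks roughly like this (the binlog file name and position are placeholders, not values from the actual test):

        -- Configure the number of worker threads for this run
        SET GLOBAL slave_parallel_workers = 10;

        -- Start the IO thread first so the relay log fills up,
        -- then start the clock and the SQL thread
        START SLAVE IO_THREAD;
        START SLAVE SQL_THREAD;

        -- Block until the SQL thread reaches the noted master
        -- binlog position (placeholder file name and position)
        SELECT MASTER_POS_WAIT('mysql-bin.000002', 107);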
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE and master-info-repository=TABLE. For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion: As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix - Detailed Results
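    Collecting the options named in this post, a slave tuned the same way would carry a my.cnf along these lines; this is a sketch assembled from the settings above, not the exact file used in the benchmark:

        [mysqld]
        # Apply replicated events in parallel, one worker per schema
        # (10 schemas in this benchmark)
        slave_parallel_workers = 10

        # Store replication metadata in tables rather than files;
        # up to 2.3x faster for 5+ workers in this benchmark
        relay-log-info-repository = TABLE
        master-info-repository = TABLE

        binlog-format = STATEMENT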

    Read the article

  • Workflow with Flash Pro CS6 and FlashDevelop: Using fla and swc to store assets

    - by Arthur Wulf White
    I am using this tutorial: http://www.flashdevelop.org/wikidocs/index.php?title=AS3:FlexAndFlashCS3Workflow In past, older versions of Flash Pro, I was able to complete these steps: right-click on the symbol in the Library panel, select the "Linkage..." dialog, check "Export for ActionScript" and fill in the symbol name (i.e. MySymbol_design or assets.MySymbol_design); do not change the base class (i.e. flash.display.MovieClip). Right now, I am stuck at that part. Any hints? What I wish to do is: Use the fla for the artist to store assets. Publish to a swc. Extract the assets in FlashDevelop by creating an instance of their class. ... How is this done in CS6? To clear things up, this is what I see when I right-click a Flash symbol:

    Read the article

  • VPN no longer works, saves old password?

    - by nathanvda
    I am not sure if this question is related to 11.10 or GNOME 3.2, but the VPN configuration screen has changed, and now the username and password are optional - but there is no way for me to override it. On our VPN we use a token, so I have to enter the password each time. But even if I clear the password and the username, there is no way for me to unset them, so I am unable to access the VPN: it never asks for the password anymore, and each time I return to the VPN configuration window, I see the same settings. Please help. Is there another way to configure the VPN? [Found Quick Fix] Recreate the VPN connection, fill everything in but the password, and this will work the first time.

    Read the article

  • How to safely collect bank account from website?

    - by Alexandru Trandafir Catalin
    I want to collect bank account information from my customers on my website. I'd like to do that through a form; then I will download it to a PC, print it, and then delete it from the website. Or alternatively, send it somewhere external right after the user submits the form, so it never gets stored on the website. The goal is to receive the payment information without having to ask the customer to print, fill manually, and send it over fax. And accomplish this without having to use an external payment gateway. Thank you, Alex.

    Read the article

  • Advice needed: Software Development [closed]

    - by Hunter McMillen
    I recently graduated from college with a B.S. in Computer Science, and am now attending the same college to get an M.S. in Computer Science. I know lots of things about Computer Science and programming, but throughout all of my coursework I have never had to develop a single complete application; the projects were always relatively small (~300-500 lines of code). Basically, I am about to have these two degrees and I feel like I don't know anything about software development or design, which doesn't make a whole lot of sense. I am looking for ways to fill in the gaps in my knowledge, and I would love people's advice on these questions: 1) How do you design good software? Where do you start? 2) What makes a good software developer? Sorry for the convoluted question, but in my mind it is a convoluted situation. Thanks. Edit: Thanks everyone for your advice.

    Read the article

  • Need ideas for an algorithm to draw irregular blotchy shapes

    - by Yttermayn
    I'm looking to draw irregular shapes on an x,y grid, and I'd like to come up with a simple, fast method if possible. My only idea so far is to draw a bunch of circles of random sizes very near each other, but at a random distance apart from a more or less central coordinate, then fill in any blank spaces. I realize this is a clunky, inelegant method; hopefully it will give you a rough idea of the kinds of rounded, random blotchy shapes I'm shooting for. Please suggest methods to accomplish this; I'm not so much interested in code. I can noodle that part out myself. Thanks!

    Read the article

  • Get Real Multitasking on Android With These 8 Floating Apps

    - by Chris Hoffman
    Android has decent multitasking, but the missing piece of the puzzle is the ability to have multiple apps on-screen at the same time - particularly useful on a larger tablet. Floating apps fill this need. Floating apps function as always-on-top windows, allowing you to watch videos, browse the web, take notes, or do other things while using another app. They demonstrate how Android's interface is more flexible than iOS and the Modern UI in Windows.

    Read the article

  • How to "cast" from generic List<> to ArrayList

    - by Michael Freidgeim
    We are writing new code using the generic List<>, e.g. List<MyClass>. However, we have legacy functions that expect an ArrayList as a parameter. This is the second time the question of how to "cast" a generic List<MyClass> to an ArrayList has come up for me and my colleague. The answer is simple: just use the ArrayList constructor that takes an ICollection parameter. Note that it is not a real cast; it copies the references into the ArrayList.

        var list = new List<MyClass>();
        // Fill list items ...
        ArrayList al = new ArrayList(list); // the "cast"
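    A self-contained sketch of the round trip, for reference; MyClass and the LegacyPrint method are made up here for illustration:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Linq;

        class MyClass
        {
            public string Name { get; set; }
        }

        class Program
        {
            // Stand-in for a legacy API that only accepts ArrayList (hypothetical)
            static void LegacyPrint(ArrayList items)
            {
                foreach (object item in items)
                    Console.WriteLine(((MyClass)item).Name);
            }

            static void Main()
            {
                var list = new List<MyClass>
                {
                    new MyClass { Name = "first" },
                    new MyClass { Name = "second" }
                };

                // Not a real cast: the constructor copies the references
                ArrayList al = new ArrayList(list);
                LegacyPrint(al);

                // Going back the other way is a LINQ Cast<T>() call
                List<MyClass> back = al.Cast<MyClass>().ToList();
                Console.WriteLine(back.Count); // 2
            }
        }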

    Read the article

  • Shifting from no-www to www and browsers' password storage

    - by user1444680
    I created a website having a user-registration system and invited my friend to join it. I gave him the link of the no-www version: http://mydomain.com but now, after reading this and this, I want to shift to www.mydomain.com. But there's a problem. I saw that my browser is storing separate passwords for mydomain.com and www.mydomain.com. So in my friend's browser his password must have been stored for no-www. That means after I shift to www, the next time he opens the login page his browser won't auto-fill the username and password fields, and there will also be an extra entry (for no-www) in his browser's database of stored passwords. Can this be avoided? Can I do something that will convey to browsers that www.mydomain.com and mydomain.com are the same website? I already have a CNAME record for www pointing to mydomain.com, but it seems that search engines treat a CNAME as an alias while browsers treat the two hosts as different websites; I don't know why.
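    Browsers key stored passwords on the hostname, so existing no-www entries can't simply be merged after the fact; what a site can do is permanently redirect every no-www request so that only one host is ever used from now on. A minimal sketch, assuming Apache with mod_rewrite enabled (most other servers have an equivalent rule):

        # .htaccess at the site root: send all no-www traffic to www
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]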

    Read the article

  • Hot-hot-hot for jobs in Java and mobile software

    - by hinkmond
    It's hot-hot-hot! The market for Java and mobile developers keeps growing hotter, and hotter--so says the latest Dice survey. See: Dice survey says Java & Mobile are tops Here's a quote: The market for mobile developers is expanding faster than the talent pool can adapt, a Dice survey indicates. Software developers in general—as well as Java, mobile software and Microsoft .Net developers in particular—are in short supply today. Those fields represent four of the top five most difficult positions IT managers are looking to fill... ... The New York/New Jersey metro area led the country with 8,871 positions listed... So, if you are looking to get into software development get crackin' in learning Java mobile programming and move to NY or NJ. Let's go Mets! Hinkmond

    Read the article

  • Moving one site in Webmaster Tools to more than one site

    - by Towhid
    I have a Question and Answer site about immigration. I have now divided it into 2 sites: mysite.co.uk, about immigration to the UK, and mysite.com, with subdomains for every country, like: australia.mysite.com, sweden.mysite.com, ... I have now moved all the content from my first site onto the .co.uk and .com sites and their subdomains to fill them. I know that Google will detect my 2 new sites as duplicates of the first one, and that is very bad for SEO, and I don't think Google Webmaster Tools has a tool for it. So please guide me on how to fix this problem.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee: When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups": Sadly, I've heard this line more than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore: Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
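    The commands this post leans on are all stock T-SQL; a minimal sketch of the routine described above (database names and paths are placeholders):

        -- On the production server: lightweight physical-only check
        DBCC CHECKDB (YourDatabase) WITH PHYSICALONLY;

        -- Speed up PHYSICALONLY further (SQL Server 2008 R2 SP1 CU4+)
        DBCC TRACEON (2562, -1);
        DBCC TRACEON (2549, -1);

        -- Frequent log backups keep the recovery point small
        BACKUP LOG YourDatabase
            TO DISK = N'X:\Backups\YourDatabase_log.trn';

        -- On the offload server: restore the latest full backup,
        -- then run the full consistency check there
        RESTORE DATABASE YourDatabase
            FROM DISK = N'X:\Backups\YourDatabase_full.bak'
            WITH REPLACE;
        DBCC CHECKDB (YourDatabase);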

    Read the article

  • Complete RESTful API debugging/testing tool

    - by vartec
    I'm looking for the most complete tool, preferably a portable GUI or browser plugin, to test a RESTful API. What I need is: GET/POST/DELETE/PUT support; multiple file uploads as fields (multipart/form-data); file uploads as the body. Extra points for: the possibility to save multiple configurations and use them to pre-fill parameters; OAuth support; nice JSON response formatting. Currently I'm using 3 tools: Chrome REST Console extension - my favorite, very nicely done. Has OAuth. However, the functionality it's missing for me is sending a file as the body of the request; it also cannot send multiple files. Firefox Poster add-on - quite nice, but it's missing support for files as POST field parameters; it also cannot send multiple files. cURL - can do anything, but it's quite tedious to use from the command line.
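    For reference, the upload shapes listed above map onto standard cURL flags; a sketch with placeholder URLs and field names:

        # Multiple files as multipart/form-data fields
        curl -X POST http://api.example.com/upload \
             -F "first=@photo1.jpg" -F "second=@photo2.jpg"

        # A single file as the raw request body
        curl -X PUT http://api.example.com/files/photo1.jpg \
             -H "Content-Type: image/jpeg" \
             --data-binary @photo1.jpg

        # DELETE is just another method flag
        curl -X DELETE http://api.example.com/files/photo1.jpg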

    Read the article

  • Small app structure review

    - by Lorenzo
    Hi, I would be grateful if someone could review the following: I have a main form app. On Load it displays the main menu, which is done as a user control, with DockStyle.Fill. If the user selects a choice in that menu control, it fires an event (with one parameter, Choice) which the main form reacts on. If the choice is to run the app, it closes the user control (Dispose) and calls the method starting the app. If the choice is to quit, it calls Application.Exit. Is that alright from a programmer's point of view?
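    A minimal sketch of the pattern being described, with made-up names (MenuControl, MenuChoice, StartApplicationLogic) since the actual code isn't shown:

        using System;
        using System.Windows.Forms;

        enum MenuChoice { RunApp, Quit }

        // The menu user control raises an event instead of acting itself
        class MenuControl : UserControl
        {
            public event Action<MenuChoice> ChoiceSelected;

            // Wire your buttons' Click handlers to call this
            protected void OnChoiceSelected(MenuChoice choice)
            {
                if (ChoiceSelected != null) ChoiceSelected(choice);
            }
        }

        class MainForm : Form
        {
            private readonly MenuControl _menu =
                new MenuControl { Dock = DockStyle.Fill };

            public MainForm()
            {
                _menu.ChoiceSelected += HandleChoice;
                Load += (s, e) => Controls.Add(_menu);
            }

            private void HandleChoice(MenuChoice choice)
            {
                if (choice == MenuChoice.RunApp)
                {
                    Controls.Remove(_menu);
                    _menu.Dispose();          // dispose the menu, as in the question
                    StartApplicationLogic();  // placeholder for the real work
                }
                else
                {
                    Application.Exit();
                }
            }

            private void StartApplicationLogic() { /* ... */ }
        }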

    Read the article

  • How can I reply to email by sending a delivery failure message?

    - by wau
    Occasionally, I receive unwanted emails, particularly low-volume spam, or email messages falsely addressed to me (possibly due to typos). I don't want to spend the energy to write a personal message, but I do want to let the sender know that the message never arrived at its destination, and that resending it will be useless. In my opinion, a delivery error message to the sender would fill that need. So the question is whether there is an easy way to send error messages back to the sender. Is there any email client out there which supports sending delivery failure messages as a reply to incoming email - ideally in one click? (Or which can be configured to do so?) Alternatively, what is the best way to set up an environment that allows me to send delivery failure messages without too much effort per message sent?

    Read the article

  • How to register your Office365 30-day trial

    - by ybbest
    In this post, I'd like to show you how to register your Office365 30-day trial. It is extremely easy and the steps are as below: 1. Go to the Office365 home page. 2. Click on Free Trial and choose the options depending on your business requirements; I will choose Plan E3. 3. Fill in the details and create your trial. 4. Choose your access account, then click Save and continue. 5. Here is the landing page for your Office 365 account.

    Read the article

  • For 2D games, is there any reason NOT to use a 3D API like Direct3D or OpenGL?

    - by Eric Palakovich Carr
    I've been out of hobby game development for quite a while now. Back when I did it, most people used DirectDraw to create 2D games. By the time I stopped, people were saying OpenGL or Direct3D with an orthographic projection is just the way to go. I'm thinking about getting back into creating 2D games, in particular on mobile phones but maybe on the XNA platform as well. To make something using OpenGL I'd have a (hopefully) small learning curve to acclimate myself to 3D development. Is there any reason to skip that and instead work with a 2D framework where I just have a Width x Height frame buffer I need to fill with pixels?
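    For context, "a 3D API with an orthographic projection" amounts to very little code. A sketch of the XNA flavor, assuming a texture and a pre-built quad of vertices (XNA's SpriteBatch sets up an equivalent projection for you internally):

        // Draw a textured quad in pixel coordinates by pairing
        // BasicEffect with an orthographic projection
        var effect = new BasicEffect(GraphicsDevice)
        {
            TextureEnabled = true,
            Texture = texture,            // assumed loaded elsewhere
            World = Matrix.Identity,
            View = Matrix.Identity,
            // Map (0,0)-(width,height) straight onto the screen
            Projection = Matrix.CreateOrthographicOffCenter(
                0, GraphicsDevice.Viewport.Width,
                GraphicsDevice.Viewport.Height, 0, 0, 1)
        };

        foreach (var pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            // 'quad' is an assumed VertexPositionTexture[4]
            GraphicsDevice.DrawUserPrimitives(
                PrimitiveType.TriangleStrip, quad, 0, 2);
        }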

    Read the article

  • Some Adsense domain's ads are causing document.write() statements that remove the html from the page

    - by er1234
    All that is output on the page is the domain name of the advertiser, for example 'www.solar-aid.org'. The rest of the content is stripped, I believe because of a document.write() statement. I'd like to know if this is a common issue or something wrong with our setup. There are three domains causing the issue, which we've blocked from Adsense as a result. solar-aid.org kiva.org grameenfoundation.org Given the type of organizations I think they may be within the default group of 'public service ads' within the Backup Ads setting. If the issue doesn't completely resolve itself soon (one customer of ours complained today, even though I blocked them 5+ days ago), I'll disable public service ads and select the 'fill space with a solid color' option.

    Read the article

  • Drawing of a huge model - How to regain performance?

    - by marc wellman
    I have a huge model I want to draw in my XNA application, but because of its size I am experiencing a tremendous loss of performance. The model has about 50,000,000 edges and has a size on disk of 205 MB in DirectX format. Please don't ask whether this model has to be that big - yes it has! Is there a way to transfer the model directly to my GPU in order to let the GPU do the drawing, like when transferring a VertexBuffer like this: graphicsDevice.Vertices[1].SetSource(_instanceBuffers[i], 0, _sizeofMatrix); because when I try to fill a VertexBuffer with all the vertices I am getting an OutOfMemoryException.
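    One common workaround for the single-allocation limit is to split the geometry across several smaller vertex buffers and draw them in sequence; a sketch in XNA 4.0 style (the vertex type and chunk size are assumptions, and a model this large would also need its source vertex array loaded in slices rather than all at once):

        // Split a huge vertex array into several GPU buffers so no
        // single allocation has to hold all the vertices at once
        const int ChunkSize = 1000000; // vertices per buffer; tune as needed

        var buffers = new List<VertexBuffer>();
        for (int offset = 0; offset < vertices.Length; offset += ChunkSize)
        {
            int count = Math.Min(ChunkSize, vertices.Length - offset);
            var vb = new VertexBuffer(GraphicsDevice,
                typeof(VertexPositionNormalTexture), count,
                BufferUsage.WriteOnly);
            vb.SetData(vertices, offset, count); // copy this slice to the GPU
            buffers.Add(vb);
        }

        // At draw time, render each chunk in turn
        foreach (var vb in buffers)
        {
            GraphicsDevice.SetVertexBuffer(vb);
            GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0,
                vb.VertexCount / 3);
        }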

    Read the article

  • Database IDs

    - by fatherjack
    Just a quick post, mainly to test out the new blog format, but related to a question on the #sqlhelp hashtag. The question came from Justin Dearing (@zippy1981) as: So I take it database_id isn't an ever-incrementing value. #sqlhelp When a new database is created it is given the lowest available ID. This is either a gap in the IDs where a database has been dropped or, if there are no gaps to fill, one more than the highest current ID. To see this in action, connect to your sandbox server and try this:

        USE MASTER
        GO
        CREATE DATABASE cherry
        GO
        USE cherry
        GO
        SELECT DB_ID()
        GO
        CREATE DATABASE grape
        GO
        USE grape
        GO
        SELECT DB_ID()
        GO
        CREATE DATABASE melon
        GO
        USE melon
        GO
        SELECT DB_ID()
        GO
        USE MASTER
        GO
        DROP DATABASE grape
        GO
        CREATE DATABASE kiwi
        GO
        USE kiwi
        GO
        SELECT DB_ID()
        GO
        USE MASTER
        GO
        DROP DATABASE cherry
        DROP DATABASE melon
        DROP DATABASE kiwi

    You should get an incrementing series of database IDs as the databases are created, until the last one, where the new database is allocated the ID that is missing because one was dropped.

    Read the article

  • Impact of variable-length loops on GPU shaders

    - by Will
    It's popular to render procedural content inside the GPU, e.g. in the demoscene (drawing a single quad to fill the screen and letting the GPU compute the pixels). Ray marching is popular. This means the GPU is executing some unknown number of loop iterations per pixel (although you can have an upper bound like maxIterations). How does having a variable-length loop affect shader performance? Imagine the simple ray-marching pseudocode:

        t = 0.f;
        while (t < maxDist) {
            p = rayStart + rayDir * t;
            d = DistanceFunc(p);
            t += d;
            if (d < epsilon) {
                ... emit p
                return;
            }
        }

    How are the various mainstream GPU families (Nvidia, ATI, PowerVR, Mali, Intel, etc.) affected? Vertex shaders, but particularly fragment shaders? How can it be optimised?
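    One widely used mitigation is to give the compiler a statically bounded loop and exit early with break, so the loop is still dynamically short but has an upper bound the compiler can see. A GLSL sketch of the same march (DistanceFunc, rayStart, rayDir, and maxDist as in the pseudocode above):

        const int MAX_STEPS = 128;   // static bound visible to the compiler
        const float EPSILON = 0.001;

        float t = 0.0;
        for (int i = 0; i < MAX_STEPS; ++i) {
            vec3 p = rayStart + rayDir * t;
            float d = DistanceFunc(p);
            if (d < EPSILON) break;  // early out: surface hit
            t += d;
            if (t >= maxDist) break; // early out: ray escaped
        }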

    Read the article

  • Resize Partition with gparted

    - by arian
    I wanted to create more space for Ubuntu on my hard disk at the expense of my Windows partition. I booted the livecd and resized the NTFS partition to 100 GB. Then I wanted to resize my Ubuntu (ext4) partition to fill up the created unallocated space. A screenshot of my current disk. (With the livecd there's no 'key' icon after sda6.) My first thought was just: right-click on sda6 -> Move/Resize -> done. Unfortunately I cannot resize or move the partition. However, I can resize the NTFS partition. I guess it is because the extended sda4 partition is locked. I couldn't see an unlock possibility though… So how do I resize the ext4 partition anyway, probably by unlocking the extended partition, but how? Thanks in advance.

    Read the article

  • Half-maximized windows misplaced after switching display

    - by Arild
    I have a laptop which I often plug into a larger screen with a different display resolution, so I'm frequently changing the screen the system uses. I've also recently discovered the advantages of half-maximizing windows when working alternately in two or more applications. When changing screens, however, the windows basically retain their absolute sizes and positions. So in my case the windows typically have their bottoms outside the display, don't fill up half the display, etc. I have to half-maximize the windows again every time I switch screens. I'm thinking this should be the subject of a bug report, but I find it difficult to know where to file it. Any ideas on what package is responsible?

    Read the article

  • New version of the upgrade slides available

    - by Mike Dietrich
    Sorry for not posting for some weeks now. Our blog admins discovered a bug in the MovableType blog software we are using which prevents direct updates or access to the comments. So if you have commented, especially on the VM topic, I have read your comments and I'll approve them as soon as the admin part of MovableType works again. Besides that, Roy and I uploaded a new version of the slides last week: see http://apex.oracle.com/folien and use the keyword "upgrade112" (fill it into the empty field labeled "Schluesselwort", i.e. "keyword"). Thanks for your patience! Mike

    Read the article
