Search Results

Search found 17781 results on 712 pages for 'backup settings'.

Page 128 of 712

  • How do I backup my C++ Builder component installation?

    - by Gary Benade
    Hi, I finally have my C++ Builder 2010 installation the way I want it, with all my components upgraded and installed (touch wood). I have been working with C++ Builder since version 1, and I know from countless previous traumatic experiences that this state of affairs could change in an instant. I would like to back up the installation and component set. Is there a way to do this? A tool perhaps? A menu command that I have maybe missed all these years? I don't want to have to reinstall all the components from the bpl source again. I make nightly backup images of my entire drive; I would like to do this for C++ Builder only, if possible. If it's a matter of simply copying files, which files would I need to copy? Are there entries in the registry that would need to be restored? Thanks in advance for any thoughts and suggestions.

    Read the article

  • Play! Framework 1.2.4 --- C3P0 settings to avoid Communications link failure due to idle time

    - by HelpMeStackOverflowMyOnlyHope
    I'm trying to customize my C3P0 settings to avoid the error shown at the bottom of this post. It was suggested at this URL --- http://make-it-open.blogspot.com/2008/12/sql-error-0-sqlstate-08s01.html --- to adjust the settings as follows: In hibernate.cfg.xml, write <property name="c3p0.min_size">5</property> <property name="c3p0.max_size">20</property> <property name="c3p0.timeout">1800</property> <property name="c3p0.max_statements">50</property> Then create "c3p0.properties" in your root classpath folder and write c3p0.testConnectionOnCheckout=true c3p0.acquireRetryDelay=1000 c3p0.acquireRetryAttempts=1 I've tried to make those adjustments following the direction of the Play! Framework documentation, where they say use "db.pool..." as follows: db.pool.timeout=1800 db.pool.maxSize=15 db.pool.minSize=5 db.pool.initialSize=5 db.pool.acquireRetryAttempts=1 db.pool.preferredTestQuery=SELECT 1 db.pool.testConnectionOnCheckout=true db.pool.acquireRetryDelay=1000 db.pool.maxStatements=50 Are those settings not going to work? Should I be trying to set them in a different way? With those settings I still get the error shown below, which is due to too long an idle time. Complete Stack Trace of Error: 23:00:44,932 WARN ~ SQL Error: 0, SQLState: 08S01 2012-04-13T23:00:44+00:00 app[web.1]: 23:00:44,932 ERROR ~ Communications link failure 2012-04-13T23:00:44+00:00 app[web.1]: 2012-04-13T23:00:44+00:00 app[web.1]: The last packet successfully received from the server was 274,847 milliseconds ago. The last packet sent successfully to the server was 7 milliseconds ago. 2012-04-13T23:00:44+00:00 app[web.1]: 23:00:44,934 ERROR ~ Why the driver complains here? 2012-04-13T23:00:44+00:00 app[web.1]: com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.Connection was implicitly closed by the driver.
2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.Util.handleNewInstance(Util.java:407) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.Util.getInstance(Util.java:382) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1013) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.throwConnectionClosedException(ConnectionImpl.java:1213) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.getMutex(ConnectionImpl.java:3101) 2012-04-13T23:00:44+00:00 app[web.1]: at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:4975) 2012-04-13T23:00:44+00:00 app[web.1]: at org.hibernate.jdbc.BorrowedConnectionProxy.invoke(BorrowedConnectionProxy.java:74) 2012-04-13T23:00:44+00:00 app[web.1]: at $Proxy49.setAutoCommit(Unknown Source) 2012-04-13T23:00:44+00:00 app[web.1]: at play.db.jpa.JPAPlugin.closeTx(JPAPlugin.java:368) 2012-04-13T23:00:44+00:00 app[web.1]: at play.db.jpa.JPAPlugin.onInvocationException(JPAPlugin.java:328) 2012-04-13T23:00:44+00:00 app[web.1]: at play.plugins.PluginCollection.onInvocationException(PluginCollection.java:447) 2012-04-13T23:00:44+00:00 app[web.1]: at play.Invoker$Invocation.onException(Invoker.java:240) 2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job.onException(Job.java:124) 2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job.call(Job.java:163) 2012-04-13T23:00:44+00:00 app[web.1]: at play.jobs.Job$1.call(Job.java:66) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.FutureTask.run(FutureTask.java:166) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) 2012-04-13T23:00:44+00:00 app[web.1]: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) 2012-04-13T23:00:44+00:00 app[web.1]: at java.lang.Thread.run(Thread.java:636) 2012-04-13T23:00:44+00:00 app[web.1]: Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
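
    As a point of comparison, the c3p0.properties route from the linked post can also be tried in a Play 1.2.x app, since the conf/ directory is on the classpath and c3p0 reads a c3p0.properties file from the classpath root on its own. Below is a minimal sketch of that idea; the idleConnectionTestPeriod line is an added assumption (periodically test idle connections) rather than something from the post, and whether Play 1.2.4 forwards every db.pool.* key to c3p0 is not confirmed here.

        # hedged sketch: write conf/c3p0.properties (Play 1.2.x keeps conf/ on the classpath)
        printf '%s\n' \
            'c3p0.testConnectionOnCheckout=true' \
            'c3p0.acquireRetryDelay=1000' \
            'c3p0.acquireRetryAttempts=1' \
            'c3p0.preferredTestQuery=SELECT 1' \
            'c3p0.idleConnectionTestPeriod=300' \
            > conf/c3p0.properties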

    Read the article

  • Need to store and back up HD vid files, but need to access them a lot

    - by Mike
    I've had my 700 GB HDD ever since I bought my computer, so partitioning it is out of the question. What I need is a place to keep my HD vid files so that when I edit, I don't get a long load time in the editing software. But I also need to keep a backup of all my other important files, which I haven't been doing. Should I buy an additional internal drive JUST for vid files and buy an external for backup of all my files? What are my best options?

    Read the article

  • Can you recover from a backup with bad blocks?

    - by Macbook-Recovery
    The hard drive in my MacBook recently gave up while I was using it on a plane (dual prop, lots of vibration unfortunately). I have a backup of its contents from a few weeks ago, but there are files that aren't included in it that I would like to recover. As it stands right now, I have it plugged into my MacBook via USB. Snow Leopard recognizes it, but can't mount it. Therefore, tools like Diskwarrior and Techtools do not work. I started doing a clone of it with Data Rescue 3, but after 7 hours of activity (20% through the drive), it has copied 130 GB of the drive but reports all of the data as "bad blocks". My question is this: Is any data recoverable if the clone is completely composed of bad blocks?

    Read the article

  • How to back up/restore OS X Parental Controls before/after a complete reimage?

    - by Jim Anderson
    We typically "nuke and pave" users Mac OSX laptops if they have software issue. Prior to doing so, we backup the primary (non-admin) user's home folder. Our standard image has four accounts: Admin (uber admin user); Parent (admin account for the parents of students); Loaner (so our standard image will also work for our loaner laptop pool); Student (this is the primary, non-admin user of the laptop) Our standard image has only minimal Parental controls on the Loaner and Student accounts. Some parents choose to tighten the parental controls. We never know when parents have made changes to parental controls, or what those changes are. Once we have reimaged the machine with our standard image (minimal parental controls) we would like to be able to restore any custom parental controls parents may have placed on their student's account. Any help in this would be appreciated. Thanks.

    Read the article

  • How do I restore a database on a remote SQL server 2005 from a local backup?

    - by MatsT
    I have been given access to (parts of) a remote SQL Server 2005 with SQL Server authentication in order to be able to make changes to a database without involving other people who are not working on the project. The database has been created on my local machine. Is there any way to restore the remote database from a backup file on my local computer? I do not currently have access to the filesystem on the remote server. EDIT: To clarify, the access I have is that I can log in to the server via SQL Server Management Studio. I have one connection to my local database server and one connection to the remote server. What I basically want to do is copy the database from one connection to the other.
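
    One hedged workaround (not something the question confirms will fit this environment): back up the local database to a file share that the remote SQL Server's service account can read, then run the restore over the remote connection. The server names, share, credentials, and database name below are placeholders, and WITH MOVE clauses may be needed if the remote data/log paths differ from the local ones.

        REM back up the local database to a share both machines can reach (placeholder names)
        sqlcmd -S localhost -E -Q "BACKUP DATABASE MyDb TO DISK = N'\\fileserver\sqlshare\MyDb.bak' WITH INIT"
        REM restore on the remote instance using the SQL Server authentication login
        sqlcmd -S remoteserver -U myuser -P mypassword -Q "RESTORE DATABASE MyDb FROM DISK = N'\\fileserver\sqlshare\MyDb.bak'"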

    Read the article

  • If I re-key an SSL certificate for a 2nd/backup server, does the original still work?

    - by Matt
    We have a production server with a wildcard SSL certificate. I'm in the process of creating a backup/failover server that will host the same domains, and therefore will also need the SSL certificate. The certificate on the primary server was installed with the private key non-exportable, so I am unable to export the certificate for installation on the failover server. My question then is - if I re-key the certificate from Go Daddy, does the original certificate installed on the primary server cease to be valid? As an aside, the original (primary) server is IIS 6, the failover is IIS 7 (once the failover is operational, we'll likely upgrade the primary).

    Read the article

  • SQL Server 2000: need to prevent logons whilst performing a backup for a side-by-side migration

    - by pigeon
    I'm looking for a way to prevent logons from occurring in order to take a full backup of a database to migrate from its current SQL Server 2000 instance to a new SQL 2005 instance. A friend of mine suggested running a script which would put the DB into a rollback state (a sketch of what that usually looks like is shown below). Not being a DBA, my DDL is very poor, and running a script that I don't understand may not be the best idea. One option which might be easier is to simply detach and copy to the new server. Any suggestions would be greatly appreciated.
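
    For clarity, the "script which would put the DB into a rollback state" usually refers to switching the database to single-user mode WITH ROLLBACK IMMEDIATE before the backup; note this locks other users out of that database rather than blocking server logons in general. A rough sketch using the SQL Server 2000 command-line tool osql follows; the server name, database name, and backup path are placeholders.

        REM kick out existing sessions, rolling back their open transactions (placeholder names)
        osql -S OLDSERVER -E -Q "ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
        REM take the full backup while nobody else can use the database
        osql -S OLDSERVER -E -Q "BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb.bak' WITH INIT"
        REM let users back in once the backup file has been copied off
        osql -S OLDSERVER -E -Q "ALTER DATABASE MyDb SET MULTI_USER"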

    Read the article

  • My website is infected and I restored a backup of the uninfected files; how long will it take to be unmarked as dangerous?

    - by Cyclone
    My website www.sagamountain.com was recently infected by a malware distributor (or at least I think it may have been). I have removed all external content, Google ads, Firefly chat, etc. I uploaded a backup from a few weeks ago, when there was no issue. I patched the SQL injection hole. Now, how long will it take to unmark it as dangerous? Where can I contact Google? I am not sure if this is the right place to post it, but since it may have been a server issue I may as well. Can sites inject base64 code via a virus on the whole server, or is it only via SQL injection? Thanks for the help, viruses freak me out. Is there an online virus scanner that can scan my page and tell me what is wrong?

    Read the article

  • Should I anticipate any problems trying to use the same SSL Cert on 2 computers (primary, backup)?

    - by Matt
    We have a production machine running IIS6 with a wildcard SSL certificate. The certificate that was installed is not exportable. We want to upgrade the system to IIS7. As part of this venture, we're creating a backup/failover server that will serve the exact same websites - when we take the primary down for upgrade, the secondary will take over. As such, the secondary also needs the SSL certificate. However, since the certificate was not exportable, this means re-keying it from Go Daddy. Per http://help.godaddy.com/article/867, I know that by re-keying the certificate the original will stop working. I'm still pretty new to SSL certificates, so are there any problems I should anticipate when installing the same SSL certificate on 2 different machines?

    Read the article

  • How to extract a .raw backup of Windows to a partition?

    - by JamerTheProgrammer
    I have a backup of a Windows 7 drive (VirtualBox install) made in .raw format, and I want to extract it to my empty partition ready for Windows. I'm using OS X. Any ideas? I have tried this: sudo dd if=/Volumes/DATA/bootcamp.raw of=/dev/disk0s6 This works fine, but when I reboot (I'm on a Hackintosh, so I'm using the Chameleon boot loader) I get the normal Chameleon boot menu but with an unknown GPT partition (that's what it's called), and if I select that it says: Missing Operating system. Is the MBR broken on that partition?
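
    Whether that dd is the right move depends on whether bootcamp.raw is an image of a whole disk (MBR plus partition table) or of a single NTFS partition; writing a whole-disk image into a partition would explain the unbootable result. A hedged OS X sketch for checking the image before writing it, using the path from the question:

        # a whole-disk image shows MBR partition entries; a bare partition image shows an NTFS boot sector instead
        file /Volumes/DATA/bootcamp.raw
        # attach the raw image read-only without mounting, so its layout can be inspected
        hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount -readonly /Volumes/DATA/bootcamp.raw
        # list the attached device and any partitions it contains
        diskutil list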

    Read the article

  • iPhone 3G backup encryption? I've never entered a password?

    - by Lewis
    I can't uncheck or access my encrypted iPhone backup file. For the life of me, I cannot remember ever entering a password for the encrypted iPhone backups. I've tried every password I use or have used, and nothing is working. I'm not getting anywhere with long searches online. Can anyone here help? iPhone 3.1.2 iTunes 9.1.1 Mac OS X 10.5.8 Please help: how do I get my iPhone backed up from my 'locked' file I've never locked?

    Read the article

  • How to Add Your Gmail Account to Outlook 2013 Using IMAP

    - by Lori Kaufman
    If you use Outlook to check and manage your email, you can easily use it to check your Gmail account as well. You can set up your Gmail account to allow you to synchronize email across multiple machines using email clients instead of a browser. We will show you how to use IMAP in your Gmail account so you can synchronize your Gmail account across multiple machines, and then how to add your Gmail account to Outlook 2013. To set up your Gmail account to use IMAP, sign in to your Gmail account and go to Mail. Click the Settings button in the upper-right corner of the window and select Settings from the drop-down menu. On the Settings screen, click Forwarding and POP/IMAP. Scroll down to the IMAP Access section and select Enable IMAP. Click Save Changes at the bottom of the screen. Close your browser and open Outlook. To begin adding your Gmail account, click the File tab. On the Account Information screen, click Add Account. On the Add Account dialog box, you can choose the E-mail Account option, which automatically sets up your Gmail account in Outlook. To do this, enter your name, email address, and the password for your Gmail account twice. Click Next. The progress of the setup displays. The automatic process may or may not work. If the automatic process fails, select Manual setup or additional server types, instead of E-mail Account, and click Next. On the Choose Service screen, select POP or IMAP and click Next. On the POP and IMAP Account Settings screen, enter the User, Server, and Logon Information. For the Server Information, select IMAP from the Account Type drop-down list and enter the following for the incoming and outgoing server information: Incoming mail server: imap.googlemail.com Outgoing mail server (SMTP): smtp.googlemail.com Make sure you enter your full email address for the User Name and select Remember password if you want Outlook to automatically log you in when checking email. Click More Settings. On the Internet E-mail Settings dialog box, click the Outgoing Server tab. Select My outgoing server (SMTP) requires authentication and make sure the Use same settings as my incoming mail server option is selected. While still in the Internet E-mail Settings dialog box, click the Advanced tab. Enter the following information: Incoming server: 993 Incoming server encrypted connection: SSL Outgoing server encrypted connection: TLS Outgoing server: 587 NOTE: You need to select the type of encrypted connection for the outgoing server before entering 587 for the Outgoing server (SMTP) port number. If you enter the port number first, the port number will revert back to port 25 when you change the type of encrypted connection. Click OK to accept your changes and close the Internet E-mail Settings dialog box. Click Next. Outlook tests the account settings by logging into the incoming mail server and sending a test email message. When the test is finished, click Close. You should see a screen saying "You're all set!". Click Finish. Your Gmail address displays in the account list on the left with any other email addresses you have added to Outlook. Click the Inbox to see what's in your Inbox in your Gmail account. Because you're using IMAP in your Gmail account and you used IMAP to add the account to Outlook, the messages and folders in Outlook reflect what's in your Gmail account.
Any changes you make to folders and any time you move email messages among folders in Outlook, the same changes are made in your Gmail account, as you will see when you log into your Gmail account in a browser. This works the other way as well. Any changes you make to the structure of your account (folders, etc.) in a browser will be reflected the next time you log into your Gmail account in Outlook.     
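
    If the account test in Outlook fails, the server names and ports listed above can be checked independently of Outlook. A small sketch using the openssl command line (assuming OpenSSL is installed); a successful TLS handshake confirms that the host, port, and encryption type match the settings in the article:

        # IMAP over SSL on port 993; expect a "* OK" greeting once the session is established
        openssl s_client -connect imap.googlemail.com:993 -quiet
        # SMTP with STARTTLS on port 587; a completed handshake confirms STARTTLS support
        openssl s_client -starttls smtp -connect smtp.googlemail.com:587 -quiet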

    Read the article

  • Make your TSQL easier to read during a presentation

    - by Jonathan Allen
    SQL Server Management Studio 2012 has some neat settings that you can use to help make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes making some settings changes. Historically, I have been reluctant to make changes to my SSMS settings as it is such a tedious process and it's not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a lot less risky. In any session that involves TSQL there is a trade-off between the speaker having all the code on screen and the attendees being able to read any of what is on screen. You (the speaker) might be able to read this when you are working on the code, but plenty of your audience won't be able to make head or tail of it. SSMS 2012 has a zoom facility that can help: but don't go nuts … Having the font too big means you will be scrolling a lot and the code will again be rendered unreadable. There is more, but you need to take a deep breath and open the Tools menu and delve into the SSMS options. In previous versions of SSMS this is a deep, dark and scary place where changing values can be obscure and sometimes catastrophic to the UI when you get back to the code editor. First things first, we set out as a good DBA and save our current (and presumably acceptable) SSMS configuration. From the Import and Export Settings you can set up a file to hold all of the settings that you currently have. The wizard will open and ask you to pick an option. This time around choose to export settings. Hit Next and Next again, then name your settings profile in the final step of the wizard and click Finish. Once this is done you can change whatever you like and always get back to this configuration in a couple of clicks. So what can you change to make for a good experience? Well, there are plenty of things that can be altered, but don't go too mad and change too many things without taking a look at the results; for every item on the list above you can change font, size, weight, colour, background colour etc., but consider what you are trying to achieve and take it slowly. I have seen presenters with their settings set to have a yellow highlight and black font rather than the default pale blue background and slightly darker font; to achieve that, select Text Editor and then select "Selected Text" in the Display Items listbox. As you change things, the Sample area gives you an idea of what effect you are going to have. Black and yellow is the colour combination with the highest contrast – that's why bees and wasps* are that colour. What next? How about increasing the default font for your demo scripts? This means that any script you open and any new ones that you start will take on this font. No more zooming (or forgetting to) in the middle of sessions. Now don't forget to save this profile – follow the same steps as above but give the profile a different name, something like PresentationBigFontHighContrast might be appropriate. Once you are done making changes, export the settings once more and then go into the Import Export wizard and import settings from the first profile you created. Everything will be back to normal. Now making changes to suit your environment can be done very easily and with confidence. * – and warning tape and safety signs and so forth – Health and Safety officers simply copy nature!

    Read the article

  • Asus P8P67 Rev. 3.1 Motherboard issues powering on and saving settings

    - by Scott
    Edit: New Information Have some updated information from the old question below: So basically my issue right now is somewhat similar, but I've been able to rule out a couple of things. I don't think this has anything to do with light on the motherboard. No matter what lights are on/off on the motherboard when the computer is off, they don't affect this issue. The main power LED on the Mobo is always lit when the power supply is turned on, and that's what matters anyway. Even when the main power LED is on, the PC will NOT boot up the first time I hit the power switch. I have to go reset the power supply (make all lights turn off on the Mobo and back on), and THEN hit the power switch. Then everything boots up. Also, the BIOS settings are reset every time this happens. Asus Tech Support told me to try jumping the power with something metal to try and rule out that it's a problem with the connectors getting power, or if it's a problem with the case power switch pins - haven't done that yet though. Any ideas? This is a lot simpler than it was before when I thought it had to do with certain LED indicators for RAM, EPU, etc. Original Question So I built my new desktop just about 3 weeks ago. I've been having a few issues which I think are all related to my motherboard, an Asus P8P67 Revision 3.1, but I'm not 100% sure as this is really the first from-scratch build I've ever done. I've posted these questions on the Asus forums, Asus Tech Support, and the Corsair forums as well as I thought it might have something to do with my power supply at one point. None of these avenues have solved my issue until now completely, so I thought I'd come here to see what you guys think. Here's what's happening: My computer is off, and I go to power it on. I press the power switch on the case (Antec Nine Hundred), and nothing seems to happen. Upon further inspection, I see that what this actually does is simply turn on the EPU LED on my motherboard, but doesn't actually boot anything up. I then have to go and flip the main power switch on the power supply off and back on. What this does is turn off all lights on the Motherboard after a few seconds, and turn them all back on (including the EPU LED that was off before I hit the power switch the first time). Now, hitting the power switch works. The machine boots up fine, and starts going through the boot up process. As a side note: My Motherboard is set to "Force BIOS", and every single time I change this to do the opposite, the next time my computer boots up that change reverts itself. I think this may be due to the fact that I am doing the hard reset on the power supply each time, but I'm not sure. I had thought that the Motherboard would keep its BIOS settings unless you did something to the Mobo itself - so this may be a related issue, or something else completely. That's basically it. Once it's on, it's on. It works fine, recognizes all of my hardware, and runs great. All fans/lights in the case work great, and I'm getting standard readings. The next time I go to shut the computer down however, I can expect the same exact process getting it up and running, including being forced to go into BIOS and exit again before I can load Windows. Another side note: If I power on my computer using the power switch DIRECTLY after shutting it down, it powers right back on (I think this is because the EPU LED light doesn't have time to turn off). 
It looks as if as long as the EPU LED is lit up on the motherboard before I hit the power switch on the case, the thing will boot up fine (although this doesn't explain the "Force BIOS" issue, at least it's something). Any ideas? Thanks guys. P.S. - System Specs Asus P8P67 Rev. 3.1 Motherboard Intel Core i7 2600K Processor 16GB (4x4GB) G-Skill 1600 RAM NVIDIA EVGA GTX 570 Video Card Crucial 128GB SSD HD Corsair 850W Power Supply Seagate 2TB HDD

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas and not necessary to boot. For that reason I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusion. I need two recipes: to setup/configure the build environment once steps to do after editing any source file of this kernel module (.c and .h) and converting that edit into a new kernel module (.ko) The sources that have been used are: build one kernel module - Google search http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/ http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module http://www.pixelbeat.org/docs/rebuild_kernel_module.html How do I build a single in-tree kernel module? http://ubuntuforums.org/showthread.php?t=1153067 http://ubuntuforums.org/showthread.php?t=2112166 http://ubuntuforums.org/showthread.php?t=1115593 build one kernel module ubuntu - Google search 'make +single +kernel +module' - Ask Ubuntu 'make +kernel +module' - Ask Ubuntu My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed '"Invalid module format"' - Ask Ubuntu Driver installation: compiling source code for newer kernel Modprobe: 'Invalid nodule format', yet works after insmod "Symbol version dump" "is missing" - Google search http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server? "no symbol version for module_layout" when trying to load usbhid.ko Broken links inside Linux header file folder 'make modules_install' - Ask Ubuntu 'modules_install' - Ask Ubuntu Empty build directory in custom compiled kernel Not able to see pr_info output In which directory are the kernel source files and how can I recompile it? How can I compile and install that patched libata-eh.c file? 'modules_install +depmod' - Ask Ubuntu modules_install depmod - Google search "make modules_install" - Google search http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process https://wiki.ubuntu.com/KernelCustomBuild http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html "make prepare" - Google search "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search http://ubuntuforums.org/showthread.php?t=1963515 ubuntu "make prepare" version - Google search http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source https://help.ubuntu.com/community/Kernel/Compile How do I compile a kernel module? How to add a custom driver to my kernel? Compile and loading kernel module without compiling the kernel
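
    As a starting point, here is a rough, hedged recipe (not an authoritative one) for rebuilding just mvsas against the stock Ubuntu 14.04 kernel. The package names and paths are the usual Ubuntu ones and may need adjusting; apt-get source requires deb-src lines to be enabled, and copying Module.symvers from the installed headers is what avoids the "no symbol version for module_layout" load failure mentioned in the links above.

        # one-time setup: toolchain, headers, and the kernel source matching the running kernel
        sudo apt-get install build-essential linux-headers-$(uname -r)
        apt-get source linux-image-$(uname -r)        # needs deb-src entries in sources.list
        cd linux-3.13.0/                              # directory name depends on the source package version
        cp /boot/config-$(uname -r) .config
        cp /usr/src/linux-headers-$(uname -r)/Module.symvers .
        make olddefconfig && make prepare && make scripts

        # after each edit to drivers/scsi/mvsas/*.c or *.h: rebuild and reload only that module
        make M=drivers/scsi/mvsas modules
        sudo cp drivers/scsi/mvsas/mvsas.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/mvsas/
        sudo depmod -a
        sudo modprobe -r mvsas && sudo modprobe mvsas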

    Read the article

  • How to boot the kernel using the EFI stub (efistub) loader?

    - by Pro Backup
    I have Ubuntu 14.04 running in UEFI mode as only operating system, no dual-boot here. The kernel version is 3.13.0-24-generic. There is an EFI partition. In this case the EFI partition is not at the default /dev/sda1 but at /dev/sda3 because I did actually convert BIOS mode to EFI mode. I have used the grub-efi-amd64 package, though that actually loads GRUB boot menu from UEFI firmware boot menu (UEFI boot loads \EFI\ubuntu\grubx64.efi). I want to skip that double boot menu loading step, and boot faster, directly from UEFI into the kernel. The Ubuntu kernels since 12.10 have "Kernel EFI stub loader" feature. I know I do need to copy the Ubuntu kernel to the EFI partition (possibly rename) and create an entry in UEFI boot menu (for instance using efibootmgr). Which exact terminal commands are necessary to do this?
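
    A hedged sketch of the usual approach, assuming the ESP from the question (/dev/sda3) is mounted at /boot/efi; the root= value below is a placeholder that must match the actual root filesystem (a UUID is safer than a device name), and the directory name under EFI is arbitrary.

        # copy the kernel (which contains the EFI stub) and the initramfs onto the EFI System Partition
        sudo mkdir -p /boot/efi/EFI/ubuntu-efistub
        sudo cp /boot/vmlinuz-3.13.0-24-generic /boot/efi/EFI/ubuntu-efistub/vmlinuz.efi
        sudo cp /boot/initrd.img-3.13.0-24-generic /boot/efi/EFI/ubuntu-efistub/initrd.img
        # register a UEFI boot entry that loads the kernel directly (root= is a placeholder)
        sudo efibootmgr -c -d /dev/sda -p 3 -L "Ubuntu efistub" \
             -l '\EFI\ubuntu-efistub\vmlinuz.efi' \
             -u 'root=/dev/sda1 ro initrd=\EFI\ubuntu-efistub\initrd.img'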

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective.
    Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
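
    To make the pieces above concrete, here is a rough sketch (placeholder server names, database name, and file paths) of the three things discussed: a frequent transaction log backup, the lighter physical-only check on production with the two trace flags mentioned, and a full DBCC CHECKDB on the box where the backups get restored and verified.

        REM frequent transaction log backup on production (typically scheduled as an agent job)
        sqlcmd -S PRODSERVER -E -Q "BACKUP LOG MyDb TO DISK = N'G:\Backups\MyDb_log.trn'"
        REM lightweight consistency check on production; 2562/2549 speed up physical-only checks on 2008 R2 SP1 CU4+
        sqlcmd -S PRODSERVER -E -Q "DBCC TRACEON (2562, 2549, -1); DBCC CHECKDB (MyDb) WITH PHYSICAL_ONLY"
        REM full check on the server where the backups are restored for verification
        sqlcmd -S RESTORESERVER -E -Q "RESTORE DATABASE MyDb FROM DISK = N'G:\Backups\MyDb_full.bak' WITH REPLACE; DBCC CHECKDB (MyDb)"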

    Read the article

  • Will BIOS boot mode Ubuntu install be able to boot when firmware "Fast Boot" is "Ultra Fast"?

    - by Pro Backup
    I have an AsRock mainboard with UEFI BIOS P1.50 02/14/2014. The firmware "Fast Boot" option is set to "Fast", Boot Option #1 is set to "AHCI P4: OCZ-VERT...": this is BIOS not UEFI boot. This boot disk has an MBR partitioning scheme (# parted -l | grep Partition\ Table:). Therefore Ubuntu 14.04 is installed in BIOS/CSM (Grub-PC) mode. The Ubuntu boot process ends in a text console (no GUI). There is no external graphics card in use. The stock Ubuntu kernel is replaced with the Ubuntu-supplied mainline 3.16.0-031600rc6-generic. dmesg outputs lines containing BIOS, like: SMBIOS 2.7 present Calgary: detecting Calgary via BIOS EBDA area Calgary: Unable to locate Rio Grande table in EBDA - bailing! [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored BIOS EDD facility v0.16 2004-Jun-25, 0 devices found The ASRock BIOS itself displays this help text for "Ultra Fast - Fast Boot": Ultra Fast mode is only supported by Windows 8 and the VBIOS must support UEFI GOP if you are using an external graphics card. Please notice that Ultra Fast mode will boot so fast that the only way to enter this UEFI Setup Utility is to Clear CMOS or run the Restart to UEFI utility in Windows. Assumptions: I suspect that after changing the UEFI setting "Fast Boot" to "Ultra Fast" the machine will no longer boot into Ubuntu's console. I expect that when first exchanging "Grub-pc" with "Grub-efi", the machine will still be able to boot to a grub menu (thus allowing me to change the "Fast Boot" setting back to "Fast" without clearing CMOS). Are these two "Fast Boot" assumptions correct, and/or may I expect Ubuntu 14.04 running mainline kernel 3.16rc6 and Grub-efi to still boot to console after enabling UEFI Ultra Fast Boot?

    Read the article

  • rsync on QNAP NAS has been failing recently

    - by user192702
    I have been using rsync to copy a large backup file from a remote host to my QNAP NAS. It's been working fine until recently. It seems like almost every time it executes, it gives a timeout after 15s. Following is what I have captured in the log. Any ideas? 2013-11-10 23:10:01 HKT - Executing: rsync -t -v -e ssh [email protected]:/home/backup/backup/backup_file-11102013* /share/homes/backup/backup/web/database [receiver] io timeout after 10 seconds -- exiting rsync error: timeout in data send/receive (code 30) at io.c(140) [receiver=3.0.7] rsync: connection unexpectedly closed (73 bytes received so far) [generator] rsync error: error in rsync protocol data stream (code 12) at io.c(601) [generator=3.0.7] 2013-11-10 23:10:15 HKT - Done rsync
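
    If the stall is a transfer that goes quiet rather than a dead link, the options below are worth experimenting with. This is a hedged variation of the command from the log, with the remote user and host shown as placeholders: --timeout raises rsync's own I/O timeout, --partial keeps partially transferred files so a retry can pick up from them, and the ServerAliveInterval options make ssh keep the connection alive.

        rsync -t -v --timeout=300 --partial \
              -e "ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=6" \
              user@remotehost:/home/backup/backup/backup_file-11102013* \
              /share/homes/backup/backup/web/database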

    Read the article
