Search Results

Search found 2080 results on 84 pages for 'administration'.

Page 9/84 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Statistical Sampling for Verifying Database Backups

    A DBA's huge workload can start to threaten best practices for data backup and recovery, but ingenuity, and an eye for a good tactic, can usually find a way. For Tom, the revelation of a solution came from eating crabs. Statistical sampling can be brought to bear to minimise the risk of failure of an emergency database restore.
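
    The core idea can be sketched in T-SQL: instead of verifying every backup, pick a random sample from the backup history and verify just that. This is only an illustrative sketch, not the article's script - the msdb history tables are real, but the sampling policy here is an assumption, and RESTORE VERIFYONLY checks media readability rather than full restorability:

      -- Illustrative sketch: verify one randomly sampled full backup.
      -- The sampling policy (uniform random via NEWID()) is assumed
      -- for demonstration, not taken from the article.
      DECLARE @backup nvarchar(260);

      SELECT TOP (1) @backup = bmf.physical_device_name
      FROM msdb.dbo.backupset AS bs
      JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
      WHERE bs.type = 'D'                -- full database backups only
      ORDER BY NEWID();                  -- uniform random sample

      RESTORE VERIFYONLY FROM DISK = @backup;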

    Read the article

  • TechEd 2010 Followup

    - by AllenMWhite
    Last week I presented a couple of sessions at Tech Ed NA in New Orleans. It was a great experience, even though my demos didn't always work out as planned. Here are the sessions I presented: DAT01-INT Administrative Demo-Fest for SQL Server 2008 SQL Server 2008 provides a wealth of features aimed at the DBA. In this demofest of features we'll see ways to make administering SQL Server easier and faster such as Centralized Data Management, Performance Data Warehouse, Resource Governor, Backup Compression...(read more)

    Read the article

  • SQL Rally Presentations

    - by AllenMWhite
    As I drove to Dallas for this year's SQL Rally conference (yes, I like to drive) I got a call asking if I could step in for another presenter who had to cancel at the last minute. Life happens, and it's best to be flexible, and I said sure, I can do that. Which presentation would you like me to do? (I'd submitted a few presentations, so it wasn't a problem.) So yesterday I presented "Gathering Performance Metrics With PowerShell" at 8:45AM, and my newest presentation, "Manage SQL Server 2012 on Windows...(read more)

    Read the article

  • Interactive manifest editing with the Automated Installer Manifest Wizard

    - by Glynn Foster
    Oracle Solaris 11.2 adds a new Automated Installer (AI) Manifest Wizard to allow administrators to more easily create AI manifests for use in provisioning new client systems in the data center. The AI Manifest Wizard is a web-based interface that steps administrators through the basics of the AI manifest - target disks and layout selection, additional ZFS pools and datasets, IPS publisher and package selection, and the creation of any Oracle Solaris Zone virtual environments. The end result is an AI manifest, produced without having to directly edit XML, which can then be associated with an appropriate AI service. To get started, check out How To Create an Automated Installer Manifest with an Interactive Wizard.

    Read the article

  • No more admin users?

    - by Kevin
    I am trying to install 12.04 on a Windows 7 machine. I used the Windows installer because I can't get the USB install to work. The Windows installer picks up the user name from Windows 7. I wanted to change the admin name and set up some standard users, but for some reason the admin password would not work, and it changed all the users to 'standard.' Now I cannot add any users. Should I just reinstall?

    Read the article

  • I want a non-admin user to install software. What commands do I need to add to sudoers?

    - by Chance
    I want to edit the /etc/sudoers file so that a non-admin user can install software via the Software Center in Linux Mint 10. The reason is that I want a user to be able to install programs, but not make any other configuration changes to the system. So far I have the following (some of these may not make sense; I was just trying whatever I thought of):

      username ALL= /usr/bin/aptitude
      username ALL= /usr/bin/dpkg
      username ALL= /usr/local/bin/apt-get
      username ALL= /usr/lib/linuxmint/mintUpdate/mintUpdate.py
      username ALL= /usr/bin/software-center
      username ALL= /usr/bin/synaptic

    So far it allows me to do updates without asking for my password, but it will not let me install software without entering an admin password. I am aware of the question "How can I set the Software Center to install software for non-root users?", but that goes the route of modifying PolicyKit, whereas I'm interested in a sudo solution, because it seems a simpler way to go.

    Read the article

  • Is it worth it to switch from home-grown remote command interface to using JMX

    - by Sam Goldberg
    Without knowing too much about JMX, I've always assumed that it would be the best approach for building remote management into our standalone Java server application. Our server application has some minimal remote control capability, using text commands sent to it via a TCP/IP socket. With the home-grown approach it is fairly easy to add a new command (just create the new command text, and the code to handle it in the message receiver). On the other hand, we have hardly implemented any commands, even though there are many things we would like to be able to execute remotely. I am trying to weigh the value of incorporating JMX (learning it, and building the interfaces) against just sticking with the home-grown approach. Does anyone have any experience or advice regarding changing an existing application to use JMX?

    Read the article

  • Exceptional PowerShell DBA Pt 3 - Collation and Fragmentation

    In this final look into his everyday essentials, Laerte Junior provides some useful scripts for the DBA that use an alternative way of error-logging. He shows how to use a PowerShell script to check and, if necessary, to defragment your indexes, write data to a SQL Server table, and change the collation for a table. Being an exceptional DBA just got a little easier.
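
    The article drives this from PowerShell, but the underlying T-SQL that such a script typically wraps looks something like the sketch below. The fragmentation thresholds and index names are illustrative assumptions, not taken from Laerte's script:

      -- Find fragmented indexes in the current database
      SELECT OBJECT_NAME(ips.object_id) AS table_name,
             i.name                     AS index_name,
             ips.avg_fragmentation_in_percent
      FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
      JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
      WHERE ips.avg_fragmentation_in_percent > 5;

      -- Then, per index: REORGANIZE for light fragmentation, REBUILD for
      -- heavy. The 5-30% rules of thumb and names here are hypothetical.
      ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;  -- 5-30%
      ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD;     -- over 30%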

    Read the article

  • My Last "Catch-Up" Post for 2010 Content

    - by KKline
    I did a lot of writing in 2010. Unfortunately, I didn't do a good job of keeping all of that writing equally distributed throughout all of the channels where I'm active. So here are a few more posts from my blog, put on-line during the months of November and December 2010, that I didn't get posted here on SQLBlog.com: 1. It's Time to Upgrade! So many of my customers and many of you, dear readers, are still on SQL Server 2005. Join Kevin Kline , SQL Server MVP and SQL Server Technology Strategist...(read more)

    Read the article

  • Using Substring() in XML FLWOR Queries

    - by Jonathan Kehayias
    Tonight I was monitoring the #sqlhelp hashtag on Twitter for a response to a question I had asked, when Randy Knight (Twitter) asked a question about using SUBSTRING in FLWOR statements with XML: "#sqlhelp Is there a way to do a SQL Type "LIKE" or "SUBSTRING" in the where clause of FLWOR statement? Need to evaluate just first n chars." By the time I posted a response, Randy had figured out how to use the contains() function to solve his problem, but I am going to blog this because...(read more)
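
    For reference, the pattern looks like this (a minimal sketch; the XML document and names are hypothetical, not Randy's actual data). SQL Server's XQuery supports contains() in the where clause of a FLWOR expression, and substring() covers the "first n chars" case:

      DECLARE @x xml = N'<Products>
        <Product Name="Widget Deluxe" />
        <Product Name="Gadget" />
      </Products>';

      -- contains() in the where clause of a FLWOR expression
      SELECT @x.query('
        for $p in /Products/Product
        where contains($p/@Name, "Widget")
        return $p');

      -- Evaluating just the first n characters with substring()
      SELECT @x.query('
        for $p in /Products/Product
        where substring($p/@Name, 1, 6) = "Widget"
        return $p');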

    Read the article

  • Database Delivery Patterns and Practices

    Continuous database delivery is an automated process for building, deploying and testing databases to reduce risk and make rapid releases possible. It's enabled by a pipeline that starts when database changes are checked in, and ends when they're deployed to production. The articles collected here will help you understand the theories and methodologies behind every stage of the database delivery pipeline.

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent the data being accessed simply by restoring the files to another server: with Transparent Data Encryption in place, that requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains fully, and guides you through the process of setting it up.
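
    The setup follows this general T-SQL shape (a minimal sketch; the database name, password and certificate name are placeholder assumptions, and the certificate must be backed up before relying on this):

      USE master;
      -- 1. Master key in master, protected by a password (placeholder)
      CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';

      -- 2. Certificate to protect the database encryption key
      CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

      -- 3. Database encryption key inside the user database
      USE MyDatabase;
      CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDECert;

      -- 4. Turn encryption on (back up the certificate and key first!)
      ALTER DATABASE MyDatabase SET ENCRYPTION ON;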

    Read the article

  • A new client comes into my web agency. How do I configure email and social accounts to work better? [on hold]

    - by Marco Panichi
    I have created websites for many years but still have not found the right way to organize all the email and social accounts of every client. Every web agency follows dozens of customers, and each client needs at least Google Analytics, AdWords, a Facebook page, a Twitter profile, a YouTube channel, probably a listing on Google Places and maybe a MailChimp (or similar) account. The web agency, in my opinion, must own these accounts, use them to deliver results to the customer and - of course - make them available to the customer for two reasons: the customer must be able to see how things are going, and the client must have the ability to change web agency without suffering. The web agency, however, has many problems in managing all of these accounts. For example, I like the idea of having a Gmail account for each client and using all of Google's products from that account, but it is not possible to create many Gmail accounts from the same IP address and with the same phone number. The web agency could invite the customer to create his own accounts, but this is not necessarily a value for the customer (indeed...), the web agency would still manage them from the same IP address, running into the same problems, and if phone verification occurs, the web agency has to disturb the customer for verification. Do you have the same problem? How do you solve it?

    Read the article

  • How to run files and open folders as an administrator in 11.10

    - by Mattlinux1
    Solved! Enable "open as administrator" in Nautilus. To get started, press Ctrl - Alt - T on your keyboard to open Terminal. When it opens, run the command below:

      sudo apt-get install nautilus-gksu

    After installing that package, copy and paste the line below and press Enter:

      sudo cp /usr/lib/nautilus/extensions-2.0/libnautilus-gksu.so /usr/lib/nautilus/extensions-3.0/

    Finally, log out and back in, then test it: right-click the file you want to run as admin and you'll see a popup menu with the entry "Open as Administrator". Visit http://www.liberiangeek.net/2011/12/add-open-as-administrator-to-nautilus-context-menu-in-ubuntu-11-10-oneiric-ocelot/ for help. Enjoy!

    Read the article

  • A case for not installing your own software

    - by James Gentsch
    This week I watched some of the Oracle OpenWorld presentations (from the comfort of my Oracle office) and happened on some of Larry Ellison's comments about cloud computing and engineered systems. Larry said he sees the move to these as analogous to the moves made by the original adopters of electricity. The argument goes that the first consumers of electricity had to set up their own power plant. Then, as the market and infrastructure for electricity matured, power consumers moved from using their own personal power plant to purchasing power from another entity that was focused on power production as their primary product. In the end this was a cheaper and more reliable solution.

    Now, there are lots of compelling reasons to be looking very seriously at cloud computing and engineered systems for enterprise application deployment. However, speaking as a software developer of enterprise applications, the part of this that I really love (besides Larry's early electricity adopter analogy) is that as a mode of application deployment it provides me and my customers a consistent environment in which the applications I am providing will be run. This cuts way down on the environmental surprises that consistently lead to the hated "well, it works here" situation with the support desk.

    And just to be clear, I think I hate this situation more than my clients, who I think are happy that at least it is working somewhere. I hate it because when a problem happens (and let's face it, customers are not wasting their time calling in easy problems) we are seriously disabled when we cannot reproduce an issue triggered by something unforeseen in the environment where the application is running. This situation is incredibly frustrating and an all too frequent occurrence.

    I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for. I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make "well, it works here" a well-forgotten phrase for future software developers. It may even impact the stress squeeze toy industry. Well, maybe at least for my group.

    Read the article

  • Exploring In-memory OLTP Engine (Hekaton) in SQL Server 2014 CTP1

    The continuing drop in the price of memory has made fast in-memory OLTP increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to memory-optimised 'Hekaton' technology, but how you balance between disk tables and in-memory tables for optimum performance requires judgement and experiment. What is this technology, and how can you exploit it? Rob Garrison explains.
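
    A flavour of what migration involves, as a minimal sketch with hypothetical names against the 2014 CTP1 syntax (the database first needs a memory-optimised filegroup):

      -- Memory-optimised filegroup and container (path is a placeholder)
      ALTER DATABASE MyDatabase
        ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
      ALTER DATABASE MyDatabase
        ADD FILE (NAME = 'imoltp_dir', FILENAME = 'C:\Data\imoltp_dir')
        TO FILEGROUP imoltp_fg;

      -- A memory-optimised table: the hash index bucket count should be
      -- sized to the expected row count (1M here is an assumption)
      CREATE TABLE dbo.SessionState (
        SessionId   int NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        LastTouched datetime2 NOT NULL
      ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);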

    Read the article

  • Database Maintenance Scripting Done Right

    - by KKline
    I first wrote about useful database maintenance scripts on my SQLBlog account way back in 2008. Hmmm - now that I think about it, I first wrote about my own useful database maintenance scripts in a journal called SQL Server Professional back in the mid-1990's on SQL Server v6.5 or some such. But I digress... Anyway, I pointed out a couple useful sites where you could get some good scripts that would take care of preventative maintenance on your SQL Server, such as index defragmentation, updating...(read more)

    Read the article

  • Set and Verify the Retention Value for Change Data Capture

    - by AllenMWhite
    Last summer I set up Change Data Capture for a client to track changes to their application database to apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this query to make that change: sp_cdc_change_job @job_type='cleanup', @retention=7200 The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
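
    For anyone wondering before reading the full post, the configured value can be read straight from the standard CDC metadata (a quick sketch; the article may take a different route):

      -- Run in the CDC-enabled database: shows capture and cleanup job settings
      EXECUTE sys.sp_cdc_help_jobs;

      -- Or query the jobs table directly; retention is in minutes
      SELECT job_type, retention
      FROM msdb.dbo.cdc_jobs
      WHERE job_type = 'cleanup';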

    Read the article

  • Managing Data Growth in SQL Server

    'Help, my database ate my disk drives!'. Many DBAs spend most of their time dealing with variations of the problem of database processes consuming too much disk space. This happens because of errors such as incorrect configurations for recovery models, data growth for large objects and queries that overtax TempDB resources. Rodney describes, with some feeling, the errors that can lead to this sort of crisis for the working DBA, and their solution.
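
    A quick first check along the lines Rodney describes is to compare each database's recovery model with its log size, since a FULL recovery model without log backups is a classic culprit. A minimal illustrative query, not taken from the article:

      -- Recovery model vs. total log size per database (size is in 8 KB pages)
      SELECT d.name,
             d.recovery_model_desc,
             SUM(f.size) * 8 / 1024 AS log_mb
      FROM sys.databases AS d
      JOIN sys.master_files AS f
        ON f.database_id = d.database_id
      WHERE f.type_desc = 'LOG'
      GROUP BY d.name, d.recovery_model_desc
      ORDER BY log_mb DESC;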

    Read the article

  • More than one way to skin an Audit

    - by BuckWoody
    I get asked quite a bit about auditing in SQL Server. By "audit", people mean everything from tracking logins to finding out exactly who ran a particular SELECT statement. In the really early versions of SQL Server, we didn't have a great story for very granular audits, so lots of workarounds were suggested. As time progressed, more and more audit capabilities were added to the product, and in typical database platform fashion, as we added a feature we didn't often take the others away. So now, instead of not having an option to audit actions by users, you might face the opposite problem - too many ways to audit! You can read more about the options you have for tracking users here: http://msdn.microsoft.com/en-us/library/cc280526(v=SQL.100).aspx In SQL Server 2008, we introduced SQL Server Audit, which uses Extended Events to provide a simple way to implement high-level or granular auditing. You can read more about that here: http://msdn.microsoft.com/en-us/library/dd392015.aspx As with any feature, you should understand what your needs are first. Auditing isn't "free" in the performance sense, so you need to make sure you're only auditing what you need to.
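
    In outline, the SQL Server Audit route looks like this (a minimal sketch; the names, file path and the audited action group are placeholder choices, not a recommendation):

      -- 1. Server-level audit target, created in master (path is a placeholder)
      CREATE SERVER AUDIT LoginAudit
        TO FILE (FILEPATH = 'C:\AuditLogs\');

      -- 2. What to capture: failed logins, as an example action group
      CREATE SERVER AUDIT SPECIFICATION FailedLoginSpec
        FOR SERVER AUDIT LoginAudit
        ADD (FAILED_LOGIN_GROUP);

      -- 3. Both are created disabled; switch them on
      ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);
      ALTER SERVER AUDIT SPECIFICATION FailedLoginSpec WITH (STATE = ON);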

    Read the article

  • Top 10 Oracle Solaris How To Articles

    - by Glynn Foster
    While generating new technical content for Oracle Solaris 11 is one of our higher priorities here at Oracle, it's always fun to have a look at some web stats to see what existing published content is popular among our audience. So here's the top ten as voted by your browsers. Interestingly, it's a great mix of technologies. What's your favourite? Let us know!

      1. Taking your first steps with Oracle Solaris 11
      2. How to get started creating Zones on Oracle Solaris 11
      3. How to script Oracle Solaris 11 Zone creation for a network in a box configuration
      4. How to configure Oracle Solaris 11 using the sysconfig command
      5. How to update Oracle Solaris 11 systems using Support Repository Updates
      6. How to perform system archival and recovery with Oracle Solaris 11
      7. Introducing the basics of IPS on Oracle Solaris 11
      8. How to update to Oracle Solaris 11.1 using IPS
      9. How to set up Automated Installer services on Oracle Solaris 11
      10. How to live install from Oracle Solaris 10 to Oracle Solaris 11 11/11

    Read the article

  • Can't get into the admin console after migrating to a new server

    - by Emerson
    I migrated my WordPress blog to a new server, and everything seemed to be working fine until it started giving me this error when entering the admin area: Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 4864 bytes) in /home/neworder/public_html/blog/wp-admin/includes/plugin.php on line 729 (33554432 bytes is a 32 MB PHP memory limit). Line 729 has: $protected = array( '_wp_attached_file', '_wp_attachment_metadata', '_wp_old_slug', '_wp_page_template' ); I had installed the maintenance-mode plugin, and I suspect this is what broke it. If I remove the plugin it then gives another error: Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 19456 bytes) in /home/neworder/public_html/blog/wp-admin/includes/post.php on line 1158. That line has: $content .= '<p class="hide-if-no-js">' . esc_html__( 'Remove featured image' ) . '</p>'; } I tried to restore the blog file system from the old server and also to restore the database from the old server (twice), but it still gives me the same error. The blog itself seems to be working fine: http://blog.antinovaordemmundial.com/

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS: IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. That means you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS: Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS: In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project; if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • Process Improvement and the Data Professional

    - by BuckWoody
    Don’t be afraid of that title – I’m not talking about Six Sigma or anything super-formal here. In many organizations, there are more folks in other IT roles than in the Data Professional area. In other words, there are more developers, system administrators and so on than there are the “DBA” role. That means we often have more to do than the time we need to do it. And, oddly enough, the first thing that is sacrificed is process improvement – the little things we need to do to make the day go faster in the first place. Then we get even more behind, the work piles up and…well, you know all about that. Earlier I challenged you to find 10-30 minutes a day to study. Some folks wrote back and asked “where do I start”? Well, why not be super-efficient and combine that time with learning how to make yourself more efficient? Try out a new scripting language, learn a new tool that automates things or find out ways others have automated their systems. In general, find out what you’re doing and how, and then see if that can be improved. It’s kind of like doing a performance tuning gig on yourself! If you’re pressed for time, look for bite-sized articles (like the ones I’ve done here for PowerShell and SQL Server) that you can follow in a “serial” fashion. In a short time you’ll have a new set of knowledge you can use to make your day faster. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article
