Search Results

Search found 40059 results on 1603 pages for 'database management'.


  • Cache the result of a MySQLdb database query in memory

    - by ensnare
    Our application fetches the correct database server from a pool of database servers. So each query is really two queries: 1) fetch the correct DB server, 2) execute the query. We do this so we can take DB servers online and offline as necessary, as well as for load balancing. But the result of the first query seems like it could be cached in memory, so that it only actually queries the database every 5 or 10 minutes or so. What's the best way to do this? Thanks.
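
    A minimal sketch of one common approach: wrap the server lookup in a small time-based (TTL) cache so the pool is only re-queried every few minutes. Here lookup_db_server is a hypothetical stand-in for the real lookup, and a production version would need a lock if queries run in multiple threads.

    import time

    class TTLCache:
        """Re-run an expensive lookup only after ttl_seconds have passed."""
        def __init__(self, fetch, ttl_seconds=300):
            self.fetch = fetch            # callable that does the real query
            self.ttl = ttl_seconds
            self.value = None
            self.expires_at = 0.0         # monotonic deadline for the cached value

        def get(self):
            if time.monotonic() >= self.expires_at:
                self.value = self.fetch()                   # hits the database
                self.expires_at = time.monotonic() + self.ttl
            return self.value

    def lookup_db_server():
        # Hypothetical stand-in for the real "fetch the correct DB server" query.
        return ("db3.example.com", 3306)

    server_cache = TTLCache(lookup_db_server, ttl_seconds=300)
    host, port = server_cache.get()   # queries the pool at most every 5 minutes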

    Read the article

  • Database structure - is MySQL the right choice?

    - by Industrial
    Hi everyone, We are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products), and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on the various references and sources available, we have made up lists of the pros and cons of all major and well-known database patterns that solve this. After comparing them, we have come up with two final alternatives:

    EAV (entity-attribute-value model):
    Pros: the database is used for all sorting.
    Cons: all related queries will include a number of joins between multiple tables in order to complete the collection of data.

    SLOB (serialized LOB, also known as Facade?):
    Pros: very flexible; keeps the number of necessary joins low compared to an EAV design pattern; easy to update/add/remove data for each product.
    Cons: all sorting will be done by the application instead of the database, which will use a lot of performance (memory?) when big datasets are processed by a large number of users.

    Our main questions: Which pattern/structure would you use, or maybe even a different solution? Are there better databases than MySQL available nowadays to accomplish what we want? Thanks a lot! Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
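
    To make the trade-off concrete, here is a small sketch of where the sorting work lands in each pattern (table and column names are illustrative, not from the question): in the database for EAV, in the application for the serialized-LOB design.

    import json

    # SLOB: each product row carries its options as one serialized blob,
    # so the application must deserialize and sort.
    rows = [(1, json.dumps({"color": "red",  "weight_kg": 2.4})),
            (2, json.dumps({"color": "blue", "weight_kg": 1.1}))]
    products = [(pid, json.loads(blob)) for pid, blob in rows]
    products.sort(key=lambda p: p[1]["weight_kg"])   # app-side sort, app-side memory

    # EAV: attributes stay relational, so MySQL can do the sorting,
    # but every attribute read costs a join (illustrative schema).
    EAV_SORT = """
        SELECT p.id
        FROM product p
        JOIN attribute_value v ON v.product_id = p.id
        JOIN attribute a       ON a.id = v.attribute_id
        WHERE a.name = 'weight_kg'
        ORDER BY CAST(v.value AS DECIMAL(10,2));
    """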

    Read the article

  • Small standalone SQL database similar to Access in the old days (i.e. file database)

    - by Ian
    Hi, I am looking for an easy-to-use and easy-to-deploy SQL-type database I can ship with a desktop application. This will be a small application users can download from my website. In the VB6 days, Access was the common database for small desktop apps; what are my options these days? Looking at SQL CE, it seems to have quite a few limitations, such as no COUNT(DISTINCT), etc. SQL Express needs to be installed and running as a service (could I include the SQL Express deployment in my deployment so the user doesn't even know it's been installed? I assume size would then be an issue). SQL 2005/2008 is not an option due to size and licensing restrictions. I would like to use C#, WPF and Entity Framework. What would seem to be the best options based on your knowledge and experience? Thanks
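
    SQLite is a frequently suggested answer for exactly this niche; from C# it is usable via wrappers such as System.Data.SQLite. A quick sketch using Python's built-in sqlite3 module, purely to illustrate the file-database idea: the whole database is one file shipped with the app, there is no service to install, and COUNT(DISTINCT) works.

    import sqlite3

    # One file on disk is the whole database -- nothing to install or run
    # as a service, which is what made Access attractive in the VB6 days.
    conn = sqlite3.connect("app_data.db")
    conn.execute("CREATE TABLE IF NOT EXISTS orders (customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("alice", 10.0), ("alice", 5.0), ("bob", 7.5)])

    # COUNT(DISTINCT ...) is available here -- one of the limitations
    # mentioned above for SQL CE.
    (n,) = conn.execute("SELECT COUNT(DISTINCT customer) FROM orders").fetchone()
    print(n)  # -> 2

    conn.commit()
    conn.close()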

    Read the article

  • Setting SQL database Permissions for Visual Studio Data Config Wizard

    - by Raven Dreamer
    Hello, Stack Overflow! I'm new to SQL. I have created a new database in SQL Server Management Studio, and am now trying to attach it to a Windows Forms project in Visual Studio via the built-in Data Configuration Wizard. Currently, whenever I try to attach the database file, I get a permissions error: "You don't have permission to open this file. Contact file owner or administrator to obtain permission." So, simple question: how do I modify the permissions of my database to allow this?

    Read the article

  • How to Design a SaaS Database

    - by Josh Curren
    I have a web app that I built for a trucking company that I would like to offer as SaaS. What is the best way to design the database? Should I create a new database for each company? Should I use one database with tables that have a prefix of the company name? Or should I use one database with a single set of tables and just add a company id field to the tables? Or is there some other way to do it?
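
    For comparison, here is a minimal sketch of the third option: one shared schema where every table carries a company_id and every query is scoped by it (schema and names are illustrative; SQLite stands in for the real server). Its upside is a single schema to migrate; its risk is that one missed WHERE clause leaks another company's data, which is why many SaaS designs start here and move busy tenants to their own databases later.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE company  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE shipment (id INTEGER PRIMARY KEY,
                               company_id INTEGER NOT NULL REFERENCES company(id),
                               origin TEXT, destination TEXT);
        -- Index the tenant column: nearly every query filters on it.
        CREATE INDEX idx_shipment_company ON shipment(company_id);
    """)

    def shipments_for(conn, company_id):
        # Every query must be scoped by tenant; this is the discipline
        # the shared-schema design demands.
        return conn.execute(
            "SELECT origin, destination FROM shipment WHERE company_id = ?",
            (company_id,)).fetchall()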

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest   Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. Intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases.  And what better way to cover these two topics than to talk about the history of databases. If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories.  But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t decide they suddenly wanted to read novels; they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord.  And that is how writing and the database began.  You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases. If you prefer, you can consider the advent of written language to be the first database.  Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive.  A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria.  This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents.  Further archaeological digs hope to uncover the palace library, and thus an even larger database. Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt.  It was created by Ptolemy I, and existed from 300 BC through 30 AD; Julius Caesar effectively erased the hard drives when he accidentally set fire to it.  As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work were lost in a single instant. Databases existed in very similar conditions up until recently.  Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press.  Someday the databases we rely on so much today will become another chapter in the history of record keeping.  Who knows what the databases of tomorrow will look like! Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Should we have a database-independent SQL-like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database-independent and converts to database-specific SQL queries. Once things start getting complicated, it is preferred to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use any database-specific feature, why should one be tied to the database? For instance: we have a query with multiple joins and we decided to write a raw SQL query. Now, that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some generic SQL language which can translate to any database's SQL query. Even Django's ORM could be built over it. So, if you go outside the ORM but are not database-specific, you can still remain database-independent. I asked the same question of Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and use its whole power, with which I agree. But my point was not that we should be database-independent. My point is that we should be database-independent until we use a database-specific feature. Please explain why there should (or should not) be a "fake" SQL layer over the actual SQL.
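
    For what it's worth, such a layer already exists outside Django: SQLAlchemy Core is a database-independent SQL expression language that compiles one query object into each backend's dialect. A small sketch (SQLAlchemy 1.4+ style; the table definition is illustrative):

    from sqlalchemy import Column, Integer, MetaData, String, Table, select
    from sqlalchemy.dialects import mysql, postgresql

    metadata = MetaData()
    users = Table("users", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("name", String(50)))

    # One expression, written once, independent of the backend...
    stmt = select(users.c.name).where(users.c.id > 10)

    # ...compiled into whichever dialect the engine actually uses:
    print(stmt.compile(dialect=postgresql.dialect()))
    print(stmt.compile(dialect=mysql.dialect()))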

    Read the article

  • SQL SERVER – Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video

    - by pinaldave
    Copying data from one table to another table is one of the most requested questions on forums, Facebook and Twitter. The question has come in many formats, and there are places where I have seen developers use a cursor instead of this direct method. A few years ago I wrote a similar article - SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE. The article has been very popular and I have received many interesting and constructive comments. However, two specific comments kept ending up in my mailbox: 1) the SQL Server AdventureWorks sample database does not have the table I used in the example, and 2) is there a video tutorial of the same example? After careful thought I decided to build a new set of scripts for the example, very similar to the old one, along with a video tutorial. There was no better place than our SQL in Sixty Seconds series to cover this interesting small concept. Let me know what you think of this video. Here is the updated script.

    -- Method 1 : INSERT INTO SELECT
    USE AdventureWorks2012
    GO
    ----Create TestTable
    CREATE TABLE TestTable (FirstName VARCHAR(100), LastName VARCHAR(100))
    ----INSERT INTO TestTable using SELECT
    INSERT INTO TestTable (FirstName, LastName)
    SELECT FirstName, LastName
    FROM Person.Person
    WHERE EmailPromotion = 2
    ----Verify that Data in TestTable
    SELECT FirstName, LastName
    FROM TestTable
    ----Clean Up Database
    DROP TABLE TestTable
    GO

    -- Method 2 : SELECT INTO
    USE AdventureWorks2012
    GO
    ----Create new table and insert into table using SELECT INTO
    SELECT FirstName, LastName
    INTO TestTable
    FROM Person.Person
    WHERE EmailPromotion = 2
    ----Verify that Data in TestTable
    SELECT FirstName, LastName
    FROM TestTable
    ----Clean Up Database
    DROP TABLE TestTable
    GO

    Related Tips in SQL in Sixty Seconds:
    SQL SERVER – Insert Data From One Table to Another Table – INSERT INTO SELECT – SELECT INTO TABLE
    Powershell – Importing CSV File Into Database – Video
    SQL SERVER – 2005 – Export Data From SQL Server 2005 to Microsoft Excel Datasheet
    SQL SERVER – Import CSV File into Database Table Using SSIS
    SQL SERVER – Import CSV File Into SQL Server Using Bulk Insert – Load Comma Delimited File Into SQL Server
    SQL SERVER – 2005 – Generate Script with Data from Database – Database Publishing Wizard

    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database

    During my recent training, I was asked by a student if I know a place where he can download spatial files for all the countries around the world, as well as whether there is a way to upload shape files to a database. Here is a quick tutorial for it. VDS Technologies has all the spatial [...]

    Read the article

  • 11gR2 DB 11.2.0.1 Certified with E-Business Suite on Solaris 10 (x86-64)

    - by Steven Chan
    Oracle Database 11g Release 2 version 11.2.0.1 is now certified with Oracle E-Business Suite 11i (11.5.10.2) and Release 12 (12.0.4 or higher, 12.1.1 or higher) on Oracle Solaris on x86-64 (64-bit) running Solaris 10. This announcement includes:
    Oracle Database 11gR2 version 11.2.0.1
    Oracle Database 11gR2 version 11.2.0.1 Real Application Clusters (RAC)
    Transparent Data Encryption (TDE) Column Encryption with EBS 11i and R12
    Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    Export/Import Process for E-Business Suite 11i and R12 Database Instances
    Transparent Data Encryption (TDE) Tablespace Encryption

    Read the article

  • EPM 11.1.2 - R&A DATABASE CONNECTIONS DISAPPEAR FROM THE "DATABASE CONNECTION MANAGER"

    - by Powder
    When accessing the database connection panel through Reporting and Analysis, all previously entered database connections do not appear. This is due to a bug in the Windows SMB2 protocol, and to work around it you have to disable the protocol. On Windows 2008 the protocol is automatically enabled. This needs to be done on both the servers and the clients. Note that “server” is the server which hosts the RAF repository service and the RM1 folder, and “client” is the server which hosts the replicated Repository service that accesses repository files via the network, i.e. \\<server_host>\RM1.

    In order to disable SMB 2.0 on the server side, follow these steps:
    1. Run "regedit" on the Windows Server 2008 based computer.
    2. Expand and locate the subtree HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters.
    3. Add a new REG_DWORD key with the name "Smb2" (without quotation marks): value name Smb2, value type REG_DWORD, where 0 = disabled and 1 = enabled.
    4. Set the value to 0 to disable SMB 2.0, or set it to 1 to re-enable SMB 2.0.
    5. Reboot the server.

    To disable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the “client” systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
    sc config mrxsmb20 start= disabled
    Note there's an extra " " (space) after the "=" sign.

    To re-enable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the “client” systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/mrxsmb20/nsi
    sc config mrxsmb20 start= auto
    Again, note there's an extra " " (space) after the "=" sign.
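
    If the server-side registry change has to be applied to several machines, the same edit can be scripted. A sketch using Python's standard winreg module, assuming administrator rights; the reboot from step 5 is still required:

    import winreg

    # Create/overwrite the Smb2 value described in steps 2-4 above.
    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE,
        r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
        0, winreg.KEY_SET_VALUE)
    winreg.SetValueEx(key, "Smb2", 0, winreg.REG_DWORD, 0)  # 0 = disabled
    winreg.CloseKey(key)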

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multiple-level, many-to-many relationship, e.g.:

    object a
    object b
    object c
    object d

    object b has a relationship to object a
    object c has a relationship to objects b, a
    object d has a relationship to objects c, b, a

    Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:

    +----------+-----------+---------+
    | child_id | parent_id | type_id |
    +----------+-----------+---------+
    | b        | a         | 1       |
    | c        | b         | 2       |
    | c        | a         | 3       |
    | d        | c         | 4       |
    | d        | b         | 5       |
    | d        | a         | 6       |
    +----------+-----------+---------+

    where a, b, c and d would translate to their respective IDs in their respective tables. So, for ease of reporting all of a which exist on object d, one could query

    SELECT * FROM `syndicates` ... JOINS TO child and parent tables ... WHERE parent_id=a and type_id=6;

    rather than having a query with a join to each level up the chain.

    The Problem
    This table grows exponentially, and in a given year could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would take exponentially longer (though possibly not noticeably to the end user), as there would be multiple inserts/updates/scans on this table to keep it in sync. Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but it allows the greatest flexibility and the fastest read operations.

    Sidenote 1 - This structure allows me to add new levels to the tree easily.
    Sidenote 2 - The database querying for this database is done through an ORM framework.
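
    For what it's worth, this design is commonly known as a closure table, and the read side really is a single indexed lookup. A runnable sketch of the table from the question, with SQLite standing in for the real server:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE syndicates (
            child_id  TEXT NOT NULL,
            parent_id TEXT NOT NULL,
            type_id   INTEGER NOT NULL,
            PRIMARY KEY (child_id, parent_id)
        );
        INSERT INTO syndicates VALUES
            ('b','a',1), ('c','b',2), ('c','a',3),
            ('d','c',4), ('d','b',5), ('d','a',6);
    """)

    # Every descendant of 'a', any number of levels deep, in one scan --
    # no join per level up the chain.
    rows = conn.execute(
        "SELECT child_id FROM syndicates WHERE parent_id = ?", ("a",)).fetchall()
    print(rows)  # [('b',), ('c',), ('d',)]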

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle

    Rapid ascent. Breakneck speed. Lightning fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media’s rise into the enterprise mainstream has been unprecedented. Indeed, the big four social media powerhouses (Facebook, Google+, LinkedIn, and Twitter) have nearly 2 billion users between them. You may be asking (as you should, really), “That’s all well and good for the consumer, but for me at my company, what’s your point? Beyond the fact that I can check and post updates, that is.” Good question, kind sir.

    Impact of Social and Collaboration on Project Management
    I’ll dovetail this discussion to the project management realm, since that’s what I’m writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery - be it a new product introduction effort or a geographical expansion project - is a good thing. So where does this whole social thing fit, particularly since there is already a host of tools to help with traditional project execution? The fact is, companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey’s survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one, and over 40% say they are using social networks and blogs. We don’t hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects is a generation of worker expecting visually appealing, easy-to-use and easy-to-understand technology meshing hand-in-hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today’s project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact on the project team. The way that a project team functions has changed, with newer, social-oriented technologies making the process of information dissemination and team communications much more fluid. It’s clear that a shift is occurring where “social” is intersecting with project management.

    The Rise of Social Project Management
    We refer to the melding of project management and social networking as Social Project Management.
    Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes social project management "social" is an implicit awareness where distributed teams build connected links in ways that were previously restricted to teams that were co-located. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity: people doing work with people, for other people, with commitments to yet other people. The more people (larger projects), the more interpersonal the interactions, and the more social affects the project.

    The Epitome of Social - Fusion Project Portfolio Management
    If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This allows team members the opportunity to be informed, with greater participation, and to provide better information. The application’s visual appeal and highly graphical nature make it easy to navigate information. The project activity stream adds to the intuitive user experience. The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application: when a user executes a business process, they not only have the transactional application at their fingertips, but also have things like e-mail, SMS, text, instant messaging, chat – all providing a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, which will allow for more streamlined execution of key business processes. Fusion Project Portfolio Management’s design concept enables users to create project communities, establish discussion threads, and manage event calendars, as well as deliver project-based work spaces to organize communications within the context of a project – all within a secure business environment. We’d love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link.

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!

    General database Continuous Integration
    · What is Database Continuous Integration? (David Atkinson)
    · Continuous Integration for SQL Server Databases (Troy Hunt)
    · Installing NAnt to drive database continuous integration (David Atkinson)
    · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
    · How the "migrations" approach makes database continuous integration possible (David Atkinson)
    · Continuous Integration for the Database (Keith Bloom)

    Setting up Continuous Integration with Red Gate tools
    · Continuous integration for databases using Red Gate tools - A technical overview (white paper, Roger Hart and David Atkinson)
    · Continuous integration for databases using Red Gate SQL tools (product pages)
    · Database continuous integration step by step (David Atkinson)
    · Database Continuous Integration with Red Gate Tools (video, David Atkinson)
    · Database schema synchronisation with RedGate (Vincent Brouillet)
    · Database continuous integration and deployment with Red Gate tools (David Duffett)
    · Automated database releases with TeamCity and Red Gate (Troy Hunt)
    · How to build a database from source control (David Atkinson)
    · Continuous Integration Automated Database Update Process (Lance Lyons)

    Other
    · Evolutionary Database Design (Martin Fowler)
    · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage)
    · Recipes for Continuous Database Integration (book, Pramod Sadalage)
    · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
    · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
    · Continuous Database Integration (covers MySQL, Pearson Education)

    Technorati Tags: SQL Server, Continuous Integration

    Read the article

  • Oracle Database 11g Implementation Specialist - 14 to 16 March 2011

    - by Claudia Costa
    OPN Bootcamp - Oracle Software Specialization Course

    Dear Partner, Oracle's new partner program is based on the specialization of its partners. In the last fiscal year, many partners already began their specializations in the areas they are dedicated to and which are priorities for their business. To support the effort and dedication of many partners in getting their staff certified, the local alliances and channel team has launched a series of initiatives. Among them is the creation of this OPN Bootcamp, together with Oracle University, especially dedicated to training and preparation for the Implementation exams, which are mandatory to obtain the Oracle Database 11g specialization. This training course is intended to prepare partners for the Implementation exam, to be taken on March 29th during the OPN Satellite Event that will take place in Lisbon (other details about this event will be communicated shortly). Your presence at this preparation course in the days before the OPN Satellite event is essential so that your technical staff can take the exam on March 29th fully prepared and with the best possible chance of positive results. That way, on March 29th, they can obtain the much-desired certification, with exam costs 100% covered by Oracle. We count on your presence!

    Content: Oracle Database 11g: 2 Day DBA Release 2 + preparation for the exam 1Z0-154 Oracle Database 11g: Essentials
    Audience: Database Administrators, Technical Administrators, Technical Consultants, Support Engineers
    Prerequisites: knowledge of the Linux operating system
    Duration: 3 days + exam (1 day)
    Schedule: 9:00 / 18:00
    Dates: 14 to 16 March
    Location: Centro de Formação Oracle, Pessoas e Processos, Rua do Conde Redondo, 145 - 1º - Lisboa
    Access: Marquês de Pombal metro station
    Participation costs: €140 per person per day = €420 per person (3 days)* - this price includes the Implementation exam. *Final cost for the partner; it already includes funding from the Alliances and Channel team.
    Exam date and location: March 29th, at the Oracle University premises

    Registration is free. Places are limited. Reserve your place now: Email
    For more information about registration: Vítor Pereira, landline: 21 778 38 39, mobile: 933777099, fax: 21 778 38 40
    For other information, please contact: Claudia Costa / 21 423 50 27

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful.

    About SQL Server disaster recovery
    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all the actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs, and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes most probably require changing and tweaking the recovery plan.

    What is a good SQL Server disaster recovery plan?
    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.

    Why are transaction logs important?
    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to its state before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore-to-a-point-in-time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup and then restores transaction log backups, one after another, up to the recovery point.
    During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?
    SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match them to values in the MDF file. When you finally read the transactions executed, you have to create a script for them.

    How to easily repair a SQL database?
    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the transactions it reads. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all the transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add. To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
    Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without being shown in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command-line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database at the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered or restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Identity Management as a Controls Infrastructure

    - by Darin Pendergraft
    Identity systems are indispensable to managing online resources, and they are becoming increasingly complex as businesses adapt their current infrastructures to support a broad user population across a wide range of devices. Adding point products to solve problems addresses the short-term need, but complicates the longer-term management outlook. Download the latest whitepaper HERE to see how Oracle is taking a platform approach to building a scalable and secure controls infrastructure that enables businesses to engage customers and gives employees secure access to corporate resources from anywhere.

    Read the article

  • Representation of data in application versus database

    - by user1815201
    I'm going to make an application that will be given data to put in a database. The data will for the most part be the same, but the way it is formatted will vary a lot (it could be anything from text files to .xls to .doc). I'm not a very experienced developer, but I can see some potential issues and I want to minimize them. First off, I have decided to use the DAO pattern, so that I can easily support new file formats or files suddenly formatted in different ways. What I really wonder about, though, is how I should manage the data itself within my application. I'm thinking that the database DAO should have models representing each table of the database, with the same relations between them, to make the uploading process easy. But should the filesystem DAOs have to use the same models? I can imagine that when the database changes, the change will suddenly propagate throughout the entire system, all DAOs and models alike. And that is obviously a bad thing. I'm a little bit tired and out of time; I will update with whatever questions you have. Thanks!
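
    One way to contain that propagation (a sketch of the idea, not code from the question) is to have every filesystem DAO translate into a single canonical model, so that only the database DAO knows the table layout; a database change then touches the database DAO alone:

    import csv
    from dataclasses import dataclass
    from typing import List, Protocol

    @dataclass
    class Record:
        """Canonical model: every source format is translated into this
        shape, so the rest of the system never sees .xls/.doc details."""
        name: str
        amount: float

    class SourceDAO(Protocol):
        def read(self, path: str) -> List[Record]: ...

    class CsvDAO:
        def read(self, path: str) -> List[Record]:
            with open(path, newline="") as f:
                return [Record(row["name"], float(row["amount"]))
                        for row in csv.DictReader(f)]

    # An XlsDAO or DocDAO would differ only in parsing; the database DAO
    # consumes the same Record objects either way.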

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using Linq, and my methodology is to build model classes to represent each table; each table that needs inserting or updating gets a Save() method (which does either an InsertOnSubmit() or a SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class, e.g.

    public class CustomerCollection : CoreCollection<Customer> { }

    Recently, I was working on an application where end-users were experiencing slowness, and where each of the objects needed to be saved to the database if it met a certain criterion. My Save() method was slow, presumably because I was making all kinds of round trips to the server and calling DataContext.SubmitChanges() after each atomic save. So, the code might have looked something like this:

    foreach(Customer c in customerCollection)
    {
        if(c.ShouldSave())
        {
            c.Save();
        }
    }

    I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string holds all the data representing the records I was working with - it might look something like this:

    CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St

    So, SQL Server parses the string, performs the logic to determine the appropriateness of the save, and then inserts, updates, or ignores. With C#/Linq doing this work, I could save 5-10 records/s. When SQL does it, I get 100 records/s, so there is no denying the stored proc is more efficient; however, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?
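
    A middle ground between one SubmitChanges() per object and the delimited-string stored procedure is batching: keep the filtering in application code, but send all qualifying rows in one statement and one commit. A sketch of the idea in Python's DB-API, with SQLite standing in for SQL Server and should_save as a stand-in for ShouldSave():

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, address TEXT)")

    def should_save(cust):
        return True   # stand-in for the ShouldSave() business rule

    customers = [(34567, "23 3rd St"), (23456, "123 4th St")]

    # One upsert statement and one commit for the whole batch, instead of
    # a round trip per object.
    conn.executemany(
        "INSERT INTO customer (id, address) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET address = excluded.address",
        [c for c in customers if should_save(c)])
    conn.commit()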

    Read the article

  • Release/Change management - best approach

    - by Bob Rivers
    I asked this question a year ago on Stack Overflow and never got a good answer. Since Programmers seems to be a better place to ask it, I'll give it a try... What is the better way to work with release management? More specifically, what would be the best way to release packages? For example, assuming that you have a relatively stable system, a good quality assurance process (QA), etc., how do you prefer to release new versions? Let's assume that we are talking about a mid-to-large "centralized" web system (no clients), developed in-house. This system can be considered "vital" to a corporation's operations. I have a tendency to prefer releasing packages at regular intervals, not greater than 1 to 3 months apart. During this period, I will include fixes and improvements in the package and deploy to the production environment only once. But I've seen some people who prefer to put small changes into production with greater frequency. The claim of these people is that by doing so, it is easier to identify bugs that have gone through the QA process: in a package with 10 changes versus another with only 1, it is much easier to know what caused the problem in the package with just one change... What is your opinion?

    Read the article

  • apt-get failed install of libg15, all package management is failing

    - by Stifle
    I was trying to get my Logitech G510 keyboard's back-lights working, so I went into the Synaptic Package Manager and marked LibG15, G15daemon, and all the other associated packages. Synaptic reported a failed install. Now all package management is failing due to libg15 being "halfway installed." Some commands I have tried to fix the problem follow...

    root@bt:~# apt-get upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

    root@bt:~# sudo apt-get autoremove
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

    root@bt:~# sudo apt-get -f purge libg15
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

    root@bt:~# sudo dpkg --configure -a
    dpkg: dependency problems prevent configuration of g15macro:
     g15macro depends on g15daemon; however:
      Package g15daemon is not configured yet.
    dpkg: error processing g15macro (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of g15stats:
     g15stats depends on g15daemon; however:
      Package g15daemon is not configured yet.
    dpkg: error processing g15stats (--configure):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     g15macro
     g15stats

    I'm not too computer savvy. Any help would be much appreciated!!! Note: I'm using Ubuntu 10.04 under Backtrack 5 R3.

    Read the article

  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM?  Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage).  There is no process in Manage named Stakeholder Management, but this “touch point” can be found in a variety of other processes, including Bid Transition (BT), Communication Management (CMM) and Organizational Change Management (OCHM).
    • Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
    • This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
    • Stakeholder management should be executed during the Execution and Control phase.  For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
    • The broader topic of stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.
    Check it out and let me know your thoughts!

    Read the article

  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In the software development area, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros: Early integration helps team members avoid painful merges. It encourages better coding habits, because everyone makes sure that they don't break co-workers' code every day. Both developers and architects (code reviewers) may detect bad design decisions or just wrong development directions in real time, preventing useless work. Actually, I'm talking about getting the latest version of the code base and checking your own code in to source control on a daily basis. When you start your coding day (i.e. you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, about an hour before you leave work and go home, your last action is checking your code in to source control and making sure that your day's work doesn't break the project's build process. Rather than updating and checking in your code once you have finished an entire task, I believe the best approach is fixing small and flexible personal milestones and checking in the code once you finish one of these. I really believe that this coding approach fits better with the agile project management concept. Do you know of any document, blog post, wiki, or article you can suggest that is in sync with my opinion? And do you find any problems with working this way? Thank you in advance.

    Read the article
