Search Results

Search found 7957 results on 319 pages for 'production databases'.

Page 41/319

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation that I often see is that companies with large databases often have only one or two very large tables. They want to run a DBCC CHECKDB on the database to check everything except those couple of tables, due to time constraints. I posted a request about this on the Connect site some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't overly like this as a workaround, as clients often had a very large number of objects that they did want to check and only one or two that they didn't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects, and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud

    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this: a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally, I'll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends

    We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever: pressure to do more with less; to be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends: the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers (even engineered systems such as Exadata), our customers want to consolidate many applications into fewer, larger servers. There's a move to standardized services, even self-service.

    Consolidation

    Consolidation is not new; companies have tried various approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM, that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overheads are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database or a development or test database, what do we do? We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources, and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch.

    New Architecture & Benefits

    In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database. It's not watered down in any way. From the perspective of an application or an end user it hasn't changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level ("managing many as one") we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the "pluggability" capability gives us portability, and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.

    Use Cases

    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones.
    - Development / Testing: individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    - Consolidation: disparate applications using fewer, more powerful servers
    - Software as a Service: deploying separate copies of identical applications to individual tenants
    - Database as a Service: typically self-service provisioning of databases on the private cloud
    - Application Distribution from ISV / Installation by Customer: eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…); just plug in a PDB!
    - High volume data distribution: literally via disk drives in envelopes distributed by truck! Distribution of things like GIS or MDM master databases
    - …various others!

    Benefits

    Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:

    - Minimize CapEx: more applications per server
    - Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
    - Maximize Agility: cloning for development and testing; portability through pluggability; scalability with RAC
    - Ease of Adoption: applications run unchanged; it's a pure deployment choice. Neither the database backend nor the application needs to be changed.

    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here.

    Follow me: I tweet @OraclePDB #OracleMultitenant

    Read the article

  • Is tcerl for Mnesia production ready? Are there any alternatives?

    - by Sanoj
    I would like to create a scalable web service using Mnesia as the database. However, Mnesia by default isn't scalable for persistent storage, since it uses Dets (which has a 2GB limit) as the backend. I have seen discussions about extending Mnesia with MnesiaEx and using tcerl as the backend. It sounds good and has shown good performance. However, I have seen in a talk about Tokyo Cabinet and CouchDB with Mnesia that there are some issues: issues with durability, issues with memory leaks, and issues with crashes. Is tcerl + Mnesia really production ready? And are there any other alternatives? How do companies overcome these issues if they use Mnesia in bigger systems? Is there a working solution with Mnesia and Tokyo Tyrant that works better?

    Read the article

  • Is it easy to switch from relational to non-relational databases with Rails?

    - by Tam
    Good day, I have been using Rails/MySQL for a while now, but I have been hearing about Cassandra, MongoDB, CouchDB and other document-store/non-relational databases. I'm planning to explore them later, as they might be a better alternative for scalability. I'm planning to start an application soon. Will it make a difference to the Rails design if I move from a relational to a non-relational database? I know Rails migrations are database-agnostic, but I wasn't sure whether moving to a non-relational database would make a difference to the design or not.

    Read the article

  • w3wp.exe in ASP.NET production app is using 100% CPU. How to find the problem?

    - by Tucker
    Hi, we have an ASP.NET app in production where w3wp.exe is taking 100% CPU (4 cores, 4 threads at 25% each) and the CPU load never goes down until we recycle the application pool (the app is alone in the application pool). Our error log has nothing; there are no exceptions being emitted (or at least we don't catch them), so we suspect a code problem (infinite loop / deadlock). The problem only arises after some hours running with high load (several thousand users). Is there any way to profile one of the EXISTING threads that is causing the CPU load? After taking a look at JetBrains' dotTrace Profiler, it seems this isn't possible due to limitations of the Profiling API, and, man, we haven't managed to reproduce the problem in our testing environment. The app uses SQL Server 2005, LINQ2SQL and the System.Transactions API. Any suggestions for finding the problem?
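
    Since the app talks to SQL Server 2005, one cheap first check while the CPU is pegged is whether the app's requests are stuck or blocked on the database side. A minimal T-SQL sketch against the standard DMVs; this only rules SQL-side blocking in or out, it won't profile the w3wp threads themselves:

        -- Look for long-running or blocked requests while the CPU is at 100%.
        SELECT r.session_id,
               r.status,
               r.command,
               r.cpu_time,               -- CPU consumed by the request, in ms
               r.total_elapsed_time,     -- how long it has been running, in ms
               r.wait_type,
               r.blocking_session_id     -- non-zero = blocked by that session
        FROM sys.dm_exec_requests AS r
        WHERE r.session_id > 50;         -- skip system sessions

    If nothing shows up here, the loop is inside the web process itself, and capturing a memory dump of w3wp.exe during the spike for offline analysis is a common next step.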

    Read the article

  • What happens when auto_increment on an integer column reaches the max value in databases?

    - by Sanoj
    I am implementing a database application and I will use both JavaDB and MySQL as databases. I have an ID column in my tables that has integer as its type, and I use the database's auto_increment function for the value. But what happens when I get more than 2 (or 4) billion posts and integer is not enough? Does the integer overflow and continue, or is an exception thrown that I can handle? Yes, I could change to long as the datatype, but how do I check when that is needed? And I think there is a problem with the last_insert_id() functions if I use long as the datatype for the ID column.
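
    For MySQL at least, the behavior is predictable: the counter stops at the column type's maximum, so the next insert tries to reuse that value and fails with a duplicate-key error rather than silently wrapping. A hedged sketch using a hypothetical posts table (JavaDB/Derby likewise raises its own exception when an identity column overflows):

        CREATE TABLE posts (
            id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            body TEXT
        );
        -- Jump the counter to the signed-INT ceiling to simulate ~2 billion rows:
        ALTER TABLE posts AUTO_INCREMENT = 2147483647;
        INSERT INTO posts (body) VALUES ('the last id that fits');  -- id = 2147483647
        INSERT INTO posts (body) VALUES ('one too many');
        -- The second insert fails with MySQL error 1062:
        --   Duplicate entry '2147483647' for key 'PRIMARY'
        -- Widening the column before you get close avoids the problem:
        ALTER TABLE posts MODIFY id BIGINT NOT NULL AUTO_INCREMENT;

    As for the last concern: MySQL's LAST_INSERT_ID() already returns a 64-bit value, so moving the column to BIGINT shouldn't break ID retrieval on that side.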

    Read the article

  • Preventing Rails from opening production.log when deployed with jruby-rack into Tomcat

    - by gregor
    I have to deploy a Ruby on Rails application to a Tomcat application server using jruby-rack. For security reasons, my customer has the webapps directory mounted read-only, and he won't change this. The problem that arises is that the Rails logger wants to open WEB-INF/log/production.log in write mode. The logger writes nothing to this file, because log4j and friends are configured and working, but Rails complains that it cannot open the file. Are there any suggestions for preventing Rails from opening this file?

    Read the article

  • Are there any small scale, durable document/object databases?

    - by Joe Doyle
    I have a few .NET projects that would benefit from using a document/object database as opposed to a relational one. I think that db4o would be a good choice, but we're not sure how much it costs. I'd love to use MongoDB, but its design isn't for small-scale, single-server applications. Are there other options out there for small-scale applications that I just haven't run across? EDIT: So is this a space that doesn't have a good solution yet? Are there no small-scale, durable document databases? Would my best choice be to use MongoDB with the --syncdelay option set to 1?

    Read the article

  • Why is Rails trying to rerun migrations on production?

    - by ryeguy
    On my server when deploying the app for the first time, I ran rake db:setup which loads my entire migration history from schema.rb. Now I have more stuff I want to add, but when I run rake db:migrate on my server I realize it's trying to run my very first migration, which is failing since the table obviously exists. Examining the schema_migrations table on my production server, I realize it only has one entry in it, which is the migration that was the most current at the time of the initial deployment. Isn't it supposed to have the entire migration history in it? If so, what caused this? If not, why is it doing this?

    Read the article

  • How to think in data stores instead of databases?

    - by Jim
    As an example, Google App Engine uses data stores, not a database, to store data. Does anybody have any tips for using data stores instead of databases? It seems I've trained my mind to think 100% in object relationships that map directly to table structures, and now it's hard to see anything differently. I can understand some of the benefits of data stores (e.g. performance and the ability to distribute data), but some good database functionality is sacrificed (e.g. joins). Does anybody who has worked with data stores like BigTable have any good advice for working with them?

    Read the article

  • Should I be concerned about performance with connections to multiple databases?

    - by Josh Ryan
    I have a PHP app that is running on an Apache server with MySQL databases. Based on the subdomain that users access, I am connecting them to a database (sub1.domain.com connects to database_sub1 and sub2.domain.com connects to database_sub2). Right now there are 10 subdomain-database combos, but that number could potentially grow to well over 100. So, is this a bad thing? Considering my situation, is mysql_pconnect a better way to go? Thanks, and please let me know if more info would be helpful. Josh

    Read the article

  • Are there Python ORMs out there that support multiple independent databases concurrently in use?

    - by sdt
    I'm writing an application in Python where I wish to use sqlite as the backing store for documents edited by the app, with documents generally living in memory, but being saved to disk-based databases when the application saves. Ideally I'd like to use something like an ORM to make access to the data from my Python application code simple. Unfortunately it seems like the majority of Python ORMs, including SQLAlchemy, SQLObject, Django, and Storm, associate the database connection (or engine or whatever) with the classes representing table data, rather than instances of those classes. This restricts these ORMs to working with a single database connection across all instances. Since I'd like to support having multiple documents open simultaneously, this isn't going to work for me. Are there any ORMs out there that support this usage model in Python? Bazaar seems to support this, but it's quite out of date, and at first glance appears to have some other shortcomings. Thanks for any suggestions!

    Read the article

  • How to deal with databases for websites written in Java, more specifically Wicket?

    - by John
    Hi there. I'm new to website development using Java, but I've gotten started with Wicket and made a little website. I'd like to expand on what I've already made (a website with a form, labels and links) and implement database connectivity. I've looked at a couple of examples, for example Mystic Paste, and I see that they're using Hibernate and Spring. I've never touched Hibernate or Spring before, and to be honest the heavy use of annotations scares me a little bit, as I haven't really made use of them before, with the exception of suppressing warnings and overriding. At this point I have one Connection object which I set up in the WebApplication class upon initialization. I then retrieve this connection object whenever I need to perform queries. I don't know if this is a bad approach or not for a production web application. All help is greatly appreciated.

    Read the article

  • Does anyone have a backup strategy for SQL Azure databases?

    - by Pete
    I'm using SQL Azure on a project and it works great. The problem is that the usual backup features do not exist. I have exported the database a couple of times using SQLAzureMW ( http://sqlazuremw.codeplex.com/ ) but this tool is now choking trying to download the database data with bcp. In any case, it's not as nice a solution as SQL Server backups. Is anyone aware of a commercial or open source tool, or other technique, for making reliable backups of SQL Azure databases? This is really a showstopper.
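
    One technique worth trying while the tooling catches up: SQL Azure's server-side database copy, which produces a transactionally consistent copy you can keep as a point-in-time backup (and export at leisure). A hedged sketch; the database names here are examples:

        -- Run while connected to the master database of the SQL Azure server.
        CREATE DATABASE MyDb_backup_20100601 AS COPY OF MyDb;
        -- The copy runs asynchronously; check on its progress from master:
        SELECT database_id, start_date, percent_complete, error_code
        FROM sys.dm_database_copies;

    It isn't a real .bak backup, but it is a reliable snapshot mechanism that lives entirely inside the service, with no bcp download involved.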

    Read the article

  • How should my application keep clients in sync with schema changes to HTML5 databases?

    - by Chad Johnson
    I want to incorporate HTML5 database storage into my web application to make it accessible offline. I've done lots of development in server-side environments with databases, and we all know that database schema additions and modifications are often necessary. I am wondering what should happen if my application uses an offline database schema and that schema changes. How do I prevent the application from breaking on the client side? How do I ensure the database is always up to date on the client end? Anyone have any solutions?

    Read the article

  • g++ - is using the "-g" flag for production builds a good idea?

    - by Grigory
    Just to give some context, I'm talking about compiling C++ code with g++ here. I can see how including the -g flag for production builds would be convenient for maintenance: the program will be much easier to debug if it crashes unexpectedly. My question here is, does including the -g flag affect the output executable in any other way than increasing its size? Can it somehow make the code slower (e.g. by turning off certain optimizations)? From what I understand, it shouldn't (the documentation only mentions the inclusion of debug symbols), but I'm not sure.

    Read the article

  • How do I move Zend Framework From Development to Production?

    - by dirtylogic
    I'm just wondering if anyone else has had problems moving the Zend Framework from development to production. I changed my docroot to the public folder and updated my library path, but it's still not working out for me. The IndexController is working just fine, but my ServiceController is giving me an internal server error.

    ServiceController:

        <?php
        class ServiceController extends Zend_Controller_Action
        {
            public function amfAction()
            {
                // Load the model class and expose it over AMF.
                require_once APPLICATION_PATH . '/models/MyClass.php';
                $srv = new Zend_Amf_Server();
                $srv->setClass('Model_MyClass', 'MyClass');
                // Return the raw AMF response and stop normal view rendering.
                echo $srv->handle();
                exit;
            }
        }

    Read the article

  • How to make a database service in Netbeans 6.5 to connect to SQLite databases?

    - by farzad
    I use the NetBeans IDE (6.5) and I have a SQLite 2.x database. I installed a JDBC SQLite driver from zentus.com and added a new driver in the NetBeans Services panel. Then I tried to connect to my database file from Services > Databases using this URL for my database:

        jdbc:sqlite:/home/farzad/netbeans/myproject/mydb.sqlite

    but it fails to connect. I get this exception:

        org.netbeans.modules.db.dataview.meta.DBException: Unable to Connect to database : DatabaseConnection[name='jdbc:sqlite://home/farzad/netbeans/myproject/mydb.sqlite [ on session]']
            at org.netbeans.modules.db.dataview.output.SQLExecutionHelper.initialDataLoad(SQLExecutionHelper.java:103)
            at org.netbeans.modules.db.dataview.output.DataView.create(DataView.java:101)
            at org.netbeans.modules.db.dataview.api.DataView.create(DataView.java:71)
            at org.netbeans.modules.db.sql.execute.SQLExecuteHelper.execute(SQLExecuteHelper.java:105)
            at org.netbeans.modules.db.sql.loader.SQLEditorSupport$SQLExecutor.run(SQLEditorSupport.java:480)
            at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
        [catch] at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)

    What should I do? :(

    Read the article

  • How do I list all tables in all databases in SQL Server in a single result set?

    - by msorens
    I am looking for T-SQL code to list all tables in all databases in SQL Server (at least in SS2005 and SS2008; it would be nice to also apply to SS2000). The catch, however, is that I would like a single result set. This precludes the otherwise excellent answer from Pinal Dave:

        sp_msforeachdb 'select "?" AS db, * from [?].sys.tables'

    The above stored proc generates one result set per database, which is fine if you are in an IDE like SSMS that can display multiple result sets. However, I want a single result set because I want a query that is essentially a "find" tool: if I add a clause like WHERE tablename LIKE '%accounts' then it would tell me where to find my BillAccounts, ClientAccounts, and VendorAccounts tables regardless of which database they reside in.
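
    For what it's worth, a common workaround (sketched here, not hardened against every edge case) is to keep sp_msforeachdb but funnel each per-database result into a temp table, then select from that once, which yields the single result set and supports the "find" filter:

        -- Collect every database's tables into one result set.
        CREATE TABLE #alltables (dbname SYSNAME, tablename SYSNAME);
        EXEC sp_msforeachdb
            'INSERT INTO #alltables SELECT ''?'', name FROM [?].sys.tables';
        SELECT dbname, tablename
        FROM #alltables
        WHERE tablename LIKE '%accounts'    -- e.g. BillAccounts, ClientAccounts...
        ORDER BY dbname, tablename;
        DROP TABLE #alltables;

    On SS2000 the inner query would read from dbo.sysobjects WHERE xtype = 'U' instead, since sys.tables only exists from SS2005 on.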

    Read the article

  • LINQ to SQL for tables across databases. Or View?

    - by BritishDeveloper
    I have a Message table and a User table. Both are in separate databases. There is a userID in the Message table that is used to join to the User table to find things like userName. How can I create this in LINQ to SQL? I can't seem to do a cross database join. Should I create a View in the database and use that instead? Will that work? What will happen to CRUD against it? E.g. if I delete a message - surely it won't delete the user? I'd imagine it would throw an error. What to do? I can't move the tables into the same database!
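
    A view is indeed the usual workaround, since SQL Server allows three-part names across databases on the same server. A hedged sketch (the database, table and column names are illustrative guesses):

        -- Created in the database that holds Message; UsersDb is the other database.
        CREATE VIEW dbo.MessageWithUser
        AS
        SELECT m.MessageID,
               m.Body,
               m.UserID,
               u.UserName
        FROM dbo.Message AS m
        INNER JOIN UsersDb.dbo.[User] AS u
            ON u.UserID = m.UserID;

    LINQ to SQL can map a view like a read-only table for querying; deletes would still go through the Message entity itself, so removing a message never touches the User row.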

    Read the article

  • How would I implement separate databases for reading and writing operations?

    - by Matt
    I am interested in implementing an architecture that has two databases, one for read operations and the other for writes. I have never implemented something like this, and have always built single-database, highly normalised systems, so I am not quite sure where to begin. I have a few parts to this question:

    1. What would be a good resource to find out more about this architecture?
    2. Is it just a question of replicating between two identical schemas, or would your schemas differ depending on the operations? Would normalisation vary too?
    3. How do you ensure that data written to one database is immediately available for reading from the second?

    Any further help, tips, or resources would be appreciated. Thanks.

    Read the article

  • Can we use union of two sqlite databases with same tables for Core Data?

    - by Tofrizer
    Hi All, I have an iPhone Core Data app with a pre-populated sqlite "baseline" database. Can I add a second, smaller sqlite database with the same tables as my pre-populated "baseline" database but with additional / complementary data, such that Core Data will happily union the data from both databases and, ultimately, present it to me as if it were all a single data source? The idea I had is:

    1) The "baseline" database never changes.
    2) I can download the smaller "complementary" sqlite database for additional data as and when I need to (I'm assuming downloading a sqlite database is allowed; please comment if otherwise).
    3) Core Data is then able to union data from 1 & 2. I can then reference this unified data through my defined Core Data managed object model.

    Hope this makes sense. Thanks in advance.

    Read the article

  • Which relational databases exist with a public API for a high level language?

    - by Jens Schauder
    We typically interface with an RDBMS through SQL. That is, we create a SQL string and send it to the server through JDBC or ODBC or something similar. Are there any RDBMSs that allow direct interfacing with the database engine through some API in Java, C#, C or similar? I would expect an API that allows constructs like this (in some arbitrary pseudo code):

        Iterator iter = engine.getIndex("myIndex").getReferencesForValue("23");
        for (Reference ref : iter) {
            Row row = engine.getTable("mytable").getRow(ref);
        }

    I guess something like this is hidden somewhere in (and available from) open source databases, but I am looking for something that is officially supported as a public API, so that one at least finds a note in the release notes when it changes. In order to make this a question that actually has a 'best' answer: I prefer languages in the order given above, and I will prefer mature APIs over prototypes and research work, although these are welcome as well.

    Read the article

  • Mercurial - How do you export your repo's source to a production site?

    - by orokusaki
    I've tried using archive in Tortoise HG by opening my repo change log. This doesn't seem to be anything like SVN's export command, where I can simply export a remote repository to the current directory. I use this to get a clean copy of my source for production (without notes and repository data). How can I do something like this in HG? Or, should I just use clone and deal with the repo related data manually? BTW, I need to do this all via command line since I'm not going to be using Tortoise HG on my Linux server. Any help is much appreciated.
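
    For the record, hg's counterpart to svn export is the hg archive command, which writes a clean snapshot of a revision without the .hg repository data. A minimal command-line sketch (the revision and paths are examples):

        # Snapshot the tip revision into a bare directory:
        hg archive -r tip /var/www/mysite
        # Or produce a tarball to ship to the server:
        hg archive -r tip -t tgz mysite.tar.gz

    Note that hg archive does drop a small .hg_archival.txt file into the output, recording which revision it came from; delete it if even that is too much repository data for production.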

    Read the article

  • PHP: How to begin testing large, existing codebase, and test for regression on production site?

    - by anonymous coward
    I'm in charge of at least one large body of existing PHP code that desperately needs tests, and I also need some method of checking the production site for errors. I've been working with PHP for many years, but am unfortunately new to testing. (Sorry!) While writing tests for code that has predictable outcomes seems easy enough, I'm having trouble wrapping my head around just how I can test the live site to ensure proper output. I know that in a test environment I could set up the database in a known state... but are there proper methods or techniques for testing a live site? Where should I begin? [I am aware of PHPUnit and SimpleTest, but haven't chosen one over the other yet]

    Read the article
