Search Results

Search found 30375 results on 1215 pages for 'database smell'.


  • What are some good tips for a developer trying to design a scalable MySQL database?

    - by CFL_Jeff
    As the question states, I am a developer, not a DBA. I have experience with designing good ER schemas and am fairly knowledgeable about normalization and good schema design. I have also worked with data warehouses that use dimensional modeling with fact tables and dim tables. However, all of the database-driven applications I've developed at previous jobs have been internal applications on the company's intranet, never receiving "real-world traffic". Furthermore, at previous jobs, I have always had a DBA or someone who knew much more than me about these things. At this new job I just started, I've been asked to develop a public-facing application with a MySQL backend and the data stored by this application is expected to grow very rapidly. Oh, and we don't have a DBA. Well, I guess I am the DBA. ;) As far as designing a database to be scalable, I don't even know where to start. Does anyone have any good tips or know of any good educational materials for a developer who has been sort of shoved into a DBA/database designer role and has been tasked with designing a scalable database to support an application like this? Have any other developers been through this sort of thing? What did you do to quickly become good at this role? I've found some good slides on the subject here but it's hard to glean details from slides. Wish I could've attended that guy's talk. I also found a good blog entry called 5 Ways to Boost MySQL Scalability which had some good information, though some of it was over my head. tl;dr I just want to make sure the database doesn't have to be completely redesigned when it scales up, and I'm looking for tips to get it right the first time. The answer I'm looking for is a "list of things every developer should know about making a scalable MySQL database so your application doesn't perform like crap when the data gets huge".
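
    As a small illustration of the kind of tip such a list might contain (the table, column and partition names below are invented, not taken from the question): index for your actual query patterns, and design large, append-heavy tables so old data can be pruned cheaply, e.g. with an InnoDB table partitioned by date:

        -- Illustrative sketch only: an InnoDB table with a secondary index matching a
        -- common query pattern, and range partitioning by date so old data can be
        -- dropped cheaply instead of deleted row by row.
        CREATE TABLE page_view (
            id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
            user_id   BIGINT UNSIGNED NOT NULL,
            viewed_at DATETIME        NOT NULL,
            url       VARCHAR(2048)   NOT NULL,
            PRIMARY KEY (id, viewed_at),
            KEY ix_user_time (user_id, viewed_at)
        ) ENGINE=InnoDB
        PARTITION BY RANGE (TO_DAYS(viewed_at)) (
            PARTITION p2012_06 VALUES LESS THAN (TO_DAYS('2012-07-01')),
            PARTITION p2012_07 VALUES LESS THAN (TO_DAYS('2012-08-01')),
            PARTITION pmax     VALUES LESS THAN MAXVALUE
        );
        -- An old month can then be removed without a slow DELETE:
        -- ALTER TABLE page_view DROP PARTITION p2012_06;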

    Read the article

  • split a database web application - good idea or bad idea?

    - by Khou
    Is it a bad idea to split up an application and its database? Application1 uses database1 on ServerX; Application2 uses database2 on ServerY. Both applications communicate over a web service API, and they are part of the same overall application: one is used to manage users' profile/personal data, while the other is used to manage users' financial data. Or should I just put them together and use one database on the same server?

    Read the article

  • How to choose between UUIDs, autoincrement/sequence keys and sequence tables for database primary keys?

    - by Tim
    I'm looking at the pros and cons of these three primary methods of coming up with primary keys for database rows. So, assuming I am using a database that supports more than one of these methods, is there a simple heuristic to determine which option would be best for me? What bearing do considerations such as distributed/multiple masters, performance requirements, ORM use, security and testing have on the choice? Are there any unexpected drawbacks that one might run into?
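
    To make the trade-offs concrete, here is a minimal MySQL-flavoured sketch (table and column names are invented) showing the same table keyed with an auto-increment value and with a UUID; a sequence table behaves much like auto-increment, but moves key generation into a separate, explicitly locked table:

        -- Auto-increment: compact, index-friendly, but keys are guessable and
        -- generating them requires a round trip to the single master.
        CREATE TABLE customer_seq (
            id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL
        );

        -- UUID: can be generated anywhere (multiple masters, in the ORM, in tests)
        -- and is hard to guess, but is larger and inserts in random index order.
        CREATE TABLE customer_uuid (
            id   BINARY(16) NOT NULL PRIMARY KEY,  -- store the UUID packed, not as text
            name VARCHAR(100) NOT NULL
        );
        INSERT INTO customer_uuid (id, name)
        VALUES (UNHEX(REPLACE(UUID(), '-', '')), 'example');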

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.

    If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. This would be the case in a perfect world, and it is strongly recommended as good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms it might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements; it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react.

    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file (.vmdf and .vldf) that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database.

    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
        RESTORE DATABASE WidgetProduction_Virtual
        FROM DISK = N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
        WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
             MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
             NORECOVERY, STATS=1, REPLACE
        GO
        RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Note that the only change from what you would normally do is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extensions and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here are my 8GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore.

    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database that provides very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response times, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database for Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.

    The latest ROracle enhancements include: support for Oracle TimesTen In-Memory Database; support for Date-Time using R's POSIXct/POSIXlt data types; RAW, BLOB and BFILE data type support; an option to specify the number of rows per fetch operation; an option to prefetch LOB data; break support using Ctrl-C; and statement caching support.

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries: analytic functions (AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE); analytic clauses (OVER PARTITION BY and OVER ORDER BY); multidimensional grouping operators, including grouping clauses (GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS) and grouping functions (GROUP, GROUPING_ID, GROUP_ID); the WITH clause, which allows repeated references to a named subquery block; aggregate expressions over DISTINCT expressions; general expressions that return a character string in the source or a pattern within the LIKE predicate; and the ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause). Note: some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

        > install.packages("ROracle")
        > library(ROracle)
        Loading required package: DBI
        > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

        > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type - Oracle or TimesTen?

        > print (paste ("Server type =", dbGetInfo (conn)$serverType))
        [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

        > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
        [1] TRUE

    Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

        > dbListTables (conn)
        [1] "IRIS"
        > dbListFields (conn, "IRIS")
        [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

    To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

        > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
        > iris.summary
             SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
        1     setosa    5.006000   3.428000       1.462   0.246000
        2 versicolor    5.936000   2.770000       4.260   1.326000
        3  virginica    6.588000   2.974000       5.552   2.026000
        4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen Database.

        > dbCommit (conn)
        [1] TRUE
        > dbDisconnect (conn)
        [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world actually works, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a deleted database. You may think I am just throwing words around, but in reality this kind of problem is what makes a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on exactly this subject. In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.

    Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?"

    The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

    But the Database Has Been Deleted! Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: copy the files to a working directory; otherwise, you may occasionally receive a file-in-use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as for any other database.

    Testing with a Deleted Database: Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't run a standard Schema Changes History report.

        CREATE DATABASE DeleteMe;
        GO
        USE DeleteMe;
        GO
        CREATE SCHEMA Test AUTHORIZATION dbo;
        GO
        CREATE TABLE Test.Foo (FooID INT);
        GO
        USE MASTER;
        GO
        DROP DATABASE DeleteMe;
        GO

    This sets up the perfect situation where we can't retrieve the information using the Schema Changes History report, but where it's still available. Finding the Information: I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise I'm just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. You can even determine who dropped the database (the loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information.

    If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
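
    For readers who prefer a query to Profiler, the same trace files can also be read with T-SQL via sys.fn_trace_gettable; a minimal sketch follows (the file path is hypothetical - point it at your copied log_*.trc file):

        -- Hedged sketch: query a copied default trace file instead of opening it in Profiler.
        -- The path below is a placeholder for wherever you copied the log_*.trc files.
        SELECT t.StartTime,
               te.name AS EventName,
               t.DatabaseName,
               t.ObjectName,
               t.LoginName
        FROM sys.fn_trace_gettable(N'C:\Working\log_123.trc', DEFAULT) AS t
        JOIN sys.trace_events AS te
            ON te.trace_event_id = t.EventClass
        WHERE t.DatabaseName = N'DeleteMe'
        ORDER BY t.StartTime;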

    Read the article

  • Installation of Access Database Engine 32-bit Fails

    - by Rayzor78
    I am trying to install Access Database Engine 2007 32-bit. The splash screen comes up, you click "Next", and then it fails with the error: "Installation ended prematurely because of an error". You click "OK" and another error window says: "The installation of the package failed." The exact same thing happens when I try this with Access Database Engine 2010 32-bit. This production server is running Windows Server 2008 R2 SP1 64-bit. Before I tried installing Access Database Engine 32-bit, I first needed to install Microsoft Office 2010 Pro (Excel and Office Tools only). I tried the 32-bit version on the production server, since that is how I set it up in our Dev environment. No luck. The 32-bit version would not install. I did NOT get the error "You have 64-bit components of Office installed". I simply received the exact same two errors listed above. Since 32-bit/64-bit did not really matter for the Office install for my project, I installed the 64-bit version of Office Pro 2010 (Excel and Office Tools only) with no problems. I have a requirement to have the 32-bit version of the Access Database Engine installed; 2007 or 2010, it doesn't matter. I cannot use the 64-bit version of Access Database Engine 2010 because my SSIS package will not work with it. I require the 32-bit version. I've tried several steps to try to get it installed. I seriously think that the production server has some aversion to installing 32-bit applications. Here's what I've tried: Tried installing via the command line with the "/passive" switch... no luck. Tried numerous ways of copying the install file to the server (downloaded a fresh copy directly to the server, downloaded a fresh copy to my local machine and then copied it over, copied it over zipped up) (http://social.msdn.microsoft.com/Forums/en-US/sqldataaccess/thread/efd3c1f0-07cd-45ca-a626-2dd0c7ac3e9f). Tried Method 1 from this link. Could not try Method 2 because it requires a server reboot, and in my environment that requires a long change-management process. I've verified that I am a local administrator on the server (evidence: I am able to install other applications, such as the 64-bit Office install above). Verified that there are no other Office products that should be blocking the installation; the aforementioned install of Excel 2010 64-bit was the first Office product installed on the server. VERY ODD: To test my theory that the production server does not like 32-bit applications, I installed something lightweight: 7-Zip 32-bit installed on the production server with no problems whatsoever. Here are some things that I have not tried (I will follow up once I do): Method 2 (as mentioned above), which requires a server reboot. Verifying that the Dev and Production environments are 100% identical; I've done a cursory check and on the surface they appear to be the same (same OS and SP version), but I need to do a deeper dive to be 100% certain. I had no problems in my Dev environment. In Dev, I installed Office 2010 Pro 64-bit (Excel & Office Tools only) and then, via the command line with the "/passive" switch, installed Access Database Engine 2010 32-bit. I don't know what else to try. Any suggestions or comments?

    Read the article

  • Need advice on building the database: all in one table or split?

    - by Ibrahim Azhar Armar
    Hello, I am developing an application for a real-estate company. The problem I am facing is how to implement the database; I am confused about which approach to adopt, and I would appreciate your help in reasoning through the database design. Here is my situation. a) I have to store the property details in the database. b) The properties fall into approximately 4-5 categories, for example: residential, commercial, industrial, etc. c) The categories have sub-categories. For example, the residential category will have sub-categories such as Apartment / Independent House / Villa / Farm House / Studio Apartment, and in the same way the commercial, industrial and agricultural categories will have their own sub-categories. d) Each sub-category has to store different values; for instance, a residential property will have features like Bedrooms / Kitchens / Hall / Bathrooms, etc. The features depend on the sub-category. For an example of how I would want to implement my application, you can have a look at this site: http://www.magicbricks.com/bricks/postProperty.html I could possibly think of a solution like this: a) create four to five tables depending upon the categories that exist (the problem is that categories might increase in the future); b) create different tables for all the features, location, price and description, and merge the common property data into one table - for example, every property will have common attributes such as location, total area, etc. What would you advise, given the current situation? Thank you
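
    Purely as an illustration of one commonly suggested direction (not necessarily the right one for this application), categories and sub-categories can be stored as rows in a self-referencing table, and sub-category-specific features as key/value rows, so that adding a new category does not require a schema change. A rough MySQL sketch with invented names:

        -- Categories and sub-categories as data, not as separate tables.
        CREATE TABLE category (
            id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            parent_id INT UNSIGNED NULL,          -- NULL = top level (Residential, Commercial, ...)
            name      VARCHAR(100) NOT NULL,
            FOREIGN KEY (parent_id) REFERENCES category (id)
        );

        -- Common attributes every property shares live in one table.
        CREATE TABLE property (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            category_id INT UNSIGNED NOT NULL,    -- points at a sub-category row
            location    VARCHAR(255) NOT NULL,
            total_area  DECIMAL(10,2) NOT NULL,
            price       DECIMAL(12,2) NOT NULL,
            description TEXT,
            FOREIGN KEY (category_id) REFERENCES category (id)
        );

        -- Sub-category-specific features as key/value rows.
        CREATE TABLE feature (
            id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100) NOT NULL             -- e.g. 'Bedrooms', 'Bathrooms'
        );

        CREATE TABLE property_feature (
            property_id INT UNSIGNED NOT NULL,
            feature_id  INT UNSIGNED NOT NULL,
            value       VARCHAR(100) NOT NULL,     -- e.g. '3'
            PRIMARY KEY (property_id, feature_id),
            FOREIGN KEY (property_id) REFERENCES property (id),
            FOREIGN KEY (feature_id)  REFERENCES feature (id)
        );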

    Read the article

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about this database replication for new user signup to which I got an answer, but I want to ask this in the more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly since it will happen frequently) to determine if the data is stale in the slave so I can then route to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated in the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) so the data will not be evicted within the time frame it takes to replicate the data. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!
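
    For MySQL specifically, one cheap (if approximate) check is the slave's own view of its replication lag, and a stricter variant is to wait on the master's binary log position after the write; a hedged sketch (the binlog file name and position are placeholders):

        -- Approximate check: ask the slave how far behind it is before reading from it.
        -- Read Seconds_Behind_Master from the result; if it is NULL (replication stopped)
        -- or above your tolerance, route the read to the master instead.
        SHOW SLAVE STATUS;

        -- Stricter variant: after the write, record the master's binlog coordinates
        -- (SHOW MASTER STATUS), then on the slave wait briefly for that position.
        -- Returns >= 0 once the slave has replayed past that point; NULL or -1 means
        -- it has not, so fall back to the master.
        SELECT MASTER_POS_WAIT('mysql-bin.000042', 1234567, 1);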

    Read the article

  • How to back up a database with thousands of files

    - by Neal
    I am working with a Fedora server that runs a customized software package. The server software is quite old, and its database consists of 1,723 files. The database files are constantly changing - they continually grow, and changes are not necessarily appended to the end. So right now we back up every 24 hours, at midnight, when all users are off the system and the database is in an internally consistent state. The problem is that we have the potential to lose an entire day's worth of work, which would be unrecoverable. So I'd like to know if there is a way to take some sort of instantaneous snapshot of these database files that we could back up every 30 minutes or so. I've read about Linux LVM snapshots, and am thinking that I might be able to accomplish the goal by taking a snapshot, rsync'ing the files to a backup server, then dropping the snapshot. But I've never done this before, so I don't know if this is the "right" fix. Any ideas on this? Any better solutions? Thanks!

    Read the article

  • Synchronize Active Directory to Database

    - by Tommy Jakobsen
    We are in a situation where we would like to offer our customers the ability to manage their users themselves. There are around 300 customers with up to a total of 10,000 users. Besides creating, updating and removing users, they will very often read information about users for statistics and other useful information. All this functionality should be available from an intranet web page (.NET Framework 4) that the users will access through Citrix or similar. Now, the problem is that we would really like the users not to query AD directly for each request, but rather have them hit a database that is synchronized with AD. It would be sufficient to run this synchronization a few times each day (maybe every 5 hours). When they create a user, it should not be available right away, but reviewed and then created within two days (the next step would be to remove this manual review, but that's out of scope for this question). What do you think about this synchronization of AD? Does anyone have any experience with it, and is it something that is done in other organizations where you have lots of requests that are better handled by a database than by AD (I presume)? Are there any techniques out there for writing such a script that synchronizes AD with database tables? My primary concern is the groups/members relations, which can be rather complicated. Or is there software that synchronizes AD with a database? Any comments will be much appreciated. Thank you.
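
    On the database side, the group/member relation usually comes down to a junction table keyed on AD's objectGUID, so that renames on the AD side don't break links. A minimal sketch (table and column names are invented, not a prescribed schema) of what the synchronized tables might look like:

        CREATE TABLE ad_user (
            object_guid  BINARY(16)   NOT NULL PRIMARY KEY,  -- AD objectGUID, stable across renames
            sam_account  VARCHAR(100) NOT NULL,
            display_name VARCHAR(256) NULL,
            customer_id  INT          NOT NULL,
            last_synced  DATETIME     NOT NULL
        );

        CREATE TABLE ad_group (
            object_guid BINARY(16)   NOT NULL PRIMARY KEY,
            name        VARCHAR(256) NOT NULL,
            last_synced DATETIME     NOT NULL
        );

        -- Many-to-many membership; member_guid may be a user or a nested group,
        -- so it deliberately has no foreign key of its own.
        CREATE TABLE ad_group_member (
            group_guid  BINARY(16) NOT NULL,
            member_guid BINARY(16) NOT NULL,
            PRIMARY KEY (group_guid, member_guid),
            FOREIGN KEY (group_guid) REFERENCES ad_group (object_guid)
        );
        -- Nested groups can be flattened at sync time, or resolved at query time
        -- with a recursive query over ad_group_member.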

    Read the article

  • Oracle OpenWorld 2012: Focus On Oracle Database

    - by jgelhaus
    As Oracle OpenWorld approaches and you work to plan your schedule, we know there's a lot to sort through. To help, we've put together some Oracle Database Focus On documents to guide you through the database sessions at the show: Oracle Database; Oracle Database Application Development; Oracle Database Security; Oracle Spatial and Graph; Oracle Enterprise Manager Cloud Control 12c (and Private Cloud); Big Data; Oracle Exadata; Data Warehousing; High Availability; Oracle Database Utilities; Oracle Database Upgrade. See you in San Francisco!

    Read the article

  • Oracle Database Express Edition, now also in 64-bit

    - by user645740
    Oracle Database Express Edition is a free database engine that lets you try out Oracle Database at no cost. No support is available for it, however; the forums can be used instead. Oracle Database Express Edition 11gR2 has now also been released in a 64-bit version: http://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html Oracle Database SE One, SE and EE are available here for a 30-day trial: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    Read the article

  • question about MySQL database migration

    - by WilliamLou
    Hi there: I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. The migration I mean here involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc. Now, the only method I can think of is to use a PHP/Python script (the two languages I know) to connect the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column should have the default value 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database. Are there any tools or a better method than writing a script yourself? Here, I don't need to worry about multithreaded writing problems, etc.; the old database will be down for a while (not open to public usage, only for the upgrade). Thanks!!
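
    For the example given (table A gaining a 29th column with default 0), a dump/load followed by an ALTER TABLE is usually enough, with no row-by-row scripting; a hedged sketch (the column name is a placeholder):

        -- After loading the old 28-column table into the new server (e.g. with
        -- mysqldump on the old server and a plain `mysql < dump.sql` on the new one),
        -- add the extra column with a default so every existing row gets 0:
        ALTER TABLE A
            ADD COLUMN extra_flag INT NOT NULL DEFAULT 0;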

    Read the article

  • Can't get PHP script to connect to Asterisk internal MySQL database

    - by Bilbo
    I'm trying to get a PHP script to connect to Asterisk's internal MySQL database. I tried to use the standard method, for example: $con = mysqli_connect("192.168.1.126","root","mysql","asterisk"); However, when I log into the Asterisk server to access the MySQL database, all I need to do is type "mysql" and I'm logged in. I'm wondering whether it is possible for my PHP script to connect to Asterisk's internal database. Edit: The following MySQL error is shown: Warning: mysqli_connect(): (HY000/2003): Can't connect to MySQL server on '192.168.1.126' (111) in /var/www/html/project/sipSubScript.php on line 6 Failed to connect to MySQL: Can't connect to MySQL server on '192.168.1.126' (111)
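
    Error 2003 (111) is "connection refused", which usually means the MySQL server on the Asterisk box is only listening on the local socket (a plain mysql on the server itself works because it uses the socket, not TCP). Assuming you want remote access rather than running the script on the Asterisk host, a hedged sketch of the server-side steps (host, user and password are placeholders; the listening change happens in my.cnf, not in SQL):

        -- 1. In my.cnf on the Asterisk server, comment out `skip-networking` and/or set
        --    `bind-address` to an address reachable from the web server, then restart MySQL.
        -- 2. Create a user that is allowed to connect from the web server's subnet:
        CREATE USER 'phpapp'@'192.168.1.%' IDENTIFIED BY 'secret';
        GRANT SELECT ON asterisk.* TO 'phpapp'@'192.168.1.%';
        FLUSH PRIVILEGES;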

    Read the article

  • Ideas for scaling out database architecture

    - by andrew
    We're looking to scale out our existing database architecture and need some advice on which way to go. We currently have 2 web servers behind a load balancer that both read & write to a single master database which replicates to a slave. Ideally, I'd like each of the webservers to point to their own master DB and have the data between the 2 synchronised but from what I've read, using any kind of master-master or ring-replication is discouraged. I'm looking for a general "what do other people do" kind of answer - database vendor isn't a concern at the moment but we'd like to stay with MySQL or convert to MSSQL. Any ideas would be gratefully received. Many thanks, Andrew

    Read the article
