Search Results

Search found 32538 results on 1302 pages for 'restore database'.


  • Oracle export problem

    - by abhiram86
    I use: exp username/password@servername owner={ownername} file=backuppath. The parameters given are correct, but I am still not able to connect. Below is the error trace:

        EXP-00056: ORACLE error 12545 encountered
        ORA-12545: Connect failed because target host or object does not exist
        EXP-00000: Export terminated unsuccessfully

    What can the possible reasons be?
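
    ORA-12545 usually means the connect identifier resolved to a host that cannot be found at all - a missing or stale tnsnames.ora entry, or a hostname that no longer resolves - rather than bad credentials. Two quick checks worth trying (host, port and service name below are placeholders, not values from the question):

        # Does the alias resolve, and is the listener reachable?
        tnsping servername

        # Bypass tnsnames.ora with an easy-connect string (10g and later,
        # assuming EZCONNECT is enabled in sqlnet.ora):
        exp username/password@//dbhost:1521/ORCL owner=ownername file=backup.dmp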

    Read the article

  • Is the community MySQL safe for production use?

    - by n_kips
    Or will I need to get the Enterprise version? I ask because I found this on MySQL's site: "If you are running a MySQL production level system, we would like to direct your attention to the product description of MySQL Enterprise Edition at: http://mysql.com/products/enterprise/" When I check the features, it seems like the Community edition does not support transactions, while the Enterprise version does. If it is true that the Community edition is not right for production, then it seems like PostgreSQL may be my way out, for it supports transactions and is fully open source. Will the SQL syntax need to change (much) if I have to switch? Thank you.
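
    For what it's worth, transactions in MySQL depend on the storage engine rather than the edition; a minimal check that can be run against a Community server, assuming InnoDB is enabled (it ships with the community build):

        CREATE TABLE tx_test (id INT PRIMARY KEY, balance INT) ENGINE=InnoDB;
        INSERT INTO tx_test VALUES (1, 100);

        START TRANSACTION;
        UPDATE tx_test SET balance = balance - 10 WHERE id = 1;
        ROLLBACK;

        SELECT balance FROM tx_test WHERE id = 1;  -- still 100 if transactions work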

    Read the article

  • Connecting mySQL to MSSQL

    - by user180198
    I need a little advice. I need to sync a DB that currently lives on a Server 2008 (SQL Server 2005) machine, which I connect to with Studio Express, to a MySQL instance that lives on a NAS on the same network: Local: DB engine on server, named server\sqlexpress, IP = 10.0.0.201. Target: DB on NAS, named CISCO-NAS, IP = 10.0.0.182. This will need to sync every few minutes... I really don't know where to start.
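
    One rough sketch of a starting point, assuming a PHP box on the network with the pdo_sqlsrv and pdo_mysql drivers available and a scheduled task running it every few minutes; table, column and credential names are invented for illustration:

        <?php
        // Source: the SQL Server 2005 instance on the server
        $mssql = new PDO('sqlsrv:Server=10.0.0.201\\sqlexpress;Database=SourceDb', 'user', 'pass');
        // Target: MySQL on the NAS
        $mysql = new PDO('mysql:host=10.0.0.182;dbname=targetdb', 'user', 'pass');

        // Pull rows changed since the last run (assumes a modified-timestamp
        // column and a state table where the last sync time is kept).
        $since = '2000-01-01 00:00:00';
        $rows = $mssql->query("SELECT id, name, modified FROM customers WHERE modified > '$since'")
                      ->fetchAll(PDO::FETCH_ASSOC);

        // Upsert into MySQL; REPLACE keeps the loop idempotent between runs.
        $upsert = $mysql->prepare('REPLACE INTO customers (id, name, modified) VALUES (:id, :name, :modified)');
        foreach ($rows as $row) {
            $upsert->execute($row);
        }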

    Read the article

  • Mysql Replication out of sync? What commands do I run to sync it back up?

    - by Alex
    I have a master-master replication system. However, due to an auto-increment issue, I got an error in replication... and it stopped replicating. Someone told me to do:

        stop slave;
        SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
        start slave;

    It didn't work. Then they told me to do:

        SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 2;

    It didn't work. Then, to test it out, I did:

        SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 99999;

    It starts, but it is not updating. I created a table on DB1... and it is not showing up on DB2. Below is the SHOW STATUS output for both DB1 and DB2 (I ran them together):

        mysql> show master status\G
        *************************** 1. row ***************************
                    File: mysql-bin.000605
                Position: 2019727
            Binlog_Do_DB:
        Binlog_Ignore_DB:
        1 row in set (0.00 sec)

        mysql> show slave status\G
        *************************** 1. row ***************************
                       Slave_IO_State: Waiting for master to send event
                          Master_Host:
                          Master_User:
                          Master_Port:
                        Connect_Retry: 60
                      Master_Log_File: mysql-bin.000605
                  Read_Master_Log_Pos: 2008810
                       Relay_Log_File: mysqld-relay-bin.001731
                        Relay_Log_Pos: 10176595
                Relay_Master_Log_File: mysql-bin.000470
                     Slave_IO_Running: Yes
                    Slave_SQL_Running: Yes
                      Replicate_Do_DB:
                  Replicate_Ignore_DB:
                   Replicate_Do_Table:
               Replicate_Ignore_Table:
              Replicate_Wild_Do_Table:
          Replicate_Wild_Ignore_Table:
                           Last_Errno: 0
                           Last_Error:
                         Skip_Counter: 4255373725
                  Exec_Master_Log_Pos: 10176458
                      Relay_Log_Space: 135062517347
                      Until_Condition: None
                       Until_Log_File:
                        Until_Log_Pos: 0
                   Master_SSL_Allowed: No
                   Master_SSL_CA_File:
                   Master_SSL_CA_Path:
                      Master_SSL_Cert:
                    Master_SSL_Cipher:
                       Master_SSL_Key:
                Seconds_Behind_Master: 1376343
        1 row in set (0.00 sec)

    How do I fix it so that they sync back up again? Thank you.
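
    For reference: the skip counter only papers over individual statement errors, and this slave's Relay_Master_Log_File (mysql-bin.000470) is 135 binlog files behind the file it is reading (mysql-bin.000605), so the SQL thread has an enormous backlog. The usual fix at that point is to rebuild the lagging side from a fresh dump of the other master and then re-point it. A sketch of the re-pointing step only, using the coordinates from the SHOW MASTER STATUS output above (take a dump first - this discards the unapplied relay logs):

        STOP SLAVE;
        CHANGE MASTER TO
            MASTER_LOG_FILE = 'mysql-bin.000605',
            MASTER_LOG_POS  = 2019727;
        START SLAVE;
        SHOW SLAVE STATUS\G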

    Read the article

  • Server Design Enquiry [closed]

    - by Brandon Gelfand
    I am really new to this and am having trouble with how I can store information from a website. I am going to be making a site similar to dropbox for my company and there is gonna be around 40 TB of space that we need to be able to access on the go, for instance pull the file up on the phone or off our laptop using the internet. Obviously using an Amazon S3 server is not the most cost effective way of doing this so how can I make my own server to hold all of the info and communicate to it from a website? Please I need as much help as possible, like a blueprint or something. Thanks and sorry for the noobish question but I can't really find an exact answer to my question.

    Read the article

  • MySQL max_user_connections

    - by Sheriffen
    We're releasing a site in a couple of weeks that was developed on a local machine, but now that we're testing on the dev server we get the MySQL error 'max_user_connections'. We have talked to the hosting company (biggest in Sweden) and they say that we don't close our connections properly. But the thing is that we use the EXACT same code on another host, where it works. I also added echo "closed"; to the database_close function, so now at the very bottom of every page there is "closed". To me this means that we do close the connection. Anyone got any idea of what could be wrong? We connect through the PHP PDO constructor and close by setting the handle to null, all according to the manual.
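
    For comparison, a minimal sketch of the open/close pattern being described (DSN and credentials are placeholders); on shared hosts it is also worth confirming that persistent connections are off, since pooled handles count against max_user_connections even after the page ends:

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'secret',
                       array(PDO::ATTR_PERSISTENT => false));

        $stmt = $pdo->query('SELECT 1');
        // ... page logic ...

        // Release the statement and the connection explicitly.
        $stmt = null;
        $pdo  = null;
        echo "closed";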

    Read the article

  • Missing Password check

    - by AAA
    I am using the code below; it checks for empty fields and verifies the email, but even if the password is correct it won't log in. The password was inserted with md5 hashing. I am new to this, so please bear with me. Thanks!

    PHP:

        session_start();
        // Checks if there is a login cookie
        if (isset($_COOKIE['ID_my_site']))
        // if there is, it logs you in and directs you to the members page
        {
            $email = $_COOKIE['ID_my_site'];
            $pass = $_COOKIE['Key_my_site'];
            $check = mysql_query("SELECT * FROM accounts WHERE email = '$email'") or die(mysql_error());
            while ($info = mysql_fetch_array($check)) {
                if ($pass != $info['password']) {
                } else {
                    header("Location: home.php");
                }
            }
        }

        // if the login form is submitted
        if (isset($_POST['submit'])) { // if form has been submitted
            // makes sure they filled it in
            if (!$_POST['email'] | !$_POST['password']) {
                die('You did not fill in a required field.');
            }
            // checks it against the database
            if (!get_magic_quotes_gpc()) {
                $_POST['email'] = addslashes($_POST['email']);
            }
            $check = mysql_query("SELECT * FROM accounts WHERE email = '" . $_POST['email'] . "'") or die(mysql_error());
            // Gives error if user doesn't exist
            $check2 = mysql_num_rows($check);
            if ($check2 == 0) {
                die('That user does not exist in our database. <a href=add.php>Click Here to Register</a>');
            }
            while ($info = mysql_fetch_array($check)) {
                $_POST['password'] = stripslashes($_POST['password']);
                $info['password'] = stripslashes($info['password']);
                $_POST['password'] = md5($_POST['password']);
                // gives error if the password is wrong
                if ($_POST['password'] != $info['password']) {
                    die('Incorrect password, please try again.');
                } else {
                    // if login is ok then we add a cookie
                    $_POST['email'] = stripslashes($_POST['email']);
                    $hour = time() + 3600;
                    setcookie(ID_my_site, $_POST['email'], $hour);
                    setcookie(Key_my_site, $_POST['password'], $hour);
                    // then redirect them to the members area
                    header("Location: home.php");
                }
            }
        } else {
            // if they are not logged in
        ?>
            <form action="<?php echo $_SERVER['PHP_SELF']?>" method="post">
            <table border="0">
            <tr><td colspan=2><h1>Login</h1></td></tr>
            <tr><td>email:</td><td>
            <input type="text" name="email" maxlength="40">
            </td></tr>
            <tr><td>Password:</td><td>
            <input type="password" name="password" maxlength="50">
            </td></tr>
            <tr><td colspan="2" align="right">
            <input type="submit" name="submit" value="Login">
            </td></tr>
            </table>
            </form>
        <?php } ?>

    Here is the registration code:

    PHP:

        // here we encrypt the password and add slashes if needed
        $_POST['password'] = md5($_POST['password']);
        if (!get_magic_quotes_gpc()) {
            $_POST['password'] = mysql_escape_string($_POST['password']);
            $_POST['email'] = mysql_escape_string($_POST['email']);
            $_POST['full_name'] = mysql_escape_string($_POST['full_name']);
            $_POST['user_url'] = mysql_escape_string($_POST['user_url']);
        }
        // now we insert it into the database
        $insert = "INSERT INTO accounts (Uniquer, Full_name, Email, Password, User_url)
                   VALUES ('" . $uniquer . "','" . $_POST['full_name'] . "', '" . $_POST['email'] . "','" . $_POST['password'] . "', '" . $_POST['user_url'] . "')";
        $add_member = mysql_query($insert);

    After using the ini_set function I got to see the errors. I am getting these messages but am not sure what they mean:

        Notice: Undefined index: password in /var/www/domain.com/htdocs/login.php on line 103
        Notice: Use of undefined constant password - assumed 'password' in /var/www/domain.com/htdocs/login.php on line 11
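
    A hedged reading of those notices: the registration INSERT writes to a column named Password (capital P), while the login compares against $info['password'] - and the keys of a mysql_fetch_array() row are case-sensitive, which would make $info['password'] an undefined index exactly as the first notice says. The second notice suggests an unquoted key (something like $info[password]) near the top of login.php. A sketch of the comparison with those guesses fixed, keeping the question's mysql_* and md5 style:

        <?php
        if (isset($_POST['submit'])) {
            // || rather than the bitwise |, and isset() before reading the keys
            if (!isset($_POST['email']) || !isset($_POST['password'])
                || $_POST['email'] === '' || $_POST['password'] === '') {
                die('You did not fill in a required field.');
            }

            $email = mysql_real_escape_string($_POST['email']);
            $check = mysql_query("SELECT * FROM accounts WHERE email = '$email'") or die(mysql_error());

            if ($info = mysql_fetch_array($check)) {
                // Key case matches the column name used at registration time
                if (md5($_POST['password']) !== $info['Password']) {
                    die('Incorrect password, please try again.');
                }
                header('Location: home.php');
                exit;
            }
            die('That user does not exist in our database.');
        }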

    Read the article

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, copying only the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly, and in the end we are copying only about 3%-20% of databases which are as large as 500 GB.

    I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create a generic way to copy the schema objects, filegroups, etc. (it actually helped find some bad stored procs). Overall I have a working script which is lacking in performance (and at times, times out) and I was hoping you guys would be able to help. When executing the WriteToServer command to copy a large amount of data (> 6 GB) it reaches my timeout period of 1 hr. Here is the core code for copying table data. The script is written in PowerShell.

        $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
        Write-LogOutput "Copying $selectedTable : '$query'"
        $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
        $cmd.CommandTimeout = 120;
        $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
        $bulkData.DestinationTableName = $selectedTable;
        $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
        $reader = $cmd.ExecuteReader();
        $bulkData.WriteToServer($reader); # Takes forever here on large tables

    The source and target databases are located on different servers, so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me; when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only mask the timeout. I really would like to know why the network is not being used fully. Has anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated, and please let me know if you need more information. Thanks.

    UPDATE: I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also, some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. What we will also do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception:

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    It is thrown about 16 minutes after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, the table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the table is copied over without any issues. But going through the process of copying that entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errored out. My SqlBulkCopy and Reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?
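
    One thing that might narrow it down (a sketch, not tested against this script): construct SqlBulkCopy from a plain connection string with explicit options and wire up the progress event, so the log shows whether rows flow steadily and then stall, or never start. $destConnString and the figures are placeholders:

        $options = [Data.SqlClient.SqlBulkCopyOptions]::TableLock
        $bulkData = New-Object Data.SqlClient.SqlBulkCopy($destConnString, $options)
        $bulkData.DestinationTableName = $selectedTable
        $bulkData.BatchSize = 5000
        $bulkData.BulkCopyTimeout = 0    # 0 = wait indefinitely, instead of a large finite value
        $bulkData.NotifyAfter = 50000
        $bulkData.Add_SqlRowsCopied({ param($s, $e) Write-LogOutput "Rows copied: $($e.RowsCopied)" })
        $bulkData.WriteToServer($reader)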

    Read the article

  • Survey Data Model - How to avoid EAV and excessive denormalization?

    - by AlexDPC
    Hi everyone,

    My database skills are mediocre at best and I have to design a data model for survey data. I have spent some thought on this and right now I feel that I am stuck between some kind of EAV model and a design involving hundreds of tables, each with hundreds of columns (and thousands of records). There must be a better way to do this and I hope that the wise folks on this forum can help me. I have already searched various forums, but I couldn't really find a solution. If it has already been given elsewhere, please excuse me and provide me with a link so I can read it up.

    Some assumptions about the data I have to deal with:

    - Each survey consists of 1 to n questionnaires
    - Each questionnaire consists of 100-2,000 questions (please ignore that 2,000 questions really sounds like a lot to answer...)
    - Questions can be of various types: multiple-choice, free text, a number (like age, income, percentages, ...)
    - Each survey involves 10-200 countries (these are not the respondents; the respondents are actually people in the countries)
    - Depending on the type of questionnaire, each questionnaire is answered by 100-20,000 respondents per country
    - A country can adapt the questionnaires for a survey, i.e. add, remove or edit questions
    - The data for one country is gathered in a separate database in that country. There is no possibility of online integration from the start. The data for all countries has to be integrated later. This means, for example, that if a country has deleted a question, that data must somehow be derived from what they sent in order to achieve a uniform design across all countries
    - I will have to write the integration and cleaning software, which will need to work with every country's data
    - In the end the data needs to be exported to flat files, one rectangular grid per country and questionnaire

    I have already discussed this topic with people from various backgrounds and have not come to a good solution yet. I mainly got two kinds of opinions.

    The domain experts, who are used to working with flat files (spreadsheet-style) for data processing and analysis, vote for a denormalized structure with loads of tables and columns as I described above (1 table per country and questionnaire). This sounds terrible to me, because I learned that wide tables are to be avoided, it will be annoying to determine which columns are actually in a table when working with it, the database will become cluttered with hundreds of tables (or I would even need to set up multiple databases, each with a similar yet slightly different design), etc.

    O-O programmers vote for a strongly "normalized" design, which would effectively lead to a central table containing all the answers from all respondents to all questions. This table would either need a column of type sql_variant or multiple answer columns with different types to store answers of different kinds (multiple choice, free text, ...). The former would essentially be an EAV model; I tend to follow Joe Celko here, who strongly discourages its use (he calls it OTLT or "One True Lookup Table"). The latter would imply that each row contains null cells for the non-applicable types by design (a sketch of this variant follows below).

    Another alternative I could think of would be to create one table per answer type, i.e. one for multiple-choice questions, one for free text questions, etc. That's not so generic, it would lead to a lot of union joins I think, and I would have to add a table if a new answer type is invented.

    Sorry for boring you with all this text, and thank you for your input!

    Cheers,
    Alex

    PS: I asked the same question here: http://www.eggheadcafe.com/community/aspnet/13/10242616/survey-data-model--how-to-avoid-eav-and-excessive-denormalization.aspx
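
    For illustration only (every name below is invented), the typed-columns variant could look like this, with a CHECK constraint guaranteeing exactly one value column per row instead of relying on convention:

        CREATE TABLE answer (
            respondent_id  INT            NOT NULL,
            question_id    INT            NOT NULL,
            answer_choice  INT            NULL,  -- multiple-choice code
            answer_text    VARCHAR(4000)  NULL,  -- free text
            answer_number  DECIMAL(18,4)  NULL,  -- age, income, percentages, ...
            PRIMARY KEY (respondent_id, question_id),
            CHECK (
                (answer_choice IS NOT NULL AND answer_text IS NULL AND answer_number IS NULL) OR
                (answer_choice IS NULL AND answer_text IS NOT NULL AND answer_number IS NULL) OR
                (answer_choice IS NULL AND answer_text IS NULL AND answer_number IS NOT NULL)
            )
        );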

    Read the article

  • Why Do I See the "In Recovery" Msg, and How Can I Prevent it?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machines to host it. When we create a new branch, the build script creates a new database with the ID of that branch, creates the schema objects, and copies over a selection of data from the production shadow server.

    After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but will occasionally go back into "In Recovery" mode. The database is created in simple recovery mode. The file names aren't specified, so it uses the default paths for files. The size of the database after loading data is ~400 MB. It is running in SQL Server 2005 compatibility mode. The command that creates the database is:

        sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END"

    ...where $(DBName) and $(DBServer) are MSBuild parameters.

    I got a nice clean log file this morning. When I turned on my computer it started all five databases; however, two of them show transactions being rolled forward and backward. Then it just keeps trying to start up all five of the databases:

        2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'.
        2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'.
        2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'.
        2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'.
        2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'.
        2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'.
        2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'.

    In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I actually am able to use the databases much of the time.
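
    One hedged guess worth ruling out: SQL Server Express creates databases with AUTO_CLOSE ON by default, and an auto-closed database runs crash-style recovery every time a connection re-opens it - which would look exactly like the repeated "Starting up database" lines and the transient "In Recovery" state. To check and turn it off:

        -- 1 = AUTO_CLOSE is on for that database
        SELECT name, is_auto_close_on FROM sys.databases;

        -- Disable it for a branch database (name taken from the log above)
        ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;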

    Read the article

  • Why doesn't PHP's oci_connect return false?

    - by absolethe
    I have a situation in which we have two production databases that synchronize with one another. Server One is considered the primary; sometimes, due to maintenance or a disaster, Server Two will become primary. In some of our code that means we have to manually go in and edit the server name for database connections. I find this annoying, so in the last thing I wrote I put in the server information for both and set up a loop: if oci_connect failed on Server One 3 times, it would move on to Server Two; if Server Two failed 3 times, it would notify the user that a connection couldn't be made.

    This has worked fine most times we've had to switch the servers. Yesterday, for example, it worked fine. Today it didn't. It just sat and spun endlessly. No error in the PHP error log. No failure to move on from. No error output to the screen. Nothing, for 5 minutes. So then I had to manually edit the stupid config file. I asked what could possibly be different and I was told "yesterday the database was down, but not the server; today the server is down." Okay...? But I don't see a distinction. I would expect oci_connect to return false if it can't establish any sort of communication with the server. I'd expect it to time out and error, not just pass on whatever error code it receives from the server. What if there's a network problem, for example?

    Is this a bug in oci_connect, or is there a possibility that something in our PHP configuration gives oci_connect a crazily long timeout? If it is a sort of "bug", is there some way I can check to see if the server is up first? Like a ping? (Of course, when I did a ping through the command prompt I got a response from Server One and then was told "it's back now", although I am skeptical about the timing of that.) Anyway, if anyone could shed some light on why oci_connect might run endlessly without failing, and how to keep it from doing so, I'd be grateful.

    -- Edit: My code looks like the examples on PHP.net, only in some loops.

        $count = count($servers);
        for ($i = 0; $i < $count; $i++) {
            if ((!isset($connection)) || ($connection == false)) {
                // Attempt to connect to the oracle database
                $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"])
                    or ($conn_error = oracle_error());
                // Try again if there was a failure
                if (($connection == false) || (isset($con_error))) {
                    // Three (two more) tries per alternative
                    for ($j = $st; $j < $fn; $j++) {
                        // Try again to connect
                        $connection = @oci_connect($servers[$i]["user"], $servers[$i]["pass"], $servers[$i]["conid"])
                            or ($conn_error = oracle_error());
                    } // for ($j = 2; $j < 4; $j++)
                } // if ($connection == false)
            } // if (!isset($connection) || ($connection == false))
        } // for ($i = 0; $i < $count; $i++)
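
    One workaround for the "is the server up?" question (a sketch; this probes the listener port directly rather than anything in the OCI API, and the host name is a placeholder for the values kept in $servers):

        <?php
        // Returns quickly with a hard timeout, unlike a hung oci_connect().
        function listener_reachable($host, $port = 1521, $timeout = 5)
        {
            $sock = @fsockopen($host, $port, $errno, $errstr, $timeout);
            if ($sock === false) {
                return false;   // unreachable: skip oci_connect and try the next server
            }
            fclose($sock);
            return true;
        }

        if (listener_reachable('server-one.example.com')) {
            $connection = @oci_connect($user, $pass, $conid);
        }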

    Read the article

  • OpenWorld 2012—Is Almost Here!

    - by Scott McNeil
    With OpenWorld fast approaching, I thought I would take this opportunity to look at some of the "must see" database manageability activities and sessions happening this year. Here's a quick rundown:

    Oracle Database Manageability: Download all the details for sessions, hands-on labs, and demos (PDF)

    Keynotes:
    - Sunday, September 30: Hardware and Software, Engineered to Work Together: Why It's A Different Approach - Larry Ellison, CEO, Oracle
    - Monday, October 1: Shift Complexity - hosted by Mark Hurd, President, Oracle, and Andrew Mendelsohn, Senior Vice President, Database Server Technologies, Oracle

    IOUG SIG Sunday:
    - Database Performance Tuning: Getting the Best out of Oracle Enterprise Manager Cloud Control 12c (session ID# CON6511)

    Oracle DEMOgrounds (floor plan: Moscone South):
    - Automatic Application and SQL Tuning
    - Automatic Performance Diagnostics
    - Complete Database Lifecycle Management
    - Data Masking and Data Subsetting
    - Database Testing with Oracle Real Application Testing
    - Oracle Enterprise Manager Cloud Control 12c Overview
    - Oracle Exadata Management

    Hands-on Labs:
    - Database Performance Testing, Data Masking, and Subsetting (session ID# HOL10720)
    - Database Performance Tuning Hands-on Lab (session ID# HOL10393)

    Sessions:
    - What's Next for Oracle Database? (session ID# GEN8259)
    - Building and Managing a Private Oracle Database Cloud (session ID# GEN11421)
    - Using Oracle Enterprise Manager to Manage Your Own Private Cloud (session ID# GEN11423)
    - Extreme Database Management with the Latest Generation of Database Technology (session ID# CON9547)

    Oracle OpenWorld Music Festival: New this year is Oracle's first annual Oracle OpenWorld Music Festival, featuring some of today's breakthrough musicians from around the country and the world. It's five nights of back-to-back performances in the heart of San Francisco, free to registered attendees. See the lineup.

    Not heading to OpenWorld? Watch it live!

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Oracle buys Secerno

    - by Paulo Folgado
    Adds Heterogeneous Database Firewall to Oracle's Industry-leading Database Security Solutions

    Redwood Shores, CA - May 20, 2010

    News Facts:
    - Oracle has agreed to acquire Secerno, a provider of database firewall solutions for Oracle and non-Oracle databases.
    - Organizations require a comprehensive security solution which includes database firewall functionality to prevent sophisticated attacks from reaching databases.
    - Secerno's solution adds a critical defensive layer of security around databases, which blocks unauthorized activity in real time.
    - Secerno's products are expected to augment Oracle's industry-leading portfolio of database security solutions, including Oracle Advanced Security, Oracle Database Vault and Oracle Audit Vault, to further ensure data privacy, protect against threats, and enable regulatory compliance.
    - The combination of Oracle and Secerno underscores Oracle's commitment to provide customers with the most comprehensive and advanced security offering that helps reduce the costs and complexity of securing their information throughout the enterprise.
    - The transaction is expected to close before the end of June 2010. Financial details of the transaction were not disclosed.

    Supporting Quotes:

    "The Secerno acquisition is in direct response to increasing customer challenges around mitigating database security risk," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Secerno's database firewall product acts as a first line of defense against external threats and unauthorized internal access with a protective perimeter around Oracle and non-Oracle databases. Together, Oracle's complete set of database security solutions and Secerno's technology will provide customers with the ability to safeguard their critical business information."

    "As a provider of database firewall solutions that help customers safeguard their enterprise databases, Secerno is a natural addition to Oracle's industry-leading database security solutions," said Steve Hurn, CEO, Secerno. "Secerno has been providing enterprises and their IT security departments strong assurance that their databases are protected from attacks and breaches. We are excited to bring Secerno's domain expertise to Oracle, and ensure continuity and success for our current customers, partners and prospects."

    Support Resources: About Oracle and Secerno | General Presentation | FAQ | Customer Letter | Partner Letter

    Read the article

  • What is the difference between Row Level Security and RPD security?

    - by Jeffrey McDaniel
    Row level security (RLS) is a feature of the Oracle Enterprise Edition database. RLS enforces security policies at the database level, meaning any query executed against the database will respect the specific security applied through these policies. For P6 Reporting Database, these policies are applied during the ETL process. This gives database users the ability to access data with security enforcement even outside of the Oracle Business Intelligence application.

    RLS is a new feature of P6 Reporting Database starting in version 3.0. This allows for maximum security enforcement outside of the ETL and inside of Oracle Business Intelligence (Analysis and Dashboards). Policies are defined against the STAR tables based on Primavera project and resource security. RLS is the security method for Oracle Enterprise Edition customers. See previous blogs and the P6 Reporting Database Installation and Configuration guide for more on security specifics.

    To allow the use of Oracle Standard Edition database for those with a small database (as defined in the P6 Reporting Database Sizing and Planning guide), an RPD with non-RLS security is also available. RPD security is enforced by adding specific criteria to the physical and business layers of the RPD for those tables that contain projects and resources, and for those fields that are cost fields vs. non-cost fields. With the RPD security method, Oracle Business Intelligence enforces security.

    RLS is the default security method. Additional steps are required at installation and ETL run time for those Oracle Standard Edition customers who use RPD security. The RPD method of security enforcement existed from P6 Reporting Database 2.0/P6 Analytics 1.0 up until RLS became available in P6 Reporting Database 3.0/P6 Analytics 2.0.
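
    For context, row-level security policies in Oracle Enterprise Edition are registered through the DBMS_RLS package (in P6 Reporting Database this happens for you during the ETL); a generic sketch, with invented schema, table and function names:

        BEGIN
          DBMS_RLS.ADD_POLICY(
            object_schema   => 'STARUSER',
            object_name     => 'PROJECT_FACTS',
            policy_name     => 'PROJECT_SECURITY',
            function_schema => 'STARUSER',
            policy_function => 'PROJECT_SECURITY_FN',  -- returns the WHERE predicate to append
            statement_types => 'SELECT');
        END;
        /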

    Read the article

  • Clusterware 11gR2 - Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides many interesting ways to ensure high availability. Data Guard configurations, RAC configurations, or even both (as recommended for a Maximum Availability Architecture - MAA) are the most frequently found. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, one often thinks of expensive third-party cluster systems. Oracle Clusterware technology, which comes free with Oracle Database, is - in most people's minds - linked to Oracle RAC and is therefore rarely used to implement failover solutions. 11gR2 Clusterware - which is part of Oracle Grid Infrastructure - provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make "failover-able", and therefore to protect, almost every kind of application (from xclock to the more complex application server).

    In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (as I am using it as shared storage). If not, please have a look at the Oracle documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional; unfortunately, there can always be a typo or a mistake. This blog entry is not a course on the Clusterware framework. I just hope it will let you see how powerful it is and that it will give you the will to go further with it...

    Prerequisites

    - 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level. I used OEL 5 Update 5 with the Enterprise Kernel.
    - Shared storage (SAN). On my VirtualBox system, I used Openfiler to simulate the SAN.
    - Oracle 11gR2 Database (11.2.0.1)
    - Oracle 11gR2 Grid Infrastructure (11.2.0.1)

    Step 1 - Install the software

    - Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA)
    - Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the Voting Disk and OCR. Use SCAN.
    - Install the Oracle Database standalone binaries on both nodes.
    - Use asmca to check/mount the disk groups on the 2 nodes
    - Use dbca to create and configure a database on the primary node. Let's name it DB11G.
    - Copy the pfile and password file to the second node. Create the adump directory on the second node.

    Step 2 - Set up the resource to be protected

    After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (e.g. kill -9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command:

        crsctl status resource

    which shows an ora.db11g.db entry. Let's save the definition of this resource for future use:

        mkdir -p /crs/11.2.0/HA_scripts
        chown oracle:oinstall /crs/11.2.0/HA_scripts
        crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt

    Although very interesting, Oracle Restart is not cluster-aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions; we don't need it!

        srvctl stop database -d DB11G
        srvctl remove database -d DB11G

    Instead of it, we need to create a new resource of a more general type: cluster_resource.
    Here are the steps to achieve this:

    Create an action script: /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh

        #!/bin/bash
        export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        export ORACLE_SID=DB11G
        case $1 in
        'start')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          startup
        EOF
          RET=0
          ;;
        'stop')
          $ORACLE_HOME/bin/sqlplus /nolog <<EOF
          connect / as sysdba
          shutdown immediate
        EOF
          RET=0
          ;;
        'check')
          ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
          if [ $ok = 0 ]; then
            RET=1
          else
            RET=0
          fi
          ;;
        '*')
          RET=0
          ;;
        esac
        if [ $RET -eq 0 ]; then
          exit 0
        else
          exit 1
        fi

    This script must provide, at least, methods to start, stop and check the database. It is self-explanatory and contains nothing special. Just be aware that it is run as the oracle user (because of the ACL property - see later) and needs to know about the environment. It also needs to be present on every node of the cluster:

        chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        scp /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh oracle@OELCluster02:/crs/11.2.0/HA_scripts

    Create a new resource file, based on the information we got from the previous myResource.txt. Name it myNewResource.txt. myResource.txt is shown below. As we can see, it defines an ora.database.type resource named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource.

        NAME=ora.db11g.db
        TYPE=ora.database.type
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_FAILURE_TEMPLATE=
        ACTION_SCRIPT=
        ACTIVE_PLACEMENT=1
        AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=1
        CHECK_TIMEOUT=600
        CLUSTER_DATABASE=false
        DB_UNIQUE_NAME=DB11G
        DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%)
        DEGREE=1
        DESCRIPTION=Oracle Database resource
        ENABLED=1
        FAILOVER_DELAY=0
        FAILURE_INTERVAL=60
        FAILURE_THRESHOLD=1
        GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump
        GEN_USR_ORA_INST_NAME=
        GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G
        HOSTING_MEMBERS=
        INSTANCE_FAILOVER=0
        LOAD=1
        LOGGING_LEVEL=1
        MANAGEMENT_POLICY=AUTOMATIC
        NLS_LANG=
        NOT_RESTARTING_TEMPLATE=
        OFFLINE_CHECK_INTERVAL=0
        ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
        PLACEMENT=restricted
        PROFILE_CHANGE_TEMPLATE=
        RESTART_ATTEMPTS=2
        ROLE=PRIMARY
        SCRIPT_TIMEOUT=60
        SERVER_POOLS=ora.DB11G
        SPFILE=+DTA/DB11G/spfileDB11G.ora
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STATE_CHANGE_TEMPLATE=
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h
        USR_ORA_DB_NAME=DB11G
        USR_ORA_DOMAIN=haroland
        USR_ORA_ENV=
        USR_ORA_FLAGS=
        USR_ORA_INST_NAME=DB11G
        USR_ORA_OPEN_MODE=open
        USR_ORA_OPI=false
        USR_ORA_STOP_MODE=immediate
        VERSION=11.2.0.1.0

    I removed the database-type-related entries from myResource.txt and modified some others to produce the following myNewResource.txt.

    - Notice the NAME property, which should not have the ora. prefix.
    - Notice the TYPE property, which is not ora.database.type but cluster_resource.
    - Notice the definition of ACTION_SCRIPT.
    - Notice the HOSTING_MEMBERS property, which enumerates the members of the cluster (as returned by the olsnodes command).
        NAME=DB11G.db
        TYPE=cluster_resource
        DESCRIPTION=Oracle Database resource
        ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
        ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh
        PLACEMENT=restricted
        ACTIVE_PLACEMENT=0
        AUTO_START=restore
        CARDINALITY=1
        CHECK_INTERVAL=10
        DEGREE=1
        ENABLED=1
        HOSTING_MEMBERS=oelcluster01 oelcluster02
        LOGGING_LEVEL=1
        RESTART_ATTEMPTS=1
        START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg)
        START_TIMEOUT=600
        STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg)
        STOP_TIMEOUT=600
        UPTIME_THRESHOLD=1h

    Register the resource. Take care of the resource type: it needs to be a cluster_resource and not an ora.database.type resource (Oracle recommendation).

        crsctl add resource DB11G.db -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt

    Step 3 - Start the resource

        crsctl start resource DB11G.db

    This command launches the ACTION_SCRIPT with a start and a check parameter on the primary node of the cluster.

    Step 4 - Test this

    We will test the setup using 2 methods.

        crsctl relocate resource DB11G.db

    This command calls the ACTION_SCRIPT (on the two nodes) to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node but, this time, we can use a more "MS$-like" method: turn off the server on which the database is running. After a short delay, you should observe that the database is relocated to node 1.

    Conclusion

    Once the software is installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy:

    - Create an executable action script on every node of the cluster.
    - Create a resource file.
    - Create/register the resource with the OCR using the resource file.
    - Start the resource.

    This solution is a very interesting alternative to licensable third-party solutions.

    References

    - Clusterware 11gR2 documentation
    - Oracle Clusterware Resource Reference

    Gilles Haro
    Technical Expert - Core Technology, Oracle Consulting

    Read the article

  • SqlMetal, Sql Server 2008 database, table with HierarchyID: DAL cs file is created only sometimes?

    - by judek.mp
    I have 2 databases, each with a table containing a HierarchyID field. For one database I can get a DAL cs file; for the other database I cannot. HBus is the database I can get the DAL for:

        SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext

    This kicks out a file which works, but excludes the HierarchyID field for the table while including all other fields of that table. This is OK; I do not mind. The command kicks out a warning but still produces the file:

        Microsoft (R) Database Mapping Generator 2008 version 1.00.30729
        for Microsoft (R) .NET Framework version 3.5
        Copyright (C) Microsoft Corporation. All rights reserved.

        Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.HMsg' from SqlServer because the column's DbType is a user-defined type (UDT).
        Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.vwHMsg' from SqlServer because the column's DbType is a user-defined type (UDT).

    HMsg is a table with a HierarchyID field. I have another database, Elf, with almost the same setup, but there I get a warning and an error when using SqlMetal, and I do not get a DAL cs file. The cs file fails to appear on my disc :-(

        SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext

        Microsoft (R) Database Mapping Generator 2008 version 1.00.30729
        for Microsoft (R) .NET Framework version 3.5
        Copyright (C) Microsoft Corporation. All rights reserved.

        Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.EntityLink' from SqlServer because the column's DbType is a user-defined type (UDT).
        Error : Requested value 'ELF.SYS.HIERARCHYID' was not found.

    The fields are declared the same way: in the Elf DB, OrgNode [HierarchyID] null; in the HBus DB, OrgNode [HierarchyID] null. Both databases are in the same instance of SQL Server 2008, so HierarchyID is an inbuilt type; neither DB has a HierarchyID UDT.

    Cheers in advance for any replies...

    Read the article

  • ODBC: when is the best time to create my database?

    - by mawg
    I have a Windows program which generates PHP forms that will be filled in later. Those PHP forms will populate a database. It looks very much like MySQL, but I can't be certain, so let's call it ODBC. And, yes, it does have to be a Windows program. There will also be PHP forms which query the database - examine which tables and fields it contains and then generate forms which can be used to search the database (e.g. it finds a table with fields like "employee_name", etc. and generates a form which lets you search based on employee name).

    Let's call those design time and run time. At design time, some manager or IT guy or similar gets to define the nature of the database, and at run time (1) a worker fills in the form daily and (2) management can extract reports.

    Here's my question: given that the database is defined at "design time" (and populated at run time), where and how is it best to do so?

    1. I could use an ODBC interface from the Windows program, but I am having difficulty finding something good to work with in Delphi. Things like ADO and Firebird tend to expect you to already have a database and allow you to manipulate it, but I can find no code example of how to create a database and some tables, so...

    2. I could use DOS commands from Delphi in my Windows program. I just tried and got a response to mysql --version, but am not sure how interactive MySQL etc. are. That is, can I use a script file, or a very long stacked command with semicolons and returns separating the statements, e.g. 'CREATE DATABASE db; CREATE TABLE t1;'?

    3. Since the best way to work with databases seems to be PHP, perhaps my Windows program could spit out a PHP page which would, when run in a browser, create the database.

    I have tried to make this as uncomplicated as I can, but please feel free to ask questions. It may be that there are several valid ways, but there is probably one 'better' solution in terms of ease of implementation or maintenance.

    Better scratch option 3. What if the user later wants to come back and have the Windows program change the input form? It needs to update the database too.
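
    On question 2/3: most client APIs, PDO included, want one statement per call rather than a semicolon-stacked string, but a design-time script can simply issue the statements in sequence. A minimal sketch, assuming the database really is MySQL (credentials and names are placeholders):

        <?php
        $pdo = new PDO('mysql:host=localhost', 'admin', 'secret');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $pdo->exec('CREATE DATABASE IF NOT EXISTS db');
        $pdo->exec('USE db');
        $pdo->exec('CREATE TABLE IF NOT EXISTS t1 (
                        id INT AUTO_INCREMENT PRIMARY KEY,
                        employee_name VARCHAR(100) NOT NULL
                    )');

    The same approach covers the later requirement: when the Windows program changes the input form, it emits ALTER TABLE statements through the same kind of script.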

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday's blog post we answered what Big Data is. Today we will understand why and how the evolution of Big Data happened. Though the answer is simple, I would like to tell it in the form of a history lesson.

    Data in Flat File

    In earlier days data was stored in flat files, and there was no structure in a flat file. If any data had to be retrieved from the flat file, it was a project by itself. There was no possibility of retrieving data efficiently, and data integrity was just a term discussed without any modeling or structure around it. A database residing in a flat file had more issues than we would like to discuss in today's world. It was more like a nightmare when there was any data processing involved in the application. Though applications developed at that time were also not that advanced, the need for data was always there, and there was always a need for proper data management.

    Edgar F. Codd and 12 Rules

    Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of the database seemed to see discipline in the rules. The relational database was a promising land for all the unstructured database users. It brought in relationships between data as well as improved performance of data retrieval. The database world immediately saw a major transformation, and every single vendor and database user suddenly started to adopt the relational database model.

    Relational Database Management Systems

    Since Edgar F. Codd proposed the 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships between data. This was indeed a learning curve for many developers who had never worked before with database modeling. However, as time passed, pretty much everybody accepted the relational model, and products evolved which performed at their best within the boundaries of the RDBMS concepts. This was the best era for databases, and it gave the world extreme experts as well as some of the best products. The Entity-Relationship model also evolved at the same time. In software engineering, an Entity-Relationship model (ER model) is a data model for describing a database in an abstract way.

    Enormous Data Growth

    Well, everything was going fine with the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal. There was a race at times to make the developer's life easier with RDBMS management tools. Due to the extreme popularity and ease of use of these systems, pretty much all data was stored in RDBMS systems. New-age applications were built, and social media took the world by storm. Every organization was feeling pressure to provide the best experience for their users based on the data they had with them. While this was all going on, data kept growing in pretty much every organization and application.

    Data Warehousing

    The enormous data growth then presented a big challenge for organizations that wanted to build intelligent systems based on the data and provide a near-real-time, superior user experience to their customers. Various organizations immediately started building data warehousing solutions where the data was stored and processed. The trend of business intelligence became the need of every day. Data was received from transaction systems and was processed overnight to build intelligent reports from it. Though this was a great solution, it had its own set of challenges. The relational database model and data warehousing concepts were all built keeping traditional relational database modeling in mind, and they still had many challenges when unstructured data was present.

    Interesting Challenge

    Every organization had expertise in managing structured data, but the world had already changed to unstructured data. There was intelligence in the videos, photos, SMS, text, social media messages and various other data sources. All of these now needed to be brought to a single platform to build a uniform system which does what businesses need. The way we do business has also changed. There was a time when users only got the features the technology supported; now users ask for a feature and the technology is built to support it. The need for real-time intelligence from a fast-paced data flow is now becoming a necessity. A large amount (Volume) of a great difference (Variety) of high-speed data (Velocity): these are the properties of the data. The traditional database system has limits in resolving the challenges this new kind of data presents. Hence the need for Big Data science. We need innovation in how we handle and manage data. We need creative ways to capture data and present it to users. Big Data is reality!

    Tomorrow

    In tomorrow's blog post we will try to discuss the basics of Big Data architecture.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • alphanumeric and special character sorting

    - by Kaushik Gopal
    Hi ppl,

    I wanted to know the different standards of sorting. To be more specific, take the sample set below (note there are capitals, lowercase letters, special characters, null values and numbers here):

        A
        a
        3F
        Zx
        -
        1Ad
        NULL

    - How would the Oracle database sort this by default?
    - How would LINQ sort this by default?
    - How would DB2 sort this by default?
    - (The following may get even more vague:) How does the Windows platform sort this? (I mean, say you have a couple of filenames; by default how would this get treated in a name sort?)
    - How does the *nix platform sort this?

    Is there some sort of standard for alphanumeric/special-character sorting? The Windows operating system orders with numbers first, then alphabets. The Oracle database, however, treats alphabets first. I'm not sure about the *nix platform. It would be nice to have one place to know all these rules for the most common platforms (listed in the questions above). Would the gurus throw some light on this topic?

    Cheers,
    K
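
    For the Oracle part of the question, the ordering can at least be inspected and overridden per query; a small sketch (table and column names invented):

        -- Default ordering follows the session's NLS_SORT; with the common
        -- BINARY setting it is by character code, so digits sort before
        -- uppercase letters, and uppercase before lowercase.
        SELECT val FROM sample_set ORDER BY val;

        -- Explicit linguistic ordering for one query:
        SELECT val FROM sample_set ORDER BY NLSSORT(val, 'NLS_SORT=GENERIC_M');

        -- NULLs sort last by default in an ascending ORDER BY; this flips that:
        SELECT val FROM sample_set ORDER BY val NULLS FIRST;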

    Read the article

  • Complex Query on cassandra

    - by Sadiqur Rahman
    I heard about the Cassandra database engine a few days ago and am searching for good documentation on it. After studying Cassandra, I found that it is more scalable than other data engines. I also read about Amazon SimpleDB, but as SimpleDB has a limitation of 10 GB/table and Google Datastore is slower than Amazon SimpleDB, I prefer not to use them (Google Datastore, Amazon SimpleDB). So, to make our site scale, especially for high write rates with massive data, I would like to use Cassandra as our data engine. But before starting with Cassandra I am confused about how to handle complex data using it. I am giving you the MySQL database structure below; please read this and give me a good suggestion.

        Users table
            ID          primary
            email       unique
            FirstName
            LastName

        Category table
            ID          primary
            Parent
            Category

        Posts table
            ID          primary
            UID         index, foreign key linked to Users.ID
            CID         index, foreign key linked to Category.ID
            Title
            Post        index
            PubDate

        Comments table
            ID          primary
            UID         index, foreign key linked to Users.ID
            PID         index, foreign key linked to Posts.ID
            Comment

        User Group table
            ID          primary
            Name

        UserToGroup table (for many-to-many relations only)
            UID         foreign key linked to Users.ID
            GID         foreign key linked to Group.ID

    Finally, for your information, I would like to use the SimpleCassie PHP class: http://code.google.com/p/simpletools-php/ So it will be very helpful if you can give me an example using SimpleCassie.

    Read the article

  • NHibernate query with Projections.Cast to DateTime

    - by stiank81
    I'm experimenting with using a string for storing different kinds of data types in a database. When I do queries I need to cast the strings to the right type in the query itself. I'm using .NET with NHibernate, and was glad to learn that functionality for this exists. Consider the simple class:

        public class Foo
        {
            public string Text { get; set; }
        }

    I successfully use Projections.Cast to cast to numeric values; e.g. the following query correctly returns all Foos with an integer stored as text between 1-10:

        var result = Session.CreateCriteria<Foo>()
            .Add(Restrictions.Between(
                Projections.Cast(NHibernateUtil.Int32, Projections.Property("Text")), 1, 10))
            .List<Foo>();

    Now if I try using this for DateTime I'm not able to make it work no matter what I try. Why?!

        var date = new DateTime(2010, 5, 21, 11, 30, 00);
        AddFooToDb(new Foo { Text = date.ToString() }); // Will add it to the database...

        var result = Session
            .CreateCriteria<Foo>()
            .Add(Restrictions.Eq(
                Projections.Cast(NHibernateUtil.DateTime, Projections.Property("Text")), date))
            .List<Foo>();
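
    A hedged explanation to test: DateTime.ToString() with no arguments formats using the current culture, and the database-side cast may not parse that representation. Storing a culture-invariant, sortable format is a common workaround (a sketch only, not verified against this mapping):

        using System.Globalization;

        // "s" yields ISO 8601 ("2010-05-21T11:30:00"), which SQL-side casts
        // parse predictably regardless of the server's language settings.
        var date = new DateTime(2010, 5, 21, 11, 30, 00);
        AddFooToDb(new Foo { Text = date.ToString("s", CultureInfo.InvariantCulture) });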

    Read the article

  • Drop Down list null value not working for selected text c#?

    - by Tamara JQ
    Hi guys,

    Basically I have a drop-down list which is populated with text and values depending on the selected index of a radio button list. The code is as follows:

        protected void RBLGender_SelectedIndexChanged(object sender, EventArgs e)
        {
            DDLTrous.Items.Clear();
            DDLShoes.Items.Clear();

            if (RBLGender.SelectedValue.Equals("Male"))
            {
                String[] MaleTrouserText = { "[N/A]", "28\"", "30\"", "32\"", "34\"", "36\"", "38\"", "40\"", "42\"", "44\"", "48\"" };
                String[] MaleTrouserValue = { null, "28\"", "30\"", "32\"", "34\"", "36\"", "38\"", "40\"", "42\"", "44\"", "48\"" };
                String[] MaleShoeText = { "[N/A]", "38M", "39M", "40M", "41M", "42M", "43M", "44M", "45M", "46M", "47M" };
                String[] MaleShoeValue = { null, "38M", "39M", "40M", "41M", "42M", "43M", "44M", "45M", "46M", "47M" };

                for (int i = 0; i < MaleTrouserText.Length; i++)
                {
                    ListItem MaleList = new ListItem(MaleTrouserText[i], MaleTrouserValue[i]);
                    DDLTrous.Items.Add(MaleList);
                }
                for (int i = 0; i < MaleShoeText.Length; i++)
                {
                    ListItem MaleShoeList = new ListItem(MaleShoeText[i], MaleShoeValue[i]);
                    DDLShoes.Items.Add(MaleShoeList);
                }
            }
            else if (RBLGender.SelectedValue.Equals("Female")) // ...

    When the text '[N/A]' is selected I want the value NULL to be inserted into the database. At present, when '[N/A]' is selected the value entered into the database is [N/A]. Does anyone know why? Thanks in advance.
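
    A likely cause, offered as a hedged guess: when a ListItem is built with a null value, its Value property falls back to returning Text, so "[N/A]" gets posted as the value. Using an empty string for the placeholder and translating it at insert time avoids that (the parameter name below is invented):

        // Empty string rather than null for the placeholder entry
        String[] MaleTrouserValue = { "", "28\"", "30\"", "32\"", /* ... */ };

        // At insert time, map the empty selection to a database NULL
        cmd.Parameters.AddWithValue("@trouserSize",
            String.IsNullOrEmpty(DDLTrous.SelectedValue)
                ? (object)DBNull.Value
                : DDLTrous.SelectedValue);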

    Read the article

  • doublechecking: no db-wide 'unicode switch' for SQL Server in the foreseeable future, i.e. like Oracle has?

    - by user72150
    Hi all,

    I believe I know the answer to this question, but wanted to confirm.

    Question: Does SQL Server (or will it in the foreseeable future) offer a database-wide "Unicode switch" which says "store all characters in Unicode (UTF-16, UCS-2, etc.)", i.e. like Oracle?

    The context: Our application has provided "CJK" (Chinese-Japanese-Korean) support for years, using Oracle as the DB store. Recently folks have been asking for the same support in SQL Server. We store our DB schema definition in XML and generate the vendor-specific definitions (Oracle, SQL Server) using vendor-specific XSL, so we can make the change easily. The problem is upgrades: generated scripts would need to change the column types for 100+ columns from varchar to nvarchar, varchar(max) to nvarchar(max), etc. These changes require dropping and recreating indexes and foreign keys if any indexes/FKs exist on the column. Non-trivial. Risky. A DB-wide character encoding would eliminate programming changes for us (i.e. we would not need to change the column types from varchar to nvarchar; SQL Server would correctly store Unicode data in varchar columns). I had thought that eventually SQL Server would "see the light" and allow storing Unicode in varchar/clob columns. Evidently not yet.

    Recap: So, just to triple-check: does MSSQL offer a database-wide switch for character encoding? Will it in SQL 2008 R3? Or 2010?

    Thanks,
    Bill

    Read the article

  • Application passwords and SQLite security

    - by Bryan
    I have been searching Google for information regarding application passwords and SQLite security for some time, and nothing I have found has really answered my questions. Here is what I am trying to figure out:

    1) My application is going to have an optional password activity that will be called when the application is first opened. My questions are: a) if I store the password via an Android preference or SQLite database, how can I ensure security and privacy for the password, and b) how should password recovery be handled?

    Regarding b) from above, I have thought about requiring an email address when the password feature is enabled, plus a password hint question for use when requesting recovery. Upon successfully answering the hint question, the password is then emailed to the address that was submitted. I am not completely confident in the security and privacy of the email method, especially if the email is sent while the user is connected to an open, public wireless network.

    2) My application will be using an SQLite database, which will be stored on the SD card if the user has one. Regardless of whether it is stored on the phone or the SD card, what options do I have for data encryption, and how does that affect application performance?

    Thanks in advance for the time taken to answer these questions. I think there may be other developers struggling with the same concerns.
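
    On 1a), one common approach is to never store the password at all, only a salted hash - which also changes 1b), since recovery then has to mean "reset" rather than "email the password back". A sketch using only java.security (class and method names invented):

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public final class PasswordUtil {

            // Generate once when the password feature is enabled;
            // store the salt alongside the hash.
            public static byte[] newSalt() {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                return salt;
            }

            // Store hash(password, salt); at unlock time, hash the
            // user's entry with the stored salt and compare.
            public static byte[] hash(String password, byte[] salt) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);
                return md.digest(password.getBytes("UTF-8"));
            }
        }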

    Read the article

  • Data Access Layer, Best Practices

    - by labratmatt
    I'm looking for input on the best way to refactor the data access layer (DAL) in my PHP-based web app. I follow an MVC pattern: PHP/HTML/CSS/etc. views on the front end, PHP controllers/services in the middle, and a PHP DAL sitting on top of a relational database in the model. Pretty standard stuff. Things are working fine, but my DAL is getting large (code smell?) and becoming a bit unwieldy. My DAL contains almost all of the logic to interface with my database and is full of functions that look like this:

        function getUser($user_id)
        {
            $statement = "select id, name from users where user_id=:user_id";
            // PDO builds the statement and fetches the results as an array
            return $array_of_results_generated_by_PDO_fetch_method;
        }

    Notes:

    - The logic in my controller only interacts with the model using functions like the above in the DAL
    - I am not using a framework (I'm of the opinion that PHP is a templating language and there's no need to inject complexity via a framework)
    - I generally use PHP as a procedural language and tend to shy away from its OOP approach (I enjoy OOP development but prefer to keep that complexity out of PHP)

    What approaches have you taken when your DAL has reached this point? Do I have a fundamental design problem? Do I simply need to chop my DAL into a number of smaller files (logically divide it)? Thanks.
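
    One way to chop it that keeps the procedural style (sketched with invented file and helper names): a single generic fetch helper in one include, and the entity-specific one-liners split into one file per table:

        <?php
        // dal/core.php - shared helper; $pdo is assumed to be a configured PDO handle.
        function db_fetch_all(PDO $pdo, $sql, array $params = array())
        {
            $stmt = $pdo->prepare($sql);
            $stmt->execute($params);
            return $stmt->fetchAll(PDO::FETCH_ASSOC);
        }

        // dal/users.php - entity functions shrink to the query text itself.
        function getUser(PDO $pdo, $user_id)
        {
            return db_fetch_all($pdo,
                'select id, name from users where user_id = :user_id',
                array(':user_id' => $user_id));
        }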

    Read the article
