Search Results

Search found 40933 results on 1638 pages for 'database tools'.


  • Database with great read performance

    - by mscava
    I have 10 tables, four of which each contain up to a million rows. All values are inserted at once; afterwards I only read the data, many times over. I am looking for a database that performs very well for reads: selects, joins, and other read operations. What is the most recommended option?

    Read the article

  • Good resource for studying Database High Availability techniques

    - by Invincible
    Hello. Can anybody suggest a good resource or book on database high-availability techniques? Also on high availability of system software, such as intrusion prevention systems or web servers. I consider high availability a global term that covers clustering, cloud computing, replication, replica management, and distributed synchronization within a cluster. Thanks in advance!

    Read the article

  • Is it possible to download a large database using a MySQL query

    - by Rose
    I am downloading files from the server using WinSCP. Is it possible to write a query that downloads a large database, or is there some other method? I have tried the code below, but I am not able to get the whole database structure:

        <?php
        if (file_exists('backup_sql/my_backup.zip')) {
            unlink('backup_sql/my_backup.zip');
        }

        $tables = '*';
        $host = 'MY HOST NAME';
        $user = 'MY_USERNAME';
        $pass = 'MYPASSWORD';
        $name = 'MY_DB_NAME';

        $link = mysql_connect($host, $user, $pass);
        mysql_select_db($name, $link);

        // get all of the tables
        if ($tables == '*') {
            $tables = array();
            $result = mysql_query('SHOW TABLES');
            while ($row = mysql_fetch_row($result)) {
                $tables[] = $row[0];
            }
        } else {
            $tables = is_array($tables) ? $tables : explode(',', $tables);
        }

        $return = '';
        // cycle through each table: emit its CREATE TABLE, then its rows
        foreach ($tables as $table) {
            $result = mysql_query('SELECT * FROM ' . $table);
            $num_fields = mysql_num_fields($result);

            //$return .= 'DROP TABLE ' . $table . ';';
            $row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE ' . $table));
            $return .= "\n\n" . $row2[1] . ";\n\n";

            // walk the result set once (the original wrapped this in a
            // redundant for loop over $num_fields, which was a bug)
            while ($row = mysql_fetch_row($result)) {
                $return .= 'INSERT INTO ' . $table . ' VALUES(';
                for ($j = 0; $j < $num_fields; $j++) {
                    $row[$j] = addslashes($row[$j]);
                    //$row[$j] = ereg_replace("\n", "\\n", $row[$j]);
                    if (isset($row[$j])) {
                        $return .= '"' . $row[$j] . '"';
                    } else {
                        $return .= '""';
                    }
                    if ($j < ($num_fields - 1)) {
                        $return .= ',';
                    }
                }
                $return .= ");\n";
            }
            $return .= "\n\n\n";
        }

        $rand_var = time();
        $files_to_zip = array(
            "'backup_sql/db-backup-'.$rand_var.'.sql'",
        );
        $name = 'db-backup-' . $rand_var . '.sql';
        $data = $return;
        ?>

    Can anyone please help me? Thank you.
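
    For capturing a whole database (structure plus data), the standard tool is mysqldump rather than a hand-rolled script; it emits the CREATE TABLE statements and INSERTs in one .sql file, which can then be fetched with WinSCP like any other file. A minimal sketch driving it from Python, assuming shell access and reusing the placeholder credentials from the question:

        import subprocess

        # Placeholder credentials from the question; substitute real values.
        subprocess.run(
            ["mysqldump",
             "--host=MY_HOST_NAME",
             "--user=MY_USERNAME",
             "--password=MYPASSWORD",
             "--result-file=backup_sql/db-backup.sql",  # schema + data in one file
             "MY_DB_NAME"],
            check=True,  # raise if mysqldump fails
        )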

    Read the article

  • Database Design Question

    - by Soo
    OK, so I have a user table and want to define groups of users. The best solution I have for this is to create three database tables, as follows:

    UserTable: user_id, user_name
    UserGroupLink: group_id, member_id
    GroupInfo: group_id, group_name

    This keeps the member and group information separate. That is just my way of thinking; is there a better way to do this? Also, what is a good naming convention for tables that link two other tables?
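
    The three-table layout described above is the standard many-to-many pattern, and a common convention is to name the link (junction) table after the two tables it joins (e.g. user_group or users_groups). A minimal sketch using Python's built-in sqlite3; the table and column names here are illustrative, not prescriptive:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE user (
            user_id   INTEGER PRIMARY KEY,
            user_name TEXT NOT NULL
        );
        CREATE TABLE user_group (
            group_id   INTEGER PRIMARY KEY,
            group_name TEXT NOT NULL
        );
        -- junction table, named after the two tables it links;
        -- the composite primary key prevents duplicate memberships
        CREATE TABLE user_group_membership (
            group_id INTEGER NOT NULL REFERENCES user_group(group_id),
            user_id  INTEGER NOT NULL REFERENCES user(user_id),
            PRIMARY KEY (group_id, user_id)
        );
        """)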

    Read the article

  • What should every developer know about databases?

    - by Aaronaught
    Whether we like it or not, many if not most of us developers either regularly work with databases or may have to work with one someday. And considering the amount of misuse and abuse in the wild, and the volume of database-related questions that come up every day, it's fair to say that there are certain concepts that developers should know - even if they don't design or work with databases today. So: what are the important concepts that developers and other software professionals ought to know about databases?

    Guidelines for responses:

    - Keep your list short. One concept per answer is best.
    - Be specific. "Data modelling" may be an important skill, but what does that mean precisely?
    - Explain your rationale. Why is your concept important? Don't just say "use indexes." Don't fall into "best practices." Convince your audience to go learn more.
    - Upvote answers you agree with.
    - Read other people's answers first. One high-ranked answer is a more effective statement than two low-ranked ones. If you have more to add, either add a comment or reference the original.
    - Don't downvote something just because it doesn't apply to you personally. We all work in different domains.

    The objective here is to provide direction for database novices to gain a well-founded, well-rounded understanding of database design and database-driven development, not to compete for the title of most-important.

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The background: My group has four SQL Server databases: Production, UAT, Test, and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an admin who promotes to UAT. After successful user testing, the same admin promotes to Production.

    The problem: The entire process is awkward, for a few reasons. Each person must manually track their changes: if I update, add, or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain, and it's a waste of the testers' time anyway. Many of the changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.

    The question: People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure differs, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.
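
    Schema-compare tools do exactly this diff-and-script workflow; in the SQL Server 2008 era the usual choices were Visual Studio Team System Database Edition (Schema Compare) and Red Gate SQL Compare, both of which generate deployable change scripts. To illustrate just the underlying idea, here is a rough sketch that diffs column metadata between two databases via INFORMATION_SCHEMA; the connection strings are hypothetical, and real tools cover far more (indexes, constraints, procs, dependencies):

        import pyodbc

        # Hypothetical connection strings for the Dev and Test copies.
        DEV  = "DRIVER={SQL Server};SERVER=devhost;DATABASE=DataMart;Trusted_Connection=yes"
        TEST = "DRIVER={SQL Server};SERVER=testhost;DATABASE=DataMart;Trusted_Connection=yes"

        def columns(conn_str):
            # Map (table, column) -> data type for every user table.
            with pyodbc.connect(conn_str) as cn:
                rows = cn.execute(
                    "SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE "
                    "FROM INFORMATION_SCHEMA.COLUMNS"
                ).fetchall()
            return {(r.TABLE_NAME, r.COLUMN_NAME): r.DATA_TYPE for r in rows}

        dev, test = columns(DEV), columns(TEST)

        # Anything added, dropped, or retyped between the two environments.
        for key in sorted(set(dev) | set(test)):
            if dev.get(key) != test.get(key):
                print(key, "dev:", dev.get(key), "test:", test.get(key))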

    Read the article

  • Compressing a database to a single file?

    - by Assimilater
    Hi all. In my contact manager program I have been storing information by reading and writing a comma-delimited file for each individual contact, and a separate file for each note, and I'm wondering how I could effectively shrink them all into one file. I have attempted to use the data-entry tools in the Visual Studio toolbox and a template class, though I have never quite figured out how to use them. What would be especially convenient is if I could store data as my own type IOwner (a class I created) as opposed to strings. I'd also need to figure out how to tell the program what to do when a file is opened (I've noticed in the properties how to associate a file type with the program, though I'm not sure how to tell it what to do when one is opened). Edit: let me rephrase the question. I have a class IContact with various properties, some of which are lists of other class objects, and a public list of IContact. Can I write Contacts as a List(Of IContact) to a file, as opposed to a bunch of strings? Second part: I have associated .cms files with my program, but when a user opens such a file, what code should the program run to deal with it? The file contains data that the program needs to read; how do I tell the program to read a file when it is launched because that file was opened? Does this make the question clearer?
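
    What the asker wants here (write a whole object list to one file and read it back as objects, not strings) is what .NET serializers such as XmlSerializer or BinaryFormatter do for a List(Of IContact). As a language-agnostic sketch of the shape of the idea, here is the Python equivalent with pickle; the Contact class is a hypothetical stand-in for IContact:

        import pickle

        class Contact:  # hypothetical stand-in for IContact
            def __init__(self, first, last, notes):
                self.first, self.last, self.notes = first, last, notes

        contacts = [Contact("Ada", "Lovelace", ["met at conference"]),
                    Contact("Alan", "Turing", [])]

        # Write the entire list to a single file...
        with open("contacts.cms", "wb") as f:
            pickle.dump(contacts, f)

        # ...and read it back as objects, not strings.
        with open("contacts.cms", "rb") as f:
            restored = pickle.load(f)
        print(restored[0].first)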

    Read the article

  • SQL SERVER – FIX : ERROR : 4214 BACKUP LOG cannot be performed because there is no current database backup

    - by pinaldave
    I recently got the following email from one of my readers.

    Hi Pinal,

    Even though my database is in full recovery mode, when I try to take a log backup I am getting the following error: BACKUP LOG cannot be performed because there is no current database backup. (Microsoft.SqlServer.Smo) How do I fix it?

    Thanks, [name and email removed as requested]

    Solution / Fix: This error can happen when you have never taken a full backup of your database and you attempt to back up the log only. Take a full backup once, then take the log backup. If the name of your database is MyTestDB, follow this procedure:

        BACKUP DATABASE [MyTestDB]
        TO DISK = N'C:\MyTestDB.bak'
        GO

        BACKUP LOG [MyTestDB]
        TO DISK = N'C:\MyTestDB.bak'
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • CakePHP Missing Database Table Error

    - by BRADINO
    I am baking a new project management application at work and added a couple of new tables to the database today. When I went into the console to bake the new models, they were not in the list:

        php /path/cake/console/cake.php bake all -app /path/app/

    So I manually typed in the model name, and I got a "missing database table" error for the model. I checked and double-checked, and the database table was named properly. It turns out that some files inside the /app/tmp/cache/ folder were causing Cake not to recognize that I had added new tables to my database. Once I deleted the cache files, Cake instantly recognized my new database tables and I was baking away!

        rm -Rf /path/app/tmp/cache/cake*

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal axis in arbitrary 'tick' time units. Click the image to see it full sized.

    Pooled connections: Beginning with the top left graph, at tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred users added at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. The top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool is initially at its default size 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. Looking at the bottom right, you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. You can see the shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users will also need a few kilobytes, but this is almost too small to see on the graph.

    Non-pooled connections: Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as for the earlier DRCP case. The database is handling this load. But look at the number of server processes on the top right graph. There is now a one-to-one correspondence between Apache/PHP processes and DB server processes: each PHP process has one DB server process dedicated to it, hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load.

    Summary: Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated for the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.

    Read the article

  • What does your Python development workbench look like?

    - by Fabian Fagerholm
    First, a scene-setter to this question: several questions on this site have to do with selection and comparison of Python IDEs. (The top one currently is "What IDE to use for Python".) In the answers you can see that many Python programmers use simple text editors, many use sophisticated text editors, and many use a variety of what I would call "actual" integrated development environments – a single program in which all development is done: managing project files, interfacing with a version control system, writing code, refactoring code, making build configurations, writing and executing tests, "drawing" GUIs, and so on. Through its GUI, an IDE supports different kinds of workflows to accomplish different tasks during the journey of writing a program or making changes to an existing one. The exact features vary, but a good IDE has sensible workflows and automates things to let the programmer concentrate on the creative parts of writing software.

    The non-IDE way of writing large programs relies on a collection of tools that are typically single-purpose; they do "one thing well" as per the Unix philosophy. This "non-integrated development environment" can be thought of as a workbench, supported by the OS and generic interaction through a text or graphical shell. The programmer creates workflows in their mind (or in a wiki?), automates parts, and builds a personal workbench, often gradually and as experience accumulates. The learning curve is often steeper than with an IDE, but those who have taken the time to do this can often claim deeper understanding of their tools. (Whether they are better programmers is not part of this question.) With advanced editor-platforms like Emacs, the pieces can be integrated into a whole, while with simpler editors like gedit or TextMate, the shell/terminal is typically the "command center" to drive the workbench. Sometimes people extend an existing IDE to suit their needs.

    What does your Python development workbench look like? What workflows have you developed, and how do they work?

    For the first question, please give:
    - the main "driving" program – the one that you use to control the rest (Emacs, shell, etc.)
    - the "small tools" – the programs you reach for when doing different tasks

    For the second question, please describe:
    - what the goal of the workflow is (e.g. "set up a new project", "doing initial code design", "adding a feature", or "executing tests")
    - what steps are in the workflow and what commands you run for each step (e.g. in the shell or in Emacs)

    Also, please describe the context of your work: do you write small one-off scripts, do you do web development (with what framework?), do you write data-munching applications (what kind of data and for what purpose), do you do scientific computing, desktop apps, or something else? Note: a good answer addresses the perspectives above – it doesn't just list a bunch of tools. It will typically be a long answer, not a short one, and will take some thinking to produce; maybe even observing yourself working.

    Read the article

  • Taking the fear out of a Cloud initiative through the use of security tools

    - by user736511
    Typical employees, constituents, and business owners interact with online services at a level where their knowledge of back-end systems is low, and most of the time there is no interest in knowing the systems' architecture. Most application administrators, while partially responsible for these systems' upkeep, have very few interactions with them, at least at an operational, platform level. Of greatest interest to these groups is the consistent, reliable, and manageable operation of the interfaces with which they communicate. Introducing the "Cloud" topic into any evolving architecture automatically raises concerns for data and identity security, simply because of the perception that without owning the silicon, enterprises are not able to manage its content. But is this really true?

    In the majority of traditional architectures, data and the applications that access it are physically distant from the organization that owns them. They may reside in a shared data center, or in a geographically convenient location that spans large organizations' connectivity capabilities. In the end, very often, the model of a "traditional" architecture is fairly close to the "new" Cloud architecture. The most notable difference is that, by nature, a Cloud setup uses security as a core function, and not as a necessary add-on. Therefore, following best practices, one can say that data can be safer in the Cloud than in traditional, stove-piped environments where data access is segmented and difficult to audit. The caveat is, of course, what "best practices" consist of, and here is where Oracle's security tools are perfectly suited for the task. Since Oracle's model is to support very large organizations, it is fundamentally concerned with distributed applications, databases, and their security, and the related Identity Management products and DB Security options reflect that concept. In the end, consumers of applications and their data are served more safely in a controlled Cloud environment, while realizing the many cost savings associated with it. Having very fast resources to serve them (such as the Exa* platform) makes the concept even more attractive. Finally, if a Cloud strategy does not seem feasible, consider the pros and cons of a traditional vs. a Cloud architecture. Using the exact same criteria and business goals/traditions, and with Oracle's technology, you might be hard pressed to justify maintaining the technical status quo on security alone.

    For additional information please visit Oracle's Cloud Security page at: http://www.oracle.com/us/technologies/cloud/cloud-security-428855.html

    Read the article

  • Apache cannot find MySQL database modules

    - by user809857
    I've created a simple Django project and set up a MySQL database. The project just creates an entry in the database, and it works fine when I use the built-in development server provided by Django (runserver). But when I deployed the project on Apache with mod_wsgi (Ubuntu server), Django could not find 'books', which in this case is a table in my database. The MySQL database I use under runserver and under Apache is exactly the same. I also rebuilt the database using Django's sqlall, validate, and syncdb, but I still get the error. What could be wrong with what I'm doing? Thanks
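
    A frequent cause of this symptom (works under runserver, fails under Apache) is the WSGI script putting a different project, settings module, or database path on Apache's import path than the one the dev server uses. A minimal sketch of a Django 1.x-era WSGI file, with hypothetical paths and module names:

        import os
        import sys

        # Make sure Apache sees the same project the dev server uses
        # (hypothetical locations; adjust to the real project layout).
        sys.path.append('/srv/www/myproject')
        sys.path.append('/srv/www')

        # Point at the same settings (and therefore the same MySQL
        # database) that `manage.py runserver` uses.
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()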

    Read the article

  • Oracle Database 12c

    - by jgelhaus
    What's your pain? The cost of IT and downtime? The complexity of IT? Poor database application performance? All of the above and more? These are real challenges caused by today's demands on data centers and their IT teams. Oracle Database 12c provides a breakthrough architecture that makes it easy to deploy and manage databases in the cloud. Oracle partners will leverage Database 12c innovations to provide additional options for Oracle customer success and ROI. Download Oracle Database 12c and plug into the cloud! Join us for our July 10th webcast to learn about this database breakthrough.

    Read the article

  • Lower Your Application Infrastructure Costs w/Oracle Database 11g

    - by john.brust
    Oracle Database 11g is designed to support enterprise applications, including Oracle E-Business Suite, Oracle PeopleSoft, and Oracle Siebel. Every Oracle customer can benefit from the performance, reliability, and security that Oracle Database 11g brings to these applications. Plus, Oracle Database 11g helps you drive down your IT infrastructure costs. Join us next Friday for a webcast conversation with database expert Mark Townsend, Vice President of Oracle's Server Technology Division, to learn how you can benefit from running your applications on Oracle Database 11g. At the end of the presentation, we'll open up for live Q&A for approximately 30 minutes. Register now for our Friday, April 23rd, 2010, 9:30am PT | 12:30pm ET live webcast.

    Read the article

  • How should calculations be handled in a document database

    - by Morten
    OK, so I have a program that basically logs errors into a NoSQL database. Right now there is just a single model for an error, and it is stored as a document in the NoSQL database. Basically I want to summarize across different errors and produce a summary of the "types" of errors that occurred. Traditionally in a SQL database this aggregation would be done with groupings, sums, and averages, but in a NoSQL database I assume I need to use MapReduce. My current model seems unfit for the task; how should I change the way I store "models" in order to make statistical analysis easy? Would a NoSQL database even be the right tool for this type of problem? I'm storing things in Google App Engine's BigTable, so there are some limitations to think about as well.
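
    Since BigTable-era App Engine has no GROUP BY, the usual answers were to precompute summaries at write time or run a MapReduce job over the documents. As a minimal illustration of the map/shuffle/reduce shape only, in plain Python with hypothetical error documents carrying a 'type' field:

        from collections import defaultdict

        # Hypothetical error documents as fetched from the datastore.
        errors = [
            {"type": "Timeout",  "duration_ms": 5000},
            {"type": "KeyError", "duration_ms": 12},
            {"type": "Timeout",  "duration_ms": 7300},
        ]

        # Map phase: emit (key, value) pairs, one per document.
        def map_error(doc):
            yield doc["type"], doc["duration_ms"]

        # Shuffle: group emitted values by key.
        groups = defaultdict(list)
        for doc in errors:
            for key, value in map_error(doc):
                groups[key].append(value)

        # Reduce phase: aggregate each group (count and average here).
        def reduce_errors(key, values):
            return {"type": key, "count": len(values),
                    "avg_duration_ms": sum(values) / len(values)}

        summary = [reduce_errors(k, v) for k, v in groups.items()]
        print(summary)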

    Read the article

  • Podcast: Dell Perot Systems Relies on Oracle In-Memory Database Cache

    - by john.brust
    Recently we spoke with Bill Binko, Technology Consultant at Dell Perot Systems, about a high-volume web-based content delivery system they implemented for a client with Oracle In-Memory Database Cache. Their client needed to respond to ~1 billion hits (web requests) per day, but hadn't been able to support this load. Oracle In-Memory Database Cache allowed multiple, complicated queries to take place without ever hitting the disk, providing sub-millisecond response times and the ability to manage much higher volumes of data.

    Old system: an old SQL Server database, more than 300 servers, difficult to maintain.

    New system: one Oracle Database 11g instance on multiple Oracle RAC nodes, backed up by Oracle Data Guard, with Oracle In-Memory Database Cache cutting query response times by 10x.

    Listen to the podcast.

    Read the article

  • 9/18 Live Webcast: Three Compelling Reasons to Upgrade to Oracle Database 11g

    - by jgelhaus
    Webcast: Three Compelling Reasons to Upgrade to Oracle Database 11g
    Date: Tuesday, September 18, 2012
    Time: 10 a.m. PT / 1 p.m. ET

    If you or your organization is still working with Oracle Database 10g or an even older version, now is the time to upgrade. Oracle Database 11g offers a wide variety of advantages to enhance your operation. Join us for this live Webcast and learn about what you're missing: the business, operational, and technical benefits. With Oracle Database 11g, you can:

    - Upgrade with zero downtime
    - Improve application performance and database security
    - Reduce the amount of storage required
    - Save time and money

    Register today here

    Read the article

  • How to write high-quality GUI software with Qt?

    - by Opetmar
    I want to write a project using the Qt library. After learning the library and mastering it, how should I start, and what other libraries or topics should I learn? Are there any tools that will help me during development, or tools that will help the end user install and use the software? What things should I be aware of, and what should I avoid so that the application runs efficiently?

    Read the article

  • Building ASP.NET Web Forms to Use a MySQL Database

    The MySQL database is the leading open source database, which means it can be used for free without obtaining or paying for a license. Some ASP.NET 3.5 hosting packages let you use a MySQL database, since it can be a cheaper hosting alternative compared to using a MS SQL database. However, things can be a bit complicated when querying a MySQL database in an ASP.NET environment.

    Read the article

  • Eliminating Downtime During Database Upgrades: A Customer Case Study

    - by irem.radzik(at)oracle.com
    Planned outages, such as database, OS, and hardware upgrades and migrations, are a fact of life. Even though they are "planned", and many of them are performed during off-business hours, they can still interrupt operations, especially for global operations and online businesses. For this reason many IT organizations postpone these critical infrastructure improvement projects, which in turn delays advances in business operations. This week, on Thursday, January 13th, we will host a free webcast on this topic featuring Oracle GoldenGate customer Atmos Energy. Atmos Energy implemented Oracle GoldenGate to eliminate downtime during their database upgrade from Oracle Database 8.1.7 to Oracle Database 11.1.0.7. Jos Francis, Lead DBA for Atmos, and Ronald Nedd, Sr. DBA for Atmos, will be presenting their database upgrade project and their solution architecture. Join us at this live webcast and hear from our customer and product management how to eliminate planned outages with Oracle GoldenGate's real-time, heterogeneous data replication capabilities.

    Read the article

  • Should custom data elements be stored as XML or database entries?

    - by meteorainer
    There are a ton of questions like this, but they are mostly very generalized, so I'd like to get some views on my specific usage.

    General: I'm building a new project on my own in Django, focused on small businesses. I'd like to make it somewhat customizable for my clients, so they can add fields to their customer/invoice/employee/whatever items. My models would reflect boilerplate fields that every such model might have, for example: first name, last name, email, address, and so on. Then my users would be able to add fields for whatever extra data they might like. I'm still in the design phase and am building this myself, so I've got some options.

    Option 1 (what I'm working on now): the 'extra items' models have a foreign key to the generic model (Customer and CustomerDataPoint, for example). All values in the extra data points are stored as char and will be coerced/parsed into their actual format at view-building time. In this build the user could theoretically add whatever values they want, group them in sets, and generally access them at will from the views relevant to that model.
    Pros: low storage overhead, very extensible, searchable.
    Cons: more SQL joins.

    Option 2: use some type of markup, or key-value pairing, stored directly on the boilerplate models. This could be any low-overhead format, whether XML or literal strings. The view and form generated from the stored data would take control of validation and reorganizing on updates, then just dump the data back in as a char/blob/whatever. Something like:

    <datapoint type='char' value='something' required='true' />
    <datapoint type='date' value='01/01/2001' required='false' />
    ...

    Pros: no joins needed; updates for validation and views are decoupled from the data.
    Cons: much higher storage overhead; limited capacity to search on the extra content.

    So my question is: if you didn't live within the constraints imposed by your company, which method would you use, and why? What benefits or pitfalls do you see down the road for me as a small business trying to help other small businesses? Just to clarify, I am not asking about custom UI elements; those I can handle with forms and template snippets. I'm asking primarily about data storage and retrieval of non-standardized data relative to a boilerplate model.
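
    Option 1 is the classic entity-attribute-value (EAV) pattern. A minimal sketch of how it might look as Django models, to live in an app's models.py; the names match the question, while the value_type field is an assumption about how the char values get coerced later:

        from django.db import models

        class Customer(models.Model):
            # Boilerplate fields every deployment shares.
            first_name = models.CharField(max_length=100)
            last_name = models.CharField(max_length=100)
            email = models.EmailField(blank=True)

        class CustomerDataPoint(models.Model):
            # One row per custom field value, keyed to the owning record.
            customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
            name = models.CharField(max_length=100)    # e.g. 'loyalty_tier'
            value = models.CharField(max_length=255)   # stored as char, parsed in the view
            value_type = models.CharField(max_length=20, default='char')  # assumed coercion hint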

    Read the article

  • Webcast Series: Accelerate Business-Critical Database Deployments with Oracle Optimized Solutions

    - by ferhat
    Join us for this two-part Webcast series and learn how to safely consolidate business-critical databases and deliver quantifiable benefits to the business:

    - Save up to 75% in operational and acquisition costs
    - Save millions of dollars consolidating legacy infrastructure
    - Leverage best practices from thousands of customer environments
    - Increase end-user productivity with 75% faster time to operations and 4x faster throughput

    The Oracle Optimized Solution for Oracle Database provides extensive guidelines for architecting and deploying complete database solutions that deliver superior performance and availability while minimizing cost and risk. Oracle's world-class engineering teams work together to define these optimal architectures using Oracle's powerful SPARC M-Series and SPARC T-Series servers together with Oracle Solaris and Oracle's SAN, NAS, and flash-based storage to run the industry-leading Oracle Database. Quite simply, the Oracle Optimized Solution for Oracle Database makes it easier for you to deliver and manage business-critical database environments that are fast, secure, and cost-effective.

    Available on-demand:
    Part 1: Why Architecture Matters When Deploying Business-Critical Databases
    Part 2: How to Consolidate Databases Using Oracle Optimized Solutions

    Presented by:
    Lawrence McIntosh, Principal Enterprise Architect, Oracle Optimized Solutions
    Ken Kutzer, Principal Product Manager, Infrastructure Solutions, Oracle

    Read the article
