Search Results

Search found 39503 results on 1581 pages for 'database tool'.

  • Where can I get a cheap database, no web hosting needed

    - by PhilipK
    I'm building an application which requires a fairly small online MySQL database. I don't need any web hosting. What are some cheap options for an online database? Edit (a bit more about what I'll be using it for): The database itself is very small; it contains market statistics for five weeks at a time. Once a week the data will be updated, so that it always contains the most recent five weeks. I will then use that data to create an XML file, which is generated with PHP. The XML file will need to be accessed hundreds to thousands of times per month.

    Read the article

  • If unexpected database changes cause you problems – we can help!

    - by Chris Smith
    Have you ever been surprised by an unexpected difference between your database environments? Have you ever found that your Staging database is not the same as your Production database, even though they matched the week before? Has an emergency hotfix suddenly appeared in Production over the weekend without your knowledge? Has your client secretly added a couple of indices to their local version of the database to aid performance? Worse still, has a developer ever accidentally run a SQL script against the wrong database without noticing their mistake? If you've answered "Yes" to any of the above questions then you've suffered from 'drift'.

    Database drift is where the state of a database (particularly its schema) has moved away from its expected or official state over time. The upshot is that the database is in an unknown or poorly-understood state. Even if these unexpected changes are not destructive, drift can be a big problem when it's time to release a new version of the database: a deployment to a target database in an unexpected state can error and fail, potentially delaying a vital, time-sensitive update.

    A big issue with drift is that it can be hard to spot, and it can be even harder to determine its provenance. So, before you can deal with an issue caused by drift, you'll need to know exactly what change has been made, who made it, when they made it and why they made it. Those questions can take a lot of effort to answer. Then you actually need to decide what to do. Do you roll back the change because it was bad? Retrospectively apply it to the Staging environment because it is a required change? Or script the change into version control to get it back in line with your process?

    Red Gate's Database Delivery Team have been talking to DBAs, database consultants and database developers to explore the problem of drift. We've started to get a really good idea of how big a problem it can be and what database professionals need to know and do in order to deal with it. It's fair to say we're pretty excited at the prospect of creating a tool that will really help, and we've got some great feedback on our initial ideas.

    We're now well underway with the development of our new drift-spotting product – SQL Lighthouse – and we hope to have a beta release out towards the end of July. What we really need is your help to shape the product into a great tool. So, if database drift is a problem that you'd like help solving and you are interested in finding out more about our product, join our mailing list to register your interest in trying out the beta release. Subscribe to our mailing list.

    Read the article

  • Do database management systems like this exist?

    - by Bakaburg
    I want to build a database for my non-profit association, and I was planning to do it myself. But then I realized that I really don't have the time to build a solid, secure system. So I was thinking: just as CMSs exist, maybe there are also database management systems, meaning a layer of abstraction over the database that lets you manage data, manage access to the data, and create widgets with and extend the data. Maybe with a frontend to use the data and a backend to manage it. That is, a CMS based not on pages and posts but on data! Moreover, I would like a standard solution, because my IT management position ends this year, so I need something that will be easy to use and extend even by someone who is not a developer. Does something exist that fits my needs? PS: I would really like a modern and visually pleasant solution, heavy on JavaScript and Ajax, that relies as little as possible on the server and on page reloads.

    Read the article

  • How do you approach database design?

    - by bron
    I am primarily a web developer and I have a couple of personal projects I want to kick off. One thing that is bugging me is database design. I covered database normalization and the like in school, but that was a couple of years ago, and I have no experience with relational database design outside of school. So how do you approach database design from a web app perspective? How do you begin, what do you look out for, and what are the warning signs?

    Read the article

  • Should we have a database independent SQL like query language in Django? [closed]

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database independent and converts them into database-specific SQL queries. Once things start getting complicated, it is often preferable to write raw SQL queries for better efficiency, but when you write raw SQL your code gets tied to the database you are using. I also understand that it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use a database-specific feature, why should I be tied to the database? For instance: we have a query with multiple joins and we decided to write it as raw SQL. Now that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some generic SQL dialect which could be translated into any database's SQL; even Django's ORM could be built on top of it. That way, if you step outside the ORM but avoid database-specific features, you could still remain database independent. I asked Jacob Kaplan-Moss the same question in person: he advised me to stay with the database I like and exploit its full power, which I agree with. But my point was not that we should always be database independent; my point is that we should be database independent until we use a database-specific feature. Please explain why there should, or should not, be such a generic SQL layer over the actual SQL.
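
    A minimal sketch of the trade-off described above, assuming an already-configured Django project; the Order model, its fields and the app_order table name are hypothetical, not taken from the original post:

        # Hypothetical model; assumes a normal Django app with settings configured.
        from django.db import models

        class Order(models.Model):
            customer_name = models.CharField(max_length=200)
            total = models.DecimalField(max_digits=10, decimal_places=2)

        # Portable: the ORM generates backend-appropriate SQL for the active
        # database from the same Python code.
        portable_qs = Order.objects.filter(customer_name__icontains="acme")

        # Not portable: raw SQL bakes in PostgreSQL's ILIKE operator, so this
        # query breaks if the project is ever pointed at MySQL or SQLite,
        # even though no genuinely Postgres-only feature was needed.
        postgres_only = Order.objects.raw(
            "SELECT id, customer_name, total FROM app_order "
            "WHERE customer_name ILIKE %s",
            ["%acme%"],
        )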

    Read the article

  • How to modernize an enormous legacy database?

    - by smayers81
    I have a question, just looking for suggestions here. My project is 'modernizing' a desktop application by converting it to the web, with an ICEFaces UI and a server side written in Java. However, they are keeping the same Oracle database, which at the current count has about 700-900 tables and probably a billion records in total. Some individual tables have 250 million rows; many have over 25 million. Needless to say, the database is not scaling well, and as a result the performance of the application is looking to be abysmal. The architects and decision-makers-that-be have all either refused or are unwilling to restructure the persistence layer. So, basically, we are putting a fresh coat of paint on a functional desktop application that currently serves most user needs, and does so with relative ease and quick performance. I am having trouble sleeping at night thinking of how poorly this application is going to perform and how difficult it is going to be for everyday users to do their jobs. So, my question is: what options do I have to mitigate this impending disaster? Is there some type of intermediate layer I can put between the database and the Java code to speed up performance while keeping the database structure intact? Caching is obviously an option, but I don't see it as a cure-all. Is it possible to layer a NoSQL DB in between, or something?

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on Windows

    - by John Abraham
    As a follow up to our original certification announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following Microsoft Windows operating systems:

    Release 12.1 (12.1.1 and higher)
    - Microsoft Windows Server (32-bit) (2003, 2008)
    - Microsoft Windows x64 (64-bit) (2003 [1], 2008 [1], 2008 R2 [2])

    Release 12.0 (12.0.4 and higher)
    - Microsoft Windows Server (32-bit) (2003)
    - Microsoft Windows x64 (64-bit) (2003, 2008) [1]

    Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher)
    - Microsoft Windows Server (32-bit) (2003, 2008 [1])
    - Microsoft Windows x64 (64-bit) (2003, 2008, 2008 R2) [1]

    Notes
    1: This OS is a 'database tier only' or 'split tier configuration' platform where the application tier must be on a fully certified E-Business Suite platform.
    2: This OS is a 'database tier only' platform for Release 11i. For 12.1.1 or higher, it is also supported on the application tier via the migration process outlined in My Oracle Support Document 1188535.1.

    Pending Certification
    E-Business Suite 12.0 with 11.2.0.3 Split Tier Certification on Microsoft Windows x64 (64-bit) (2008 R2) is in progress and will be announced separately.

    This announcement for Oracle E-Business Suite 11i and R12 includes:
    - Real Application Clusters (RAC)
    - Oracle Database Vault
    - Transparent Data Encryption (Column Encryption)
    - TDE Tablespace Encryption
    - Advanced Security Option (ASO)/Advanced Networking Option (ANO)
    - Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances
    - Transportable Database and Transportable Tablespaces Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12

    References
    - MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
    - MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
    - MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
    - MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
    - MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
    - MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
    - MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
    - MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
    - MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
    - MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
    - MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
    - MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
    - MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
    - MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    - MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
    - MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
    - MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
    - MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
    - MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g
    - MOS Document 1188535.1 - Migrating Oracle E-Business Suite R12 to Microsoft Windows Server 2008 R2

    Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Related Articles
    - Database 11.2.0.2 Certified with EBS R12 on IBM: Linux on System z
    - EBS R12 Certified with Database 11gR2 on SLES 11
    - 11gR2 11.2.0.3 Database Certified with E-Business Suite

    Read the article

  • No database selected error in CodeIgniter running on MAMP stack

    - by Apophenia Overload
    First off, does anyone know of a good place to get help with CodeIgniter? The official community forums are somewhat disappointing in terms of how many responses you get. I have CI installed on a regular MAMP stack and I'm working on this tutorial. However, I have only gone through the Created section, and currently I am getting a "No database selected" error.

    Model:

        <?php
        class submit_model extends Model {
            function submitForm($school, $district)
            {
                $data = array(
                    'school'   => $school,
                    'district' => $district
                );
                $this->db->insert('your_stats', $data);
            }
        }

    View:

        <?php $this->load->helper('form'); ?>
        <?php echo form_open('main'); ?>
        <p><?php echo form_input('school'); ?></p>
        <p><?php echo form_input('district'); ?></p>
        <p><?php echo form_submit('submit', 'Submit'); ?></p>
        <?php echo form_close(); ?>

    Controller:

        <?php
        class Main extends Controller {
            function index()
            {
                // Check if form is submitted
                if ($this->input->post('submit')) {
                    $school   = $this->input->xss_clean($this->input->post('school'));
                    $district = $this->input->xss_clean($this->input->post('district'));
                    $this->load->model('submit_model');
                    // Add the post
                    $this->submit_model->submitForm($school, $district);
                }
                $this->load->view('main_view');
            }
        }

    database.php:

        $db['default']['hostname'] = "localhost:8889";
        $db['default']['username'] = "root";
        $db['default']['password'] = "root";
        $db['default']['database'] = "stats_test";
        $db['default']['dbdriver'] = "mysql";
        $db['default']['dbprefix'] = "";
        $db['default']['pconnect'] = TRUE;
        $db['default']['db_debug'] = TRUE;
        $db['default']['cache_on'] = FALSE;
        $db['default']['cachedir'] = "";
        $db['default']['char_set'] = "utf8";
        $db['default']['dbcollat'] = "utf8_general_ci";

    config.php:

        $config['base_url'] = "http://localhost:8888/ci/";
        ...
        $config['index_page'] = "index.php";
        ...
        $config['uri_protocol'] = "AUTO";

    So, how come it's giving me this error message?

        A Database Error Occurred
        Error Number: 1046
        No database selected
        INSERT INTO `your_stats` (`school`, `district`) VALUES ('TJHSST', 'FairFax')

    Is there any way for me to test whether CodeIgniter can actually detect the MySQL databases I've created with phpMyAdmin in my MAMP stack?
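
    The question ends by asking how to check whether the MAMP MySQL databases are reachable at all. Below is a minimal sketch of such a check, independent of CodeIgniter, assuming the third-party pymysql package is installed; the host, port and credentials are copied from the database.php excerpt above, and everything else is illustrative:

        # Quick connectivity check: can we reach MAMP's MySQL with these
        # credentials, and does the stats_test schema exist?
        import pymysql

        conn = pymysql.connect(
            host="127.0.0.1",   # 'localhost' is often mapped to a Unix socket,
            port=8889,          # so force a TCP connection to the MAMP port
            user="root",
            password="root",
        )
        try:
            with conn.cursor() as cur:
                cur.execute("SHOW DATABASES")
                databases = [row[0] for row in cur.fetchall()]
            print(databases)
            print("stats_test found" if "stats_test" in databases
                  else "stats_test NOT found - create it or fix the name")
        finally:
            conn.close()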

    Read the article

  • Fiscal year handling strategies in database design

    - by Sapphire
    By fiscal year I mean all the data in the database (in all tables) that occurred in that particular year. Let's say we are building an application that allows the user to choose from different years. Which way of implementing this would you prefer, and why:
    - Separate the fiscal-year data into multiple separate database instances (for example, at the start of every fiscal year you could create a new instance with no data).
    - Have everything in one database, but with logic that automatically separates records from different years.
    Personally, I have "seen" both methods, and I would choose the second. The only argument I can think of for the first method is having fewer records in the case of really big databases, but even then you could "archive" old records by rolling them up into summaries or in some other way. What do you think?
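
    A minimal sketch of the second option, using Python's built-in sqlite3 module purely for illustration; the invoices table and its columns are hypothetical:

        # One database for all years; each row carries its fiscal year, and the
        # application "separates" years simply by filtering on that column.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE invoices (
                id          INTEGER PRIMARY KEY,
                issued_on   TEXT NOT NULL,       -- ISO date
                fiscal_year INTEGER NOT NULL,    -- stored for fast filtering
                amount      REAL NOT NULL
            );
            CREATE INDEX idx_invoices_fiscal_year ON invoices (fiscal_year);
        """)
        conn.executemany(
            "INSERT INTO invoices (issued_on, fiscal_year, amount) VALUES (?, ?, ?)",
            [("2009-11-03", 2009, 120.0), ("2010-02-17", 2010, 75.5)],
        )

        # The year the user picked in the UI; old years can later be rolled up
        # into summary tables or archived without touching the schema.
        selected_year = 2010
        rows = conn.execute(
            "SELECT id, issued_on, amount FROM invoices WHERE fiscal_year = ?",
            (selected_year,),
        ).fetchall()
        print(rows)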

    Read the article

  • Database design for very large amount of data

    - by Hossein
    Hi, I am working on a project involving a large amount of data from the Delicious website. The data is available as files of "Date, UserId, Url, Tags" records (one per bookmark). I normalized my database to 3NF, and because of the nature of the queries we wanted to run in combination, I ended up with six tables. The design looks fine; however, now that a large amount of data is in the database, most of the queries need to join at least two tables to get the answer, sometimes three or four. At first we didn't have any performance issues, because for testing purposes we hadn't added much data to the database. Now that we have a lot of data, simply joining extremely large tables takes a lot of time, and for our project, which has to be real-time, that is a disaster. I was wondering how big companies solve these issues. It looks like normalizing tables just adds complexity, so how do big companies handle large amounts of data in their databases? Do they not normalize? Thanks.

    Read the article

  • Database structure - To join or not to join

    - by Industrial
    Hi! We're drawing up the database structure for a new app with the help of MySQL Workbench, and the number of joins required to produce a listing of the data is increasing drastically as the many-to-many relationships increase. The application will be quite read-heavy and have a couple of hundred thousand rows per table. The questions:
    - Is it really that bad to merge tables where needed and thereby reduce joins?
    - Should we start looking at horizontal partitioning (in conjunction with merging tables)?
    - Is there a better way than pivot tables to handle many-to-many relationships?
    We discussed storing all the data in serialized text columns instead and having the application do the sorting rather than the database, but this seems like a very bad idea, even though the database will be heavily cached. What do you think? Thanks!

    Read the article

  • Database Design for multiple users site

    - by jl
    Hi, I am required to work on a PHP project that needs the database to cater to multiple users. Generally, the idea is similar to Carbonmade, Basecamp, or even WordPress MU: they cater to multiple users, who are also the owners of their accounts, and if a user cancels or terminates their account, everything on their pages and in the database is removed. I am not quite sure how I should design the database. Should it be: separate tables for each user account, separate databases for each user account, or something else? Kindly advise me on the best approach to this issue. Thank you very much.

    Read the article

  • relational database: how to design this table

    - by donpal
    I'm a database newbie designing a database. I'll use SO itself as my example, because it's easier to ask about something you can already see; my project isn't the same, but it will help me understand the right approach. As you can see, there are many questions here and each can have many answers. How should I store the answers in a table? Should I store all the answers in the SAME table, with a unique id as the key and an extra field for the question id? What if there are 100,000 answers, as there are here - do I still store them in one table? What keys should I use to minimize search time when I look up the answers to a specific question? The database is both read and write, if that makes any difference in this case.
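
    A minimal sketch of the single-answers-table design discussed above, using Python's built-in sqlite3 module; table and column names are illustrative only:

        # Every answer lives in one table, keyed by its own id, with a
        # question_id foreign key; the index on question_id is what keeps
        # "all answers for question X" fast even with very many rows.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE questions (
                id    INTEGER PRIMARY KEY,
                title TEXT NOT NULL
            );
            CREATE TABLE answers (
                id          INTEGER PRIMARY KEY,
                question_id INTEGER NOT NULL REFERENCES questions(id),
                body        TEXT NOT NULL
            );
            CREATE INDEX idx_answers_question_id ON answers (question_id);
        """)
        conn.execute("INSERT INTO questions (id, title) VALUES (1, 'How to design this table?')")
        conn.executemany(
            "INSERT INTO answers (question_id, body) VALUES (?, ?)",
            [(1, "Keep one answers table."), (1, "Index the question_id column.")],
        )

        answers = conn.execute(
            "SELECT id, body FROM answers WHERE question_id = ?", (1,)
        ).fetchall()
        print(answers)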

    Read the article

  • Database security / scaling question

    - by orokusaki
    Typically I use a database such as MySQL or PostgreSQL on the same machine as the application using it, which makes access easy and secure. I'm just now building the first site that will have a separate physical database server (later this year it will). I'm wondering three things:
    - (security) What should I look into, for starters, regarding the security of accessing a separate machine's database?
    - (scalability) Are there scalability issues I should think about here (technology agnostic)?
    - (more ServerFaultish, but related) If I start the DB out on the same physical server (using a separate VMware VM) and later move it to a different physical server, are there implicit problems I'll have to deal with? Isn't another VM still accessed via localhost?
    If these questions are completely ludicrous, I apologize to you DB experts.

    Read the article

  • Database with users design

    - by 1110
    I am in the database design phase. The application will work with a large number of users (LARGE :)). I have designed 80% of the database, but I have one Users table that is connected to everything else: Users {UserId, FirstName, LastName, Username, Password, PasswordQuestion, PasswordAnswer, Gender, RoleId, LastLoginDate, etc.}. I saw the ASP.NET membership database structure, where Users and Membership are two tables. My questions are: should I use one users table with all user data in it, or more tables? If the answer is 'more tables', which tables should I use, and is there any advice on how to structure the relations between them? This is a sample relation that I have and am trying to improve; I don't understand why user and userChild are separate tables.
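
    For illustration only, a minimal sketch of the 'more than one table' option, loosely mirroring the Users/Membership split mentioned above; it uses Python's built-in sqlite3 module, and the table and column names are hypothetical rather than a recommendation:

        # A narrow table holds only what every login needs; everything else
        # lives in a profile table that is joined in when actually required.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users (                 -- touched on every login
                user_id       INTEGER PRIMARY KEY,
                username      TEXT NOT NULL UNIQUE,
                password_hash TEXT NOT NULL,
                role_id       INTEGER NOT NULL,
                last_login    TEXT
            );
            CREATE TABLE user_profiles (         -- read far less often
                user_id           INTEGER PRIMARY KEY REFERENCES users(user_id),
                first_name        TEXT,
                last_name         TEXT,
                gender            TEXT,
                password_question TEXT,
                password_answer   TEXT
            );
        """)
        conn.execute(
            "INSERT INTO users (username, password_hash, role_id) VALUES (?, ?, ?)",
            ("alice", "<hashed>", 1),
        )
        conn.execute(
            "INSERT INTO user_profiles (user_id, first_name, last_name) VALUES (?, ?, ?)",
            (1, "Alice", "Smith"),
        )
        row = conn.execute("""
            SELECT u.username, p.first_name, p.last_name
            FROM users u JOIN user_profiles p ON p.user_id = u.user_id
            WHERE u.username = ?""", ("alice",)).fetchone()
        print(row)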

    Read the article

  • MYSQL Event to update another database table

    - by Lee
    Hey all, I have just taken over a project for a client and the database schema is in a total mess. I would like to rename a load of fields and make it a relational database, but doing this will be a painstaking process, as they also have an API running off it. So the idea would be to create a new database and start rewriting the code to use that instead, but I need a way to keep the tables in sync during this process. Would you agree that I should use MySQL EVENTs to keep updating the new tables on inserts, updates and deletes? Or can you suggest a better way? Hope you can advise! Thanks for any input I get.

    Read the article

  • Technology Selection for a dynamic product

    - by Kuntal Shah
    We are building a product for the procurement domain in Java. The main technical requirements are:
    - Platform independent
    - Database independent
    - Browser independent
    Functionally, the product is very dynamic in nature, the main reason being that the procurement process differs from client to client around the world. Briefly, we need a dynamic workflow engine and a dynamic template engine: the workflow engine lets us define any kind of workflow, and the template engine lets us define any kind of data structure and, based on that definition, collect user input through the workflow. We have been developing this product for almost two years, and it has taken a long time to get to grips with how dynamic the requirements are. So far we have developed a basic workflow and template engine, which is in use at one client. We have been using the following technologies:
    - GWT-Ext (front-end framework)
    - Hibernate (database layer)
    Along the way we faced issues with GWT-Ext (mainly browser compatibility) and with database optimization due to subclassing in Hibernate. To resolve the GWT-Ext issue (it has a dying community), we decided to move to SmartGWT. In SmartGWT we faced issues related to loading, and we have now pretty much settled on GWT 2.3, as the library is rich and the performance is up to the mark. We have almost finalized a GWT and Spring based front and middle layer. In Hibernate, the main issues we found were with subclassing: it was generating astronomical queries, and sometimes it would stop firing queries for 5-10 seconds, or maybe around 30 seconds, and then resume again. A few days back I came across an article about ORMs. I am a traditional .NET SQL developer and have always worked with relational databases, and reading the article I found it related to the issues I face. I am still not completely convinced about using Hibernate, and this article just reinforced my opinion. These are the questions I am looking to answer:
    - Should we go with Hibernate given the dynamic database requirements and the heavy data load expected in the future? How can we partition the data, join it efficiently, and optimize the queries?
    - If the answer is no, then how do we achieve database independence?
    - Is our choice of GWT and Spring sound, or do we need to change that too?
    - Should we use some other key-value store, given that the data is dynamic in nature and very difficult to make relational?

    Read the article

  • Oracle Database In-Memory Launch Featuring Larry Ellison – June 10

    - by Roxana Babiciu
    For more than three-and-a-half decades, Oracle has defined database innovation. With our market-leading technologies, customers have been able to out-think and out-perform their competition. Soon they will be able to do that even faster. At a live launch event and simultaneous webcast, Larry Ellison will reveal the future of the database. Promote this strategic event to customers. Registration for the live event begins at 9am PT.

    Read the article

  • Oracle Database In-Memory Launch Featuring Larry Ellison – June 10

    - by Cinzia Mascanzoni
    For more than three-and-a-half decades, Oracle has defined database innovation. With our market-leading technologies, customers have been able to out-think and out-perform their competition. Soon they will be able to do that even faster. At a live launch event and simultaneous webcast, Larry Ellison will reveal the future of the database. Promote this strategic event to partners and customers. Registration for the live event begins at 5pm GMT, 6pm CET.

    Read the article

  • Database source control

    - by Bojan Skrchevski
    Should database files (scripts, etc.) be kept in source control? If so, what is the best method for keeping them there and keeping them updated? Is there even a need for database files to be in source control, since we can put the database on a development server where everyone can use it and make changes to it if needed? But then we can't get it back if someone messes it up. What approach is best for keeping databases under source control?
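
    One common approach is to keep the schema as plain SQL scripts inside the repository and regenerate them with a dump script. A minimal sketch of that idea, assuming MySQL with the mysqldump and git command-line tools available; the paths, database name and user are illustrative:

        # Dump the schema (no data) to a tracked SQL file and commit it, so
        # schema changes become reviewable history instead of silent edits
        # on a shared development server.
        import subprocess
        from pathlib import Path

        SCHEMA_FILE = Path("db/schema.sql")

        def dump_schema(database: str, user: str) -> None:
            """Write the current schema to the tracked SQL file."""
            SCHEMA_FILE.parent.mkdir(parents=True, exist_ok=True)
            with SCHEMA_FILE.open("w") as out:
                subprocess.run(
                    ["mysqldump", "--no-data", "--skip-comments", "-u", user, database],
                    stdout=out,
                    check=True,
                )

        def commit_schema(message: str) -> None:
            """Stage and commit the schema file (fails if nothing changed)."""
            subprocess.run(["git", "add", str(SCHEMA_FILE)], check=True)
            subprocess.run(["git", "commit", "-m", message], check=True)

        if __name__ == "__main__":
            dump_schema("myapp", "root")
            commit_schema("Update database schema snapshot")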

    Read the article

  • SQL Saturday Birmingham #328 Database Design Precon In One Week

    - by drsql
    On September 22, I will be doing my "How to Design a Relational Database" pre-conference session in Birmingham, Alabama. You can see the abstract here if you are interested, and you can sign up there too, naturally. At just $100, which includes a free ebook copy of my database design book, it is a great bargain and I totally promise it will be a little over 7 hours of talking about and designing databases, which will certainly be better than what you do on a normal work day, even a Friday....(read more)

    Read the article

  • How to use database adapters' cursors safely?

    - by lvictorino
    I started using psycopg2 to connect my little Python script to a PostgreSQL database a few days ago. After some research I found that a lot of database connectors, like psycopg, work with cursors. I know what a cursor is and how to use it, but I still wonder whether it's safe to use the same cursor for the script's entire lifetime. Is it safe? Or would it be preferable to use a different cursor for each query?
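
    A minimal sketch of the cursor-per-query style with psycopg2 (connection parameters are placeholders). psycopg2 cursors are cheap to create and work as context managers, so using a short-lived cursor per operation keeps pending results and error state contained while the connection itself stays long-lived:

        import psycopg2

        conn = psycopg2.connect(dbname="example", user="example",
                                password="secret", host="localhost")

        with conn:                      # commits on success, rolls back on error
            with conn.cursor() as cur:  # cursor closed automatically at block end
                cur.execute("SELECT version();")
                print(cur.fetchone())

            with conn.cursor() as cur:  # a fresh cursor for the next query
                cur.execute("SELECT now();")
                print(cur.fetchone())

        conn.close()                    # the connection can live as long as the script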

    Read the article

  • 2nd JULY 2013: Oracle Database 12c Technical Training Webcast

    - by Cinzia Mascanzoni
    This session will focus on the specific needs of our Oracle partner community and developers. We'll provide insight into the many features and capabilities your customers will be looking to leverage in their own environments. Topics include:
    - Consolidation and Cloud Strategies
    - Deep dive into the key Database Options
    - Migrating to Oracle Database 12c
    Click here for details on how to join the webcast.

    Read the article

  • Database server consolidation with Oracle technologies - resource allocation

    - by Lajos Sárecz
    For server consolidation, virtualization is the standard answer, yet Oracle Database has capabilities that deliver the benefits of virtualization while outperforming it. Consolidating several databases can be done on one large server or on a cluster of several servers. Whichever solution we choose (I will not go into their pros and cons here), one of the most important problems to solve is separating the databases reliably, from both a data-security and a resource-management point of view. Software and hardware virtualization make it possible to divide a server's resources among several virtual servers and thereby separate the database instances running in parallel, but these solutions are usually costly, add administration overhead and cost performance. Below is a brief summary of the resource separation technologies in Oracle Database that work well for database consolidation:
    - Database services: Probably every Oracle database expert knows that clients reach a database through its database service name. By default every database has a single service, automatically named after the 'global database name' parameter when the database is created, but several service names can be assigned to one database. Services let us group clients performing different tasks and decide how much system resource each client group is allocated. With clustered databases (RAC), a service can be attached to several database instances (servers), enabling load balancing based on actual load (the Resource Manager also plays a role here, see below). Which server serves a given service becomes irrelevant to the application. The resources attached to a service can be extended dynamically at runtime, and the loss of failed resources is also handled (failover).
    - Database Resource Manager: The Oracle Database Resource Manager manages resources at the database level and regulates CPU usage by controlling the database workload. At any given moment the Resource Manager allows only one Oracle process to run on a CPU while the others wait (much like an operating system scheduler). The Resource Manager only kicks in when CPU load reaches 100%; it can then limit, according to the resource plan, the amount of CPU available to each resource group.
    - Instance Caging: As part of the Resource Manager, Instance Caging (available from Oracle Database 11gR2) controls the number of CPUs allocated at the database instance level, without virtualization or OS-level resource partitioning. This is useful when several instances must run on one server. Instance Caging is activated per instance by enabling the Resource Manager and setting the cpu_count parameter. cpu_count is a dynamic parameter, and it is best set to the maximum number of CPUs the instance may claim. The processors available to the instances can be over-subscribed: for example, on a 4-CPU server with 3 instances, each instance can be given 3 CPUs. However, if all of them are under load, each instance receives a share of the processors proportional to its maximum allocated CPU count divided by the total allocated CPU count: in the example, 33.33%, i.e. 1.33 CPUs.
    - Input Output Resource Manager (IORM): Not only CPU usage can be regulated; shared storage resources can also be divided. With the Input Output Resource Manager (IORM) we can control minimum I/O levels at the storage level, both between databases and within them.
    - Database Vault: When applications are consolidated into the same database, administrator roles can be separated with Oracle Database Vault. This allows databases to be consolidated securely so that each administrator can only see and modify the data and objects that belong to them.
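
    A tiny illustration of the over-subscription arithmetic described above, using the article's own numbers (a 4-CPU server and three instances with cpu_count = 3 each); purely a worked example, not an Oracle API:

        # When every instance is fully loaded, each one's share of the physical
        # CPUs is its cpu_count divided by the sum of all instances' cpu_count.
        def effective_cpus(cpu_counts, physical_cpus):
            """Worst-case CPU share per instance when all instances are saturated."""
            total_allocated = sum(cpu_counts)
            return [physical_cpus * c / total_allocated for c in cpu_counts]

        # The article's example: 3 * 3 allocated CPUs on a 4-CPU server.
        print(effective_cpus([3, 3, 3], physical_cpus=4))  # approx. 1.33 CPUs each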

    Read the article
