Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

Page 119/1330 | < Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone practices continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost of fixing them. The worst-case scenario, of course, is for a bug to be found by the customer after the product has been released. A number of Agile development best practices have evolved to catch problems early, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (it can alternatively be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. The continuous integration process is often regarded as a build validation test: if issues are identified at this stage, the testers simply won't waste their time touching the build at all. Such a 'broken build' triggers an alert, and resolving it should be the development team's number one priority. How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:
    1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.
    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked from www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.
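
    A minimal sketch of checks 2) and 3) above (the table names and version number are invented for illustration, not taken from the whitepaper): the CI job runs the full DDL against an empty database, and runs a separate upgrade script against a copy of the deployed database, so both must "compile" for the build to pass.

        -- (2) build a new database from scratch
        CREATE TABLE schema_version (version VARCHAR(20) NOT NULL, applied_at TIMESTAMP NOT NULL);
        CREATE TABLE customers (
            customer_id INT PRIMARY KEY,
            name        VARCHAR(100) NOT NULL,
            email       VARCHAR(255)
        );
        INSERT INTO schema_version (version, applied_at) VALUES ('1.2.0', CURRENT_TIMESTAMP);

        -- (3) upgrade script, run against a copy of the deployed database
        --     (which does not yet have the email column)
        ALTER TABLE customers ADD email VARCHAR(255);
        INSERT INTO schema_version (version, applied_at) VALUES ('1.2.0', CURRENT_TIMESTAMP);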

    Read the article

  • Do you test your SQL/HQL/Criteria?

    - by 0101
    Do you test your SQL, or the SQL generated by your database framework? There are frameworks like DbUnit that allow you to create a real in-memory database and execute real SQL. But it's very hard to use (not developer-friendly, so to speak), because you first need to prepare test data (and it should not be shared between tests). P.S. I don't mean mocking the database or the framework's database methods, but tests that make you 99% sure that your SQL works even after some hardcore refactoring.
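
    For what it's worth, a sketch of the kind of per-test fixture this implies (table names and values are made up); the point is that each test builds its own data in a real, throwaway database and runs the real query against it:

        -- Fixture created by each test against an in-memory database (e.g. H2 or HSQLDB)
        CREATE TABLE orders (id INT PRIMARY KEY, customer_id INT, total DECIMAL(10,2));
        INSERT INTO orders VALUES (1, 42, 99.90), (2, 42, 10.00), (3, 7, 5.00);

        -- The real query under test; the assertion checks it still returns customer 42 after refactoring
        SELECT customer_id, SUM(total) AS spent
        FROM orders
        GROUP BY customer_id
        HAVING SUM(total) > 50;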

    Read the article

  • .Net Application & Database Modularity/Reuse

    - by Martaver
    I'm looking for some guidance on how to architect an app with regard to modularity, separation of concerns and re-usability. I'm working on an application (ASP.Net, C#) that has distinctly generic chunks of functionality that I'd love to be able to lift out, all layers, into re-usable components. This means the module handles the database schema, data access, API, everything, so that the next time I want to use it I can just register the module and hook into it. Developing modules of re-usable functionality is a no-brainer, but what is really confusing me is what to do when it comes to handling a core re-usable database schema that serves the module's functionality. In an ideal world, I would register a module and it would ensure that the associated database schema exists in the DB. I would code on the assumption that the tables exist, calling the module's functionality through the DLL, agnostic of the database layer. Kind of like Enterprise Library's Caching/Logging Application Block, which can create a DB schema in the target DB to use as a data store.
    My questions are: What do you think is the best way to achieve this, firstly in terms of design architecture, and secondly in terms of solution structure? What patterns/frameworks do you know of that exist and support this kind of thing?
    My thoughts so far: I mostly use Entity Framework and SQL Server DB Projects. I thought about a 'black box' approach to modules of functionality. I could use a code-first approach in EF4, and use the ObjectContext to create a database when the module is initialized. However, this means that all of the entities that my module encapsulates would be disconnected from the rest of the application, because they belong to an abstracted ObjectContext. Further, creating appropriate indexes and references between domain entities and the module's entities would be practically impossible. I've thought of adopting Enterprise Library and creating my own Application Blocks. I'm not sure how this would play nicely with Entity Framework (if at all), though. I like the idea of building on proven patterns and practices to encapsulate established, reusable functionality. I thought of abandoning Entity Framework for the module, and just creating a separate DB schema for the module with its own set of stored procedures and ADO.Net, then deploying the script at run-time if interrogation shows that it doesn't exist. But once again, for development elsewhere in the application I would want to use Entity Framework, and I would have to use the module separately, disconnected from the domain ObjectContext. Has anyone had experience developing these sorts of full-stack modules? What advice can you offer? Am I biting off more than I can chew?

    Read the article

  • How to mount an Oracle database to a new instance?

    - by Vimvq1987
    I have an instance of Oracle 10g R2 installed on Windows Server 2003. This instance was running a database which does not have any backup. Now the OS has gone down and could not be repaired; all I have are the files of the old instance. How can I restore the database from these files to a new instance? A step-by-step guide would be much appreciated because I'm new to Oracle. Thank you very much.

    Read the article

  • Cannot Attach Database in SQL Express More Than Two Directories Deep?

    - by Dave Mackey
    I have a database in one of my Visual Studio Express projects. I want to attach it to my local SQLEXPRESS instance so I can run aspnet_regsql on it and add the membership database. When I select Attach Databases and then attempt to browse to the files (C:\Users\username\Documents\Visual Studio 2010\Projects\nameofproject), it only lets me navigate as far as C:\Users\username... Why? How can I fix this?
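
    One possible workaround (a sketch, not a confirmed fix: it assumes the SQL Server service account has read/write access to the folder, and the database and file names below are placeholders) is to skip the browse dialog and attach the .mdf by path with T-SQL:

        -- Run against the SQLEXPRESS instance; replace the path and file name with the real ones
        CREATE DATABASE MyProjectDb
            ON (FILENAME = 'C:\Users\username\Documents\Visual Studio 2010\Projects\nameofproject\Database1.mdf')
            FOR ATTACH;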

    Read the article

  • How do I convert a Mac OS Filemaker 2 database to a recent FM or Bento db, preserving the relations

    - by willc2
    I'm hoping for more than just exporting the data; I would like to preserve the relations between the databases. This is for a friend's legacy database that tracks monthly fees from a list of clients. I have the original FM database file on hand, but not the machine it ran on with the old version of FileMaker 2. Recent versions won't import it, saying it's too old. A Mac-only solution would make things simpler for me.

    Read the article

  • When and how often to open a database connection in PHP?

    - by AndHeiberg
    When and how often is it good practice to open the connection to your database in PHP? I'm new to databases, and I'm wondering when I should open my database connection. I'm creating an API with an index, controllers and a model. Should I open the connection in the index and then pass it to all the other files, open the connection at the top of every file and refer to it as a global in functions as needed, or open and close the connection in every function?

    Read the article

  • How to test whether an image is already in cache? [migrated]

    - by Evik James
    I am developing a web site that has a lot of large, high-quality images on the home page. On the home page there is an image carousel that pulls ten high-quality images from a database. The images can be 1 MB each. The carousel images aren't my problem (right now), but they're related to it. The problem I am trying to address right now is that I use a high-quality background image that I want to continue using; it's about 180 KB. If I have the background in cache on the home page, I want to use it. If not, then I don't want to use it on the home page; I'll load it from a different page. When the user returns to the home page and the background image is in cache, I want to use it. Can I test whether an image is already in cache and, if so, dynamically load it or not based on that? You can see the home page here: http://flyingpiston2012-com.securec37.ezhostingserver.com/

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:
    item_names: id | name | description | picture | common (BOOL)
    items: id | item_name_id | price | description | picture
    item_synonyms: id | item_name_id | name | error (BOOL)
    Notes: error indicates a wrong spelling (e.g. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").
    I think the bright side of this schema is: Optimized searching and synonym handling: I can query the item_names and item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10".) Autocompletion: again, a simple query to the item_names table. I can avoid the use of DISTINCT, and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").
    The downside would be: Overhead: when inserting an item, I query item_names to see if the name already exists; if not, I create a new entry. When deleting an item, I count the number of entries with the same name; if this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; this accounts for possible erroneous submissions). Updating is the combination of both. Weird item names: store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this:
    items: id | name | picture | price | description
    (... with item_names and item_synonyms as utility tables that I could query)
    Is there a better schema you would suggest? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School" and "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance! References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
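
    For reference, a DDL sketch of the first (normalized) schema above; the column types and sizes are assumptions:

        CREATE TABLE item_names (
            id          INT PRIMARY KEY,
            name        VARCHAR(255) NOT NULL,
            description TEXT,
            picture     VARCHAR(255),
            common      BOOLEAN NOT NULL DEFAULT FALSE
        );
        CREATE TABLE items (
            id           INT PRIMARY KEY,
            item_name_id INT NOT NULL REFERENCES item_names(id),
            price        DECIMAL(10,2),
            description  TEXT,          -- optional "local" override of the global description
            picture      VARCHAR(255)   -- optional "local" override of the global picture
        );
        CREATE TABLE item_synonyms (
            id           INT PRIMARY KEY,
            item_name_id INT NOT NULL REFERENCES item_names(id),
            name         VARCHAR(255) NOT NULL,
            error        BOOLEAN NOT NULL DEFAULT FALSE   -- marks misspellings such as "Ericson"
        );
        -- Search/autocomplete resolves names first, then joins to the store-specific items:
        SELECT i.*
        FROM items i
        JOIN item_names n ON n.id = i.item_name_id
        WHERE n.name LIKE 'Sony%';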

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores job vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description" and "Salary range". There is an EAV design for "custom" fields that the clients can set up themselves, such as "Manager Name" or "Working Hours": the field names are stored in a "ClientText" table and the data in a "VacancyClientText" table with VacancyId, ClientTextId and Value. Lastly, there is a many-to-many EAV design for custom tagging/categorising the vacancies with things such as the locations/offices the vacancy is in, or a list of skills required. This is stored as a "ClientCategory" table listing the types of tag ("Locations", "Skills"), a "ClientCategoryItem" table listing the valid values for each category (e.g. "London, Paris, New York, Rome", "C#, VB, PHP, Python"), and finally a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add.
    I am now designing a new system that is very similar to the existing one; however, I have the ability to restrict the number of custom fields a client can have, and it's being built from scratch so I have no legacy issues to deal with. For the custom fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs. It is with the tagging/categorising design that I am struggling. If I limit a client to 5 categories/types of tag, should I create 5 tables listing the possible values ("CustomCategoryItems1-5") and then an additional 5 many-to-many tables ("VacancyCustomCategoryItem1-5")? This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this will result in a lot of code changes.
    Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has come across all the usual performance issues and complex queries associated with such a design. Any advice/suggestions are much appreciated. The DBMS used is SQL Server 2005; however, 2008 is an option if required for any particular pattern.
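
    To make the trade-off concrete, here is a sketch of the existing three-table tagging design (column types assumed); it supports any number of categories without schema changes, which is exactly what the 10-table variant gives up:

        CREATE TABLE ClientCategory (
            ClientCategoryId INT PRIMARY KEY,
            Name             VARCHAR(100) NOT NULL    -- e.g. 'Locations', 'Skills'
        );
        CREATE TABLE ClientCategoryItem (
            ClientCategoryItemId INT PRIMARY KEY,
            ClientCategoryId     INT NOT NULL REFERENCES ClientCategory(ClientCategoryId),
            Value                VARCHAR(100) NOT NULL  -- e.g. 'London', 'C#'
        );
        CREATE TABLE VacancyClientCategoryItem (
            VacancyId            INT NOT NULL,
            ClientCategoryItemId INT NOT NULL REFERENCES ClientCategoryItem(ClientCategoryItemId),
            PRIMARY KEY (VacancyId, ClientCategoryItemId)
        );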

    Read the article

  • What does the information_schema database represent?

    - by Mirage
    I have one database in MySQL. But when I log into phpMyAdmin, it shows another database called information_schema. Is that database always present alongside my database? That is, is there a copy of information_schema for every database present in MySQL, or is there one database called information_schema per MySQL server? If I modify this information_schema database, how will that affect my current database?
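
    For context, information_schema in MySQL is a single, server-wide, read-only virtual schema that exposes metadata about all databases on that server; a query like the following (the database name is a placeholder) lists the tables of one particular database:

        SELECT table_name, engine, table_rows
        FROM information_schema.tables
        WHERE table_schema = 'my_database';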

    Read the article

  • How do I efficiently write a "toggle database value" function in AJAX?

    - by AmbroseChapel
    Say I have a website which shows the user ten images and asks them to categorise each image by clicking on buttons: a button for "funny", a button for "scary", a button for "pretty" and so on. These buttons aren't exclusive; a picture can be both funny and scary. The user clicks the "funny" button. An AJAX request is sent off to the database to mark that image as funny, and the "funny" button lights up, by assigning a class in the DOM to mark it as "on". But the user made a mistake: they meant to hit the next button over. They should click "funny" again to turn it off, right? At this point I'm not sure what's the most efficient way to proceed. The database knows that the "funny" flag is set, but it's inefficient to query the database every time a button is clicked to ask whether the flag is set or not, and then make a second database call to toggle it. Should I infer the state of the database flag from the DOM, i.e. if that button has the class "on" then the flag must be set, and branch at that point? Or would it be better to have a data structure in JavaScript on the page which duplicates the state of each image in the database, so that every time I set the database flag to true I also set the value in the JavaScript data to true, and so on?
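
    One way to sidestep the read-then-write round trip entirely (a sketch; the table and column names are invented, and it assumes a boolean-style flag column): toggle the flag in a single statement and have the endpoint return the new value, so neither the DOM class nor a client-side copy has to be trusted as the source of truth.

        -- MySQL-style example; with a TINYINT(1) flag, "funny = 1 - funny" works the same way
        UPDATE image_flags
        SET funny = NOT funny
        WHERE image_id = 123;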

    Read the article

  • Which database can I use, and what are the relationships in it?

    - by mimo-hamad
    My project has me confused; I couldn't find anything clear enough to help me understand the required database and the relationships in it. Could someone help me work it out? This is what is required:
    1) Model the data stored in the database (identify the entities, roles, relationships, constraints, etc.).
    2) Write the Oracle commands to create the database, find appropriate data, and populate the database.
    3) Write five different queries on your database, using the SELECT/FROM/WHERE construct provided in SQL. Your five queries should illustrate several different aspects of database querying, such as:
    a. Queries over more than one relation (by listing more than one relation in the FROM clause)
    b. Queries involving aggregate functions, such as SUM, COUNT, and AVG
    c. Queries involving complicated selects and joins
    d. Queries involving GROUP BY, HAVING or other similar functions
    e. Queries that require the use of the DISTINCT keyword
    And this is the scenario we need to work from to answer the requirements above:
    5) It is desired to develop an Internet membership club to buy products at special prices online. To join, new members must be referred by an existing member of the club. The system will keep the following information for each member: the member ID, referring member, birth date, member name, address, phone, mobile, credit card type, number and expiration date. Items are always shipped to the member's address noted in the membership application. The shipping fee will differ for each order. For each item to be requested, the member will select an item from a long list of possible items. For each item in the database, we store an item ID, an item name, description, and list price. The list price will be different from the actual sale price. The available quantity and the back-ordered quantity (the quantity on order by the club from its suppliers) are also noted.
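
    As a starting point only (an assumption about one possible model, not the assignment's answer), the scenario in 5) suggests Oracle tables along these lines:

        CREATE TABLE members (
            member_id     NUMBER PRIMARY KEY,
            referred_by   NUMBER REFERENCES members(member_id),  -- existing member who referred this one
            name          VARCHAR2(100) NOT NULL,
            birth_date    DATE,
            address       VARCHAR2(200),
            phone         VARCHAR2(30),
            mobile        VARCHAR2(30),
            cc_type       VARCHAR2(20),
            cc_number     VARCHAR2(20),
            cc_expiration DATE
        );
        CREATE TABLE items (
            item_id         NUMBER PRIMARY KEY,
            name            VARCHAR2(100) NOT NULL,
            description     VARCHAR2(500),
            list_price      NUMBER(10,2),
            qty_available   NUMBER DEFAULT 0,
            qty_backordered NUMBER DEFAULT 0
        );
        CREATE TABLE orders (
            order_id     NUMBER PRIMARY KEY,
            member_id    NUMBER NOT NULL REFERENCES members(member_id),
            order_date   DATE,
            shipping_fee NUMBER(8,2)    -- differs per order
        );
        CREATE TABLE order_items (
            order_id   NUMBER NOT NULL REFERENCES orders(order_id),
            item_id    NUMBER NOT NULL REFERENCES items(item_id),
            sale_price NUMBER(10,2),    -- actual sale price, distinct from the list price
            quantity   NUMBER NOT NULL,
            PRIMARY KEY (order_id, item_id)
        );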

    Read the article

  • Do I need a spatial index in my database?

    - by Sanoj
    I am designing an application that needs to save geometric shapes in a database. I haven't chosen the database management system yet. In my application, all database queries will have a bounding box as input, and as output I want all shapes within that bounding box. I know that databases with a spatial index are used for this kind of application. But in my application there will not be any queries of the type "give me objects near x/y" or other more complex queries that are useful in a GIS application. I am planning on having a database without a spatial index and queries looking like:
    SELECT * FROM shapes WHERE x < max_x AND x > min_x AND y < max_y AND y > min_y
    with an index on the columns x (double) and y (double). As far as I can see, I don't really need a database with a spatial index, even though my application is close to that kind of application. And even if I did want nearby queries, I could create a big enough bounding box around the point. Or will this lead to poor performance? Do I really need a spatial database? And when is a spatial index needed?
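
    For illustration, the non-spatial plan described above would look roughly like this (a sketch; the index name and bind-variable style are placeholders). The caveat is that a composite B-tree index can only range-scan efficiently on its leading column (x here), with the y predicate applied as a filter, which is the access pattern an R-tree/spatial index improves on:

        CREATE INDEX idx_shapes_xy ON shapes (x, y);

        SELECT *
        FROM shapes
        WHERE x BETWEEN :min_x AND :max_x
          AND y BETWEEN :min_y AND :max_y;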

    Read the article

  • VLOOKUP in Excel, part 2: Using VLOOKUP without a database

    - by Mark Virtue
    In a recent article, we introduced the Excel function called VLOOKUP and explained how it could be used to retrieve information from a database into a cell in a local worksheet.  In that article we mentioned that there were two uses for VLOOKUP, and only one of them dealt with querying databases.  In this article, the second and final in the VLOOKUP series, we examine this other, lesser known use for the VLOOKUP function. If you haven’t already done so, please read the first VLOOKUP article – this article will assume that many of the concepts explained in that article are already known to the reader. When working with databases, VLOOKUP is passed a “unique identifier” that serves to identify which data record we wish to find in the database (e.g. a product code or customer ID).  This unique identifier must exist in the database, otherwise VLOOKUP returns us an error.  In this article, we will examine a way of using VLOOKUP where the identifier doesn’t need to exist in the database at all.  It’s almost as if VLOOKUP can adopt a “near enough is good enough” approach to returning the data we’re looking for.  In certain circumstances, this is exactly what we need. We will illustrate this article with a real-world example – that of calculating the commissions that are generated on a set of sales figures.  We will start with a very simple scenario, and then progressively make it more complex, until the only rational solution to the problem is to use VLOOKUP.  The initial scenario in our fictitious company works like this:  If a salesperson creates more than $30,000 worth of sales in a given year, the commission they earn on those sales is 30%.  Otherwise their commission is only 20%.  So far this is a pretty simple worksheet: To use this worksheet, the salesperson enters their sales figures in cell B1, and the formula in cell B2 calculates the correct commission rate they are entitled to receive, which is used in cell B3 to calculate the total commission that the salesperson is owed (which is a simple multiplication of B1 and B2). The cell B2 contains the only interesting part of this worksheet – the formula for deciding which commission rate to use: the one below the threshold of $30,000, or the one above the threshold.  This formula makes use of the Excel function called IF.  For those readers that are not familiar with IF, it works like this: IF(condition,value if true,value if false) Where the condition is an expression that evaluates to either true or false.  In the example above, the condition is the expression B1<B5, which can be read as “Is B1 less than B5?”, or, put another way, “Are the total sales less than the threshold”.  If the answer to this question is “yes” (true), then we use the value if true parameter of the function, namely B6 in this case – the commission rate if the sales total was below the threshold.  If the answer to the question is “no” (false), then we use the value if false parameter of the function, namely B7 in this case – the commission rate if the sales total was above the threshold. As you can see, using a sales total of $20,000 gives us a commission rate of 20% in cell B2.  If we enter a value of $40,000, we get a different commission rate: So our spreadsheet is working. Let’s make it more complex.  Let’s introduce a second threshold:  If the salesperson earns more than $40,000, then their commission rate increases to 40%: Easy enough to understand in the real world, but in cell B2 our formula is getting more complex.  
If you look closely at the formula, you’ll see that the third parameter of the original IF function (the value if false) is now an entire IF function in its own right.  This is called a nested function (a function within a function).  It’s perfectly valid in Excel (it even works!), but it’s harder to read and understand. We’re not going to go into the nuts and bolts of how and why this works, nor will we examine the nuances of nested functions.  This is a tutorial on VLOOKUP, not on Excel in general. Anyway, it gets worse!  What about when we decide that if they earn more than $50,000 then they’re entitled to 50% commission, and if they earn more than $60,000 then they’re entitled to 60% commission? Now the formula in cell B2, while correct, has become virtually unreadable.  No-one should have to write formulae where the functions are nested four levels deep!  Surely there must be a simpler way? There certainly is.  VLOOKUP to the rescue! Let’s redesign the worksheet a bit.  We’ll keep all the same figures, but organize it in a new way, a more tabular way: Take a moment and verify for yourself that the new Rate Table works exactly the same as the series of thresholds above. Conceptually, what we’re about to do is use VLOOKUP to look up the salesperson’s sales total (from B1) in the rate table and return to us the corresponding commission rate.  Note that the salesperson may have indeed created sales that are not one of the five values in the rate table ($0, $30,000, $40,000, $50,000 or $60,000).  They may have created sales of $34,988.  It’s important to note that $34,988 does not appear in the rate table.  Let’s see if VLOOKUP can solve our problem anyway… We select cell B2 (the location we want to put our formula), and then insert the VLOOKUP function from the Formulas tab: The Function Arguments box for VLOOKUP appears.  We fill in the arguments (parameters) one by one, starting with the Lookup_value, which is, in this case, the sales total from cell B1.  We place the cursor in the Lookup_value field and then click once on cell B1: Next we need to specify to VLOOKUP what table to lookup this data in.  In this example, it’s the rate table, of course.  We place the cursor in the Table_array field, and then highlight the entire rate table – excluding the headings: Next we must specify which column in the table contains the information we want our formula to return to us.  In this case we want the commission rate, which is found in the second column in the table, so we therefore enter a 2 into the Col_index_num field: Finally we enter a value in the Range_lookup field. Important:  It is the use of this field that differentiates the two ways of using VLOOKUP.  To use VLOOKUP with a database, this final parameter, Range_lookup, must always be set to FALSE, but with this other use of VLOOKUP, we must either leave it blank or enter a value of TRUE.  When using VLOOKUP, it is vital that you make the correct choice for this final parameter. To be explicit, we will enter a value of true in the Range_lookup field.  It would also be fine to leave it blank, as this is the default value: We have completed all the parameters.  We now click the OK button, and Excel builds our VLOOKUP formula for us: If we experiment with a few different sales total amounts, we can satisfy ourselves that the formula is working. Conclusion In the “database” version of VLOOKUP, where the Range_lookup parameter is FALSE, the value passed in the first parameter (Lookup_value) must be present in the database.  
In other words, we're looking for an exact match. But in this other use of VLOOKUP, we are not necessarily looking for an exact match. In this case, "near enough is good enough". But what do we mean by "near enough"? Let's use an example: when searching for a commission rate on a sales total of $34,988, our VLOOKUP formula will return us a value of 30%, which is the correct answer. Why did it choose the row in the table containing 30%? What, in fact, does "near enough" mean in this case? Let's be precise: when Range_lookup is set to TRUE (or omitted), VLOOKUP will look in column 1 and match the highest value that is not greater than the Lookup_value parameter. It's also important to note that for this system to work, the table must be sorted in ascending order on column 1! If you would like to practice with VLOOKUP, the sample file illustrated in this article can be downloaded from here.

    Read the article

  • ODI - Creating a Repository in a 12c Pluggable Database

    - by David Allan
    To install ODI 11g into an Oracle 12c pluggable database, one way is to connect using a TNS-style descriptor for the pluggable database service that is running. For example, when I installed my master repository I used a JDBC URL such as:
    jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=mydbserver)(PORT=1522)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PDBORA12.US.ORACLE.COM)))
    I used this approach rather than host:port:sid, which is the common mechanism many users reach for to get up and going quickly. Below you can see the repository creation wizard in action; I used the 11g release and simply installed the master and work repository into my pluggable database. Be wise with your repository IDs. I simply used the default, but you should be aware that these are key in larger deployments. The database in 12c has much tighter control on users and resources, so just getting the user created with sufficient resources on tablespaces etc. in 12c was a little more work. Once you have the repositories up and running, the fun starts with the 12c features. More to come.
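
    For reference, the kind of user setup that extra step involves might look like this (a sketch with assumed names; it is not taken from the post, and your container, user and tablespace names will differ):

        -- Run as a privileged user connected to the CDB, then switch into the pluggable database
        ALTER SESSION SET CONTAINER = PDBORA12;

        CREATE USER odi_repo IDENTIFIED BY "change_me"
            DEFAULT TABLESPACE users
            QUOTA UNLIMITED ON users;

        GRANT CONNECT, RESOURCE TO odi_repo;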

    Read the article

  • Database Developers Can Now Save 20%

    - by stephen.garth
    Database developers can now increase productivity and save money at the same time. For a limited time, Oracle Store is offering a 20% discount on Oracle SQL Developer Data Modeler. Just enter the code SQLDDM at checkout to get the discount. Oracle SQL Developer Data Modeler is an independent, standalone product with a full spectrum of data and database modeling tools and utilities, including modeling for Entity Relationship Diagrams (ERD), relational (database design), data type and multi-dimensional modeling, full forward and reverse engineering, and DDL code generation. SQL Developer Data Modeler can connect to any supported Oracle Database and is platform independent. Save 20% on Oracle SQL Developer Data Modeler at Oracle Store - Discount Code SQLDDM. Find out more about Oracle SQL Developer and Oracle SQL Developer Data Modeler.

    Read the article

  • HealthSouth Upgrades to Oracle Database 11g Release 2 and Oracle RAC

    - by jenny.gelhausen
    HealthSouth Corporation, the nation's largest provider of inpatient rehabilitation services, has upgraded to Oracle Database 11g Release 2 underneath PeopleSoft Enterprise Human Capital Management. Additionally, HealthSouth improved the availability and performance of its Oracle PeopleSoft Enterprise applications and Enterprise Data Warehouse using Oracle Database 11g and Oracle Real Application Clusters. The Oracle Database options Oracle Advanced Compression and Oracle Partitioning are key to HealthSouth's data lifecycle management practices and to utilizing storage systems more efficiently. Using compression on both partitioned and non-partitioned tables in its data warehouse, HealthSouth has seen a 4X storage reduction without any cost to performance. "Oracle Database 11g, along with Oracle Real Application Clusters, Advanced Compression and Partitioning, all lend themselves to delivering highly available, performant data warehousing," said Henry Lovoy, Data Manager, HealthSouth Corporation. Press Release

    Read the article

  • Oracle Database 12c By Example – SQL Developer and Multitenant

    - by thatjeffsmith
    As you may have heard, Oracle Database 12c is now available. In addition to the binaries and docs going out, we also published a few new Oracle By Example (OBE) chapters. You can find those links here on our product page. Do you know who found these, practically the minute they were published? An enterprising DBA extraordinaire who just happened to be presenting at the ODTUG KScope13 conference in New Orleans. He thought it would be a good idea to download the new software over hotel WiFi, install and create a new multitenant database, watch a few OBEs, and then demo that live for his 'SQL Developer for DBAs' session. Pretty crazy, right? Well, he did it, and I was there to watch. Way cool. You can listen to @leight0nn tell his story in his own words via this ODTUG interview with @oraclenered. In case you're too giddy to sit through the video, I'll give you a preview: he successfully cloned a pluggable database in about a minute with only a couple of clicks, using Oracle SQL Developer 3.2.20.09 while connected to a 12c database.

    Read the article

  • Join Oracle Database at Microsoft TechEd next week.

    - by Mandy Ho
    For the past nine years, Oracle has been a proud sponsor of Microsoft TechEd. TechEd is Microsoft's premier technology conference for IT professionals and developers. This year, Oracle will demonstrate its latest database software for MS Windows, including Oracle Database 11g Enterprise and Express editions, TimesTen and MySQL. Developers can learn how to develop .Net applications for the Oracle Database using the latest technologies, such as Entity Framework, LINQ and WCF Data Services. Attendees can also learn about the new MySQL features enabling rapid installation, GUI-based application design, backup & recovery and much more within a Windows environment. Oracle will have a BOF (Birds of a Feather session) on Tuesday, June 12, from 3:15 to 4:30; the topic will be Big Data: The Next Frontier for Innovation, Competition and Productivity. Otherwise, you can visit Oracle every day during the expo hours from Monday, June 11 to Thursday, June 14 at our booth #613. Talk to experts on TimesTen and MySQL on Windows and .NET. We will also have our 3D interactive demos of Oracle's engineered systems, showing off Oracle Exadata, Database Appliance and more. Visit http://northamerica.msteched.com/ for more information.

    Read the article

  • Announcing Oracle Database Mobile Server 11gR2

    - by Eric Jensen
    I'm pleased to announce that Oracle Database Mobile Server 11gR2 has been released. It's available now for download by existing customers, or anyone who wants to try it out. New features include: support for J2ME platforms, specifically CDC platforms including OJEC (this is in addition to our existing support for Java SE and SE Embedded); per-application integration with Berkeley DB on Android; and server-side support for the Apache TomEE platform. Adding support for Oracle Java Micro Edition Embedded Client (OJEC for short) is an important milestone for us; it enables Database Mobile Server to work with any of the incredibly wide array of devices that run J2ME. In particular, it enables management of networks of embedded devices, also known as machine-to-machine (M2M) networks. As these types of networks become more common in areas like healthcare, automotive, and manufacturing, we're seeing demand for Database Mobile Server from new and different areas. This is in addition to our existing array of mobile device use cases. The Android integration feature with Berkeley DB represents the completion of phase I of our Android support plan; we now offer a full set of sync, device and app management features for that platform. Going forward, we plan to continue the dual-focus approach, supporting mobile platforms such as Android and iOS (hint) on the one hand, and networks of embedded M2M devices on the other. In either case, Database Mobile Server continues to be the best way to connect data-driven applications to an Oracle backend.

    Read the article

  • Flashback Database

    - by Sebastian Solbach (DBA Community)
    Flashback Database is the Oracle Database feature that lets you rewind the database to a specific point in time, or rather to a specific System Change Number (SCN) - comparable to the rewind button on a cassette recorder or the back button on a CD player. While this procedure is rarely used on production systems, since rewinding loses all data after the chosen point (unless it is exported beforehand), there are many use cases for test and standby systems:
    - resetting the system after a failed application upgrade
    - an alternative to point-in-time recovery (PITR) with a subsequent roll forward (particularly suited to standby systems)
    - a test database with a defined, reproducible starting point (e.g. for Real Application Testing)
    - database upgrade testing
    Some existing database features use Flashback Database implicitly: Snapshot Standby, and reinstantiation of a standby (e.g. with Fast Start Failover). Although the feature is ideally suited to standby and test systems, there is a certain reluctance to use Flashback Database. One cause is often the fear of the additional load generated by writing the flashback logs, as well as the additional disk space required. In practice the load is usually quite small (around 5%), and the additional space needed for the flashback logs can be estimated fairly precisely. It is also often overlooked that even without explicitly enabling flashback logging, it is possible to define a guaranteed restore point (GRP) and later rewind the database to that restore point. In 11gR2, setting a guaranteed restore point works while the database is up and running. How exactly this works, how it differs from enabling flashback logging in general, how Flashback Database can be monitored, and what else needs to be taken into account - that is what this tip is about.
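
    A SQL*Plus-style sketch of the guaranteed restore point workflow described above (the restore point name is just an example):

        CREATE RESTORE POINT before_app_upgrade GUARANTEE FLASHBACK DATABASE;

        -- ... run the application upgrade or test ...

        -- To rewind, the database must be mounted rather than open:
        SHUTDOWN IMMEDIATE
        STARTUP MOUNT
        FLASHBACK DATABASE TO RESTORE POINT before_app_upgrade;
        ALTER DATABASE OPEN RESETLOGS;
        -- Afterwards the guaranteed restore point can be dropped to release the flashback logs:
        DROP RESTORE POINT before_app_upgrade;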

    Read the article

  • Oracle NoSQL Database: Cleaner Performance

    - by Charles Lamb
    In an earlier post I noted that Berkeley DB Java Edition cleaner performance had improved significantly in release 5.x. From an Oracle NoSQL Database point of view, this is important because Berkeley DB Java Edition is the core storage engine for Oracle NoSQL Database. Many contemporary NoSQL databases utilize log-based (i.e. append-only) storage systems, and it is well understood that these architectures also require a "cleaning" or "compaction" mechanism (effectively a garbage collector) to free up unused space. Ten years ago, when we set out to write a new Berkeley DB storage architecture for the BDB Java Edition ("JE"), we knew that the corresponding compaction mechanism would take years to perfect. "Cleaning", or GC, is a hard problem to solve, and it has taken all of those years of experience, bug fixes, tuning exercises, user deployment, and user feedback to bring it to the mature point it is at today. Reports like Vinoth Chandar's, where he observes a 20x improvement, validate the maturity of JE's cleaner. Cleaner performance has a direct impact on predictability and throughput in Oracle NoSQL Database. A cleaner that is too aggressive will consume too many resources and negatively affect system throughput, while a cleaner that is not aggressive enough will allow the disk storage to become inefficient over time. It has to work well out of the box, and it needs to be configurable so that customers can tune it for their specific workloads and requirements. The JE cleaner has been field tested in production for many years, managing instances with hundreds of GBs to TBs of data. The maturity of the cleaner and the entire underlying JE storage system is one of the key advantages that Oracle NoSQL Database brings to the table -- we haven't had to reinvent the wheel.

    Read the article
