Search Results

Search found 41135 results on 1646 pages for 'non relational database'.

Page 12/1646 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings more complexity. Why? By its very nature, there are more "moving parts" to monitor and manage, from physical, virtual and logical hosts to fault detection and failover software to redundant networking components - the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great. When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to administer properly. Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will present the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover:
      * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of proactively monitoring and optimizing database performance and availability.
      * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!
      * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.
    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and online demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software). While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
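    To give a flavor of what NDBINFO exposes, here is a minimal sketch - not from the webinar - that dumps the ndbinfo.memoryusage status table from a C# client. It assumes the MySQL Connector/NET driver is referenced, and the connection string is a placeholder to adapt to your own cluster:

    // Minimal sketch: polling MySQL Cluster's ndbinfo schema from C#.
    // Assumes MySQL Connector/NET; the connection string is a placeholder.
    using System;
    using MySql.Data.MySqlClient;

    class NdbinfoProbe
    {
        static void Main()
        {
            var connString = "Server=mgmhost;Database=ndbinfo;Uid=monitor;Pwd=secret;";
            using (var conn = new MySqlConnection(connString))
            {
                conn.Open();
                // memoryusage is one of the real-time status tables NDBINFO provides.
                var cmd = new MySqlCommand("SELECT * FROM ndbinfo.memoryusage", conn);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Print each column generically rather than assuming a fixed layout.
                        for (int i = 0; i < reader.FieldCount; i++)
                            Console.Write("{0}={1} ", reader.GetName(i), reader[i]);
                        Console.WriteLine();
                    }
                }
            }
        }
    }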

    Read the article

  • Steps to Mitigate Database Security Worst Practices

    - by Troy Kitch
    The recent Top 6 Database Security Worst Practices webcast revealed the top 6, plus a bonus 7th, database security worst practices:
      * Privileged user "all access pass"
      * Allow application bypass
      * Minimal and inconsistent monitoring/auditing
      * Not securing application data from OS-level user
      * No SQL injection defense
      * Sensitive data in non-production environments
      * Not securing complete database environment
    These practices are uncovered in the 2010 IOUG Data Security Survey. As part of the webcast we looked at each one of these practices and how you can mitigate them with the Oracle Defense-in-Depth approach to database security. There's a lot of additional information to glean from the webcast, so I encourage you to check it out here and see how your organization measures up.
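    Of the seven, "no SQL injection defense" is the one developers can fix directly in application code. As a minimal illustration - not from the webcast, and with a hypothetical connection string and Users table - a parameterized ADO.NET command keeps attacker-supplied input out of the SQL text:

    // Minimal sketch of SQL injection defense via parameterized queries (ADO.NET).
    // The connection string and table are hypothetical placeholders.
    using System;
    using System.Data.SqlClient;

    class SafeLookup
    {
        static void Main(string[] args)
        {
            string userInput = args.Length > 0 ? args[0] : "alice";
            using (var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true"))
            {
                conn.Open();
                // The input is bound as a parameter, never concatenated into the SQL text,
                // so "' OR 1=1 --" style payloads are treated as literal data.
                var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM Users WHERE UserName = @name", conn);
                cmd.Parameters.AddWithValue("@name", userInput);
                Console.WriteLine("Matches: {0}", (int)cmd.ExecuteScalar());
            }
        }
    }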

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point
    ORACLE OPENWORLD, SAN FRANCISCO - October 1, 2012
    News Facts
    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications.
    Next-Generation Technologies Deliver Dramatic Performance Improvements
    Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including:
      * Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata's unique Hybrid Columnar Compression capabilities, hundreds of terabytes of user data can now be managed entirely within Flash.
      * 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold.
      * 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors.
      * Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2, providing 40 10Gb network ports per rack for connecting users and moving data.
      * Up to 30 percent reduction in power and cooling.
    Configured for Your Business, Available Today
    Oracle Exadata X3-2 Database In-Memory Machine systems are available in Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configurations to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations, and existing systems can be upgraded with Oracle Exadata X3-2 servers. Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle's PeopleSoft, Oracle's Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.
    Supporting Quotes
    "Forward-looking enterprises are moving towards Cloud Computing architectures," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Oracle Exadata's unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today."
    Supporting Resources
      * Oracle Press Release
      * Oracle Exadata Database Machine
      * Oracle Exadata X3-2 Database In-Memory Machine
      * Oracle Exadata X3-8 Database In-Memory Machine
      * Oracle Database 11g
      * Follow Oracle Database via Blog, Facebook and Twitter
      * Oracle OpenWorld 2012
      * Oracle OpenWorld 2012 Keynotes
      * Like Oracle OpenWorld on Facebook
      * Follow Oracle OpenWorld on Twitter
      * Oracle OpenWorld Blog
      * Oracle OpenWorld on LinkedIn
      * Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Offloading (Some) EBS 12 Reporting to Active Data Guard Instances

    - by Steven Chan
    Oracle Active Data Guard allows Oracle Database users to:
      * Create a physical standby database for business continuity and disaster recovery
      * Offload reporting from the production database to the read-only physical standby database
    E-Business Suite customers have been able to use Active Data Guard to create physical standby databases for their EBS environments since the feature was introduced with the 11g Database. EBS sysadmins can use the generic Active Data Guard documentation to take advantage of the Active Data Guard standby database capabilities. I am pleased to announce that it is now possible to offload a subset of ReportWriter-based reports -- some, but not all -- from a production EBS environment to an Active Data Guard physical standby database. But before I go into the details of this newly-certified configuration, it's necessary to understand some details about what happens whenever someone attempts to access the E-Business Suite.

    Read the article

  • FAQ and Best Tips Regarding Learning Databases?

    - by AdityaGameProgrammer
    For a programmer with no prior exposure to databases:
      * What would be a good database to learn - Oracle vs. SQL Server vs. MySQL vs. PostgreSQL? I have come across a lot of discussion of MySQL and PostgreSQL, and frankly I am confused about which to start with. Are these very different, in the sense that if one had to switch, would the exposure to one be counter-productive to learning the other?
      * Is working with databases heavily platform dependent?
      * What exactly do people mean by database programming vs. administration?
      * Do people choose databases based on the programming language used for the application developed?
      * In general, when working with databases, is it implicit that we work with some server?
      * Does the choice of database differ when it comes to game development? If so, what factors does it differ by?
      * What are the best tips that you have found to be useful when learning databases?
    Edit: Some FAQs I had, and found the same on SO:
      * What should every developer know about databases?
      * Which database if learning from scratch in 2010?
      * For a beginner, is there much difference between MySQL and PostgreSQL?
      * What RDBMS should I learn/use? (MySQL/SQL Server/Oracle etc.)
      * To what extent should a developer learn databases?
      * How are database programmers different from other programmers?
      * What kind of databases are used in games?

    Read the article

  • Protecting offline IRM rights and the error "Unable to Connect to Offline database"

    - by Simon Thorpe
    One of the most common problems I get asked about Oracle IRM relates to the error message "Unable to Connect to Offline database". This error message is a result of how Oracle IRM protects the cached rights on the local machine; if that cache has become invalid in any way, this error is thrown.
    Offline rights and security
    First we need to understand how Oracle IRM handles offline use. The way it is implemented is one of the main reasons why Oracle IRM is the leading document security solution: it demonstrates our methodology of ensuring that solutions address both security and usability, and puts the balance of these two in your control. Each classification has a set of predefined roles that the manager of the classification can assign to users. Each role has an offline period which determines the amount of time a user can access content without having to communicate with the IRM server. By default for the context model, the classification system that ships out of the box with Oracle IRM, the offline period for each role is 3 days. This is easily changed, however, and can be as low as under an hour or as long as years. It is also possible to switch off the ability to access content offline, which can be useful when content is very sensitive and requires a tight leash.
    So when a user is online, the Oracle IRM Desktop transparently communicates with the server in the background and updates the user's rights and offline periods. This transparent synchronization period is determined by the server and communicated to all IRM Desktops, and allows users' rights to be kept up to date without their intervention. This supports some scenarios which are key to a successful IRM solution:
      * A user doesn't have to make any decision when going offline; they simply unplug their laptop and they already have their offline periods synchronized to the maximum values. Any solution that requires a user to make a decision at the point of going offline isn't going to work, because people forget to do this and will therefore be unable to legitimately access their content offline.
      * If your rights change to REMOVE your access to content, this also happens in the background. This is very useful when someone has an offline duration of a week and happens to make a connection to the internet 3 days into that offline period; the Oracle IRM Desktop detects this online state and automatically updates all rights for the user. This means the business risk of setting long offline periods is reduced: because of the daily transparent sync, you can reflect changes as soon as the user is online. Of course, if they choose not to come online at all during that week, you cannot effect change, but you take that risk in giving the 7 day offline period in the first place.
      * If you are added to a NEW classification during the day, this will automatically be synchronized without the user even having to open a piece of content secured against that classification. This is very important. Consider the scenario where a senior executive downloads all their email but doesn't open any of it, disconnects the laptop and then gets on a plane. During the flight they attempt to open a document, attached to a downloaded email, which has been secured against an IRM classification the user was not even aware they had access to. Because their new role in this classification was automatically synchronized, their experience is a good one and the document opens.
    More information on how the Oracle IRM classification model works can be found in this article by Martin Abrahams.
    So what about problems accessing the offline rights database?
    So onto the core issue: when these rights are cached to your machine, they are stored in an encrypted database. The encryption of this offline database is keyed to the instance of the installation of the IRM Desktop and the Windows user account. Why? What you do not want is for someone to get their rights for content and then copy these files across hundreds of other machines, thereby getting access to sensitive content across many environments. The IRM server has a setting which controls how many times you can cache these rights on unique machines, because people typically access IRM content on more than one computer: their work desktop, a laptop, and often a home computer. So Oracle IRM allows for the usability of caching rights on more than one computer whilst retaining strong security over this cache.
    So what happens if these files are corrupted in some way? That's when you will see the error "Unable to Connect to Offline database". The most common instance of seeing this is when you are using virtual machines and copy them from one computer to the next. The virtual machine software, VMware Workstation for example, changes the unique information of that virtual machine and as such invalidates the offline database.
    How do you solve the problem?
    Resolution is, however, simple. You just delete all of the offline database files on the machine and they will be recreated with working encryption when the Oracle IRM Desktop next starts. This does mean that the IRM server will think you have your rights cached to more than one computer, and you will need to re-request your rights even though you are only going to be accessing them on one, because the server still thinks the old cache is valid. So be aware: it is good practice to increase the server limit from the default of 1 to, say, 3 or 4. This is done using the Enterprise Manager instance of IRM.
    To delete these offline files I have a simple .bat file you can use: Download DeleteOfflineDBs.bat
    Note that this uses pskill to stop irmBackground.exe from running. This process is part of the IRM Desktop and holds open a lock on the offline database. Either kill it from Task Manager or use pskill as part of the script.
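    If you prefer not to use a batch file, the same cleanup can be sketched in C#. This is a rough equivalent only: the cache directory below is a hypothetical placeholder, since the actual location varies by IRM Desktop version - check where your installation keeps its offline database before running anything like this.

    // Hedged sketch of the cleanup DeleteOfflineDBs.bat performs: stop
    // irmBackground.exe (which locks the offline database), then delete the
    // cached files so the IRM Desktop recreates them on next start.
    using System;
    using System.Diagnostics;
    using System.IO;

    class ResetIrmOfflineCache
    {
        const string CacheDir = @"C:\Users\me\AppData\Local\Oracle\IRM"; // hypothetical path

        static void Main()
        {
            // Equivalent of the pskill step in the .bat file.
            foreach (var p in Process.GetProcessesByName("irmBackground"))
            {
                p.Kill();
                p.WaitForExit();
            }
            if (Directory.Exists(CacheDir))
            {
                foreach (var file in Directory.GetFiles(CacheDir))
                    File.Delete(file);
                Console.WriteLine("Offline database files deleted; they will be recreated on next start.");
            }
        }
    }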

    Read the article

  • Oracle Database Firewall 5.0 Now Available for Download

    - by Lajos Sárecz
    On 20 May 2010 we announced the acquisition of Secerno, the company behind the database firewall solution. Relatively little has been heard about the product since then; here in Hungary the only talk on it was a brief half-hour session by Stuart Sharp at the autumn ITBN conference. Moreover, the product could not be purchased since the acquisition, as the development work following the merge was not yet complete. Since January 11, however, the Oracle Database Firewall 5.0 installer has been downloadable from the Oracle eDelivery site, within the Oracle Database Product Pack, for the Linux x86 platform. The Database Firewall can be considered the first line of database defense. It monitors database activity on the network in real time. With its SQL language analysis engine it can detect, with remarkable precision, external and internal attacks and unauthorized, maliciously executed transactions. The sophistication of the SQL analysis engine allows filtering with close to 100% accuracy and reliability, which is extremely important: it is not enough to filter out every attacking transaction, it is just as important not to filter out a single legitimate business transaction, since that too can cause serious business damage. Anyone who registers for and attends our Oracle Security Summit event on January 27 can learn more about the database firewall; the plan is for Stuart Sharp to present again, this time with a full hour to share many more details with Hungarian customers and partners. Before the Database Firewall talk, I will give a roughly half-hour overview of Oracle Database security solutions.

    Read the article

  • Designing a social network with CQRS, graph databases and relational databases in mind

    - by Siraj Mansour
    I have done quite an amount of research on the topic so far, but I couldn't come to a conclusion to make up my mind. I am designing a social network, and during my research I stumbled upon graph databases. I found Neo4j pretty interesting for user relations and traversing through nodes. I also thought of using a relational database such as MS SQL or MySQL to store entity data only, and depending on Neo4j for connections between entities. Of course this means more work in my application to store and pull data in and out of 2 different sources.
    My first question: Is this approach (graph + relational) a good one for designing my social network, keeping in mind that users on social networks don't have to be in sync with real data by the split second? What are the positives and negatives of this approach?
    My second question: I've been doing some reading on CQRS, and as I understood it, it is mostly useful for collaborative environments, and environments where users see a lot of "stale" data. Social networks have shared comments, events, etc., and many users query or update the same data. Could CQRS be a helpful approach? Would it give any performance/scalability benefits, or just non-useful complexity? Is it fairly applicable with my possible choice of the (graph + relational) databases approach mentioned above?
    My purpose is to know if the approaches I have mentioned above seem good enough for the business context.
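    To make the CQRS part of the question concrete, here is a minimal, hypothetical C# sketch of the core idea: commands go through a handler that updates the system of record and (eventually) a denormalized read model, while queries only ever read that model. All type names are illustrative placeholders, not from any framework.

    // Minimal CQRS sketch: writes go through a command handler against the
    // system of record; reads hit a denormalized read model that is updated
    // asynchronously - which is why readers may briefly see stale data.
    using System;
    using System.Collections.Generic;

    class PostCommentCommand
    {
        public Guid PostId;
        public string Author;
        public string Text;
    }

    class CommentReadModel // denormalized view, e.g. backed by a cache or a graph store
    {
        public readonly Dictionary<Guid, List<string>> CommentsByPost = new Dictionary<Guid, List<string>>();
    }

    class CommandHandler
    {
        readonly CommentReadModel readModel;
        public CommandHandler(CommentReadModel rm) { readModel = rm; }

        public void Handle(PostCommentCommand cmd)
        {
            // 1. Persist to the write store (omitted - e.g. a relational INSERT).
            // 2. Publish an event that updates the read model, eventually.
            if (!readModel.CommentsByPost.ContainsKey(cmd.PostId))
                readModel.CommentsByPost[cmd.PostId] = new List<string>();
            readModel.CommentsByPost[cmd.PostId].Add(cmd.Author + ": " + cmd.Text);
        }
    }

    class Program
    {
        static void Main()
        {
            var rm = new CommentReadModel();
            var handler = new CommandHandler(rm);
            handler.Handle(new PostCommentCommand { PostId = Guid.NewGuid(), Author = "siraj", Text = "hello" });
            // Queries never touch the command path; they only read the view.
            foreach (var kv in rm.CommentsByPost)
                Console.WriteLine("{0}: {1} comment(s)", kv.Key, kv.Value.Count);
        }
    }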

    Read the article

  • Passing data from one database to another database table (Access) (C#)

    - by SAMIR BHOGAYTA
    // Copies all rows from the Master table in Data.mdb into the Master table
    // in Backup.mdb using the Jet OLEDB provider.
    using System;
    using System.Data;
    using System.Data.OleDb;

    string conString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Backup.mdb;Jet OLEDB:Database Password=12345";
    OleDbConnection dbconn = new OleDbConnection(conString); // the original never passed conString to the connection
    OleDbCommand dbcommand = new OleDbCommand();
    try
    {
        if (dbconn.State == ConnectionState.Closed)
            dbconn.Open();
        // Jet lets a query reference a second .mdb file inline; adjust the
        // path to Data.mdb as needed for your environment.
        string selQuery = "INSERT INTO [Master] SELECT * FROM [MS Access;DATABASE=" + "\\Data.mdb" + ";].[Master]";
        dbcommand.CommandText = selQuery;
        dbcommand.CommandType = CommandType.Text;
        dbcommand.Connection = dbconn;
        int result = dbcommand.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
        // Don't swallow errors silently as the original did; at least log them.
        Console.WriteLine(ex.Message);
    }
    finally
    {
        dbconn.Close();
    }

    Read the article

  • MAA with the Database Machine: Maximum Availability

    - by Fekete Zoltán
    A few days ago an Oracle white paper exploring maximum availability was published :): Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine. This document analyzes the use of Oracle Data Guard in an Exadata environment. On the last pages we also find some extremely useful links. What can Data Guard be used for?
      * handling disaster scenarios
      * rolling database upgrades
      * a migration path to an Exadata environment
      * putting the standby database to use
    The Sun Oracle Database Machine is available in three configurations: Full Rack, Half Rack and Quarter Rack. It can be upgraded, and even several Full Racks can be connected to operate as a single machine. The sky is the limit! :) After all, the sun is our most important star. The Database Machine already provides high availability on its own, since every component important to its operation is at least duplicated! Naturally, this applies to the data as well. The Database Machine is an ideal, fast environment for running both OLTP and DW workloads, and for database consolidation. For transactional (OLTP) systems it has long been an important requirement that behind the primary site there be a disaster site that can take over database operations if flood, fire or some other sad disaster strikes the primary site. Nowadays the MAA architecture, Maximum Availability Architecture, also plays an important role in operating data warehouses (DW). The pdf can be downloaded here: Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine.

    Read the article

  • Minimal set of critical database operations

    - by Juan Carlos Coto
    In designing the data layer code for an application, I'm trying to determine if there is a minimal set of database operations (both single and combined) that are essential for proper application function (i.e. the database is left in an expected state after every data access call). Is there a way to determine the minimal set of database operations (functions, transactions, etc.) that are critical for an application to function correctly? How do I find it? Thanks very much!
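    Whatever the final set looks like, one operation that almost certainly belongs in it is the atomic multi-statement transaction, since that is what guarantees the database is left in an expected state even when a call fails partway through. A minimal ADO.NET sketch, with a hypothetical connection string and Accounts table:

    // Sketch of an atomic data-layer operation: either both statements commit,
    // or neither does, so the database is never left half-updated.
    using System;
    using System.Data.SqlClient;

    class TransferFunds
    {
        static void Main()
        {
            using (var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true"))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    try
                    {
                        var debit = new SqlCommand(
                            "UPDATE Accounts SET Balance = Balance - 100 WHERE Id = 1", conn, tx);
                        var credit = new SqlCommand(
                            "UPDATE Accounts SET Balance = Balance + 100 WHERE Id = 2", conn, tx);
                        debit.ExecuteNonQuery();
                        credit.ExecuteNonQuery();
                        tx.Commit();    // the expected state: both rows changed
                    }
                    catch
                    {
                        tx.Rollback();  // the expected state: nothing changed
                        throw;
                    }
                }
            }
        }
    }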

    Read the article

  • How do I Integrate Production Database Hot Fixes into Shared Database Development model?

    - by TetonSig
    We are using SQL Source Control 3, SQL Compare, and SQL Data Compare from Redgate, Mercurial repositories, TeamCity, and a set of 4 environments including production. I am working on getting us to a dedicated environment per developer, but for at least the next 6 months we are stuck with a shared model.
    To summarize our current system: we have a DEV SQL Server where developers first make changes/additions. They commit their changes through SQL Source Control to a local hgdev repository. When they execute an hg push to the main repository, TeamCity listens for that and then (among other things) pushes the hgdev repository to hgrc. Another TeamCity process listens for that, does a pull from hgrc, and deploys the latest to a QA SQL Server where regression and integration tests are run. When those are passed, a push from hgrc to hgprod occurs. We do a compare of hgprod to our PREPROD SQL Server and generate deployment/rollback scripts for our production release.
    Separate from the above, we have database hot fixes that need to be applied in between releases. The process there is for our Operations team to make changes on the PREPROD database and then, after testing, to use SQL Source Control to commit their hot fix changes to hgprod from the PREPROD database, then do a compare from hgprod to PRODUCTION, create deployment scripts, and run them on PRODUCTION.
    If we were in a dedicated database-per-developer model, we could simply push hgprod back to hgdev automatically and merge in the hot fix change (through TeamCity monitoring for hgprod check-ins), and then developers would pick it up and merge it into their local repository and database periodically. However, given that with a shared model the DEV database itself is the source of all changes, this won't work. Pushing hot fixes back to hgdev will show up in SQL Source Control as being different from the DEV SQL Server, and therefore we would need to overwrite the repository with the "change" from the DEV SQL Server.
    My only workaround so far is to just have Ops assign a developer the hotfix ticket with a script attached, and then we run their hotfixes against DEV ourselves to merge them back in. I'm not happy with that solution. Other than working faster to get to a dedicated environment, are there other ways to keep this loop going automatically?

    Read the article

  • Oracle Database 11g Underground Advice for Database Administrators, by April C. Sims

    - by alejandro.vargas
    Recently I received a request to review the book "Oracle Database 11g Underground Advice for Database Administrators" by April C. Sims. I was happy to have the opportunity to learn some details about the author; she is an active contributor to the Oracle DBA community through her blog "Oracle High Availability". The book is a serious and interesting work; I think it provides a good study and reference guide for DBAs who want to understand and implement highly available environments. She starts by walking through the more general aspects and skills required of a DBA, and then goes on to explain the steps required to implement Data Guard, use RMAN, upgrade to 11g, etc.

    Read the article

  • dependency analysis from C# code thru to database tables/columns

    - by fpdave
    I'm looking for a tool to do system-wide dependency analysis in C# code and SQL Server databases. It's looking like the only tool available that does this might be CAST (CAST Software), which is expensive, and it does lots more besides that I don't really need. C# code through to database column dependency would be hugely useful for many reasons, including:
      * determining the effects of database changes throughout the system
      * seeing hot spots in the database schema
      * finding dead stored procedures/tables/etc.
      * understanding the existing code base
    Does anyone know of any such tools?
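    Absent a dedicated tool, a crude first pass is to scan the C# source for references to known table names pulled from the schema. The sketch below is a hypothetical, regex-based approximation only - it cannot see through dynamic SQL or ORMs the way a product like CAST can, and the source path and table names are placeholders:

    // Naive dependency scan: grep C# source files for occurrences of known
    // table names. A rough approximation only - real tools parse SQL and
    // resolve call graphs; this just finds textual matches.
    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class NaiveDependencyScan
    {
        static void Main()
        {
            // In practice you'd pull these from sys.tables; hard-coded here.
            string[] tables = { "Orders", "Customers", "OrderLines" };

            foreach (var file in Directory.GetFiles(@"C:\src\MyApp", "*.cs", SearchOption.AllDirectories))
            {
                string text = File.ReadAllText(file);
                foreach (var table in tables)
                {
                    // Word-boundary match to avoid partial hits like "OrdersArchive".
                    if (Regex.IsMatch(text, $@"\b{Regex.Escape(table)}\b"))
                        Console.WriteLine("{0} references {1}", file, table);
                }
            }
        }
    }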

    Read the article

  • Database Documentation - Lands of Trolls: Why and How?

    When database documentation is mentioned in an IT department, everybody nods wisely, yet everyone does their best to avoid doing it. Attention to database documentation can be the best investment in time a development group can make. It is essential, and no system can be properly maintained without it. Feodor gives a sensible explanation and guideline for the unloved task of creating database documentation.

    Read the article

  • Oracle Database 11g Helps Control Exponential Data Growth

    - by [email protected]
    The 2010 ESG annual customer survey is now available. As part of it, ESG interviewed 300 customers about their IT priorities and, unsurprisingly, "Manage Data Growth" is top of the list. Perhaps less self-evident is the proposed solution to target this prime concern: "Often overlooked because it is a database platform, Oracle Database 11g offers additional capabilities such as automatic storage management (ASM), advanced data compression, and data protection that make managing data growth much easier for organizations of any size." The paper goes on to discuss these capabilities and highlights their potential benefits. Oracle Database 11g Helps Control Exponential Database Growth - a worthwhile read for anyone having to deal with rapidly increasing amounts of data. Download your free copy here.

    Read the article

  • Announcing Oracle Audit Vault and Database Firewall

    - by Troy Kitch
    Today, Oracle announced the new Oracle Audit Vault and Database Firewall product, which unifies database activity monitoring and audit data analysis in one solution. This new product expands protection beyond Oracle and third party databases with support for auditing the operating system, directories and custom sources. Here are some of the key features of Oracle Audit Vault and Database Firewall:
      * Single Administrator Console
      * Default Reports
      * Out-of-the-Box Compliance Reporting
      * Report with Data from Multiple Source Types
      * Audit Stored Procedure Calls - Not Visible on the Network
      * Extensive Audit Details
      * Blocking SQL Injection Attacks
      * Powerful Alerting Filter Conditions
    To learn more about the new features in Oracle Audit Vault and Database Firewall, watch the on-demand webcast.

    Read the article

  • Stairway to Database Source Control Level 2: Getting a Database into Source Control

    In this level, we're going to continue the philosophy of learning by example, and get a database into our SVN repository. We will also consider our overall approach to source control for databases, and the manner in which our team will develop these databases, concurrently.

    Read the article

  • Join the Webcast on September 18 to Learn Benefits of Upgrading to Oracle Database 11g

    - by Cinzia Mascanzoni
    Attend the Webcast on Tuesday September 18, 2012, at 10 a.m. PT to learn the "Three Compelling Reasons to Upgrade to Oracle Database 11g." During the live Webcast, Oracle experts will explain how customers who are still working with Oracle Database 10g or an even older version can gain the business, operational, and technical benefits provided by Oracle Database 11g. If you cannot participate in the live event, a replay will be available on the same registration page shortly afterward.

    Read the article

  • Inserting data to database from Android

    - by Angel
    I have to build an application where the requirement is that my clients will send data from their Android devices and I have to save that data to a database. I have done the coding that inserts data from the Android emulator into my XAMPP database on localhost; now I have to implement the real thing. How can I connect the devices where my application will be installed to the XAMPP database I have created, so that the data they send can be inserted into it?

    Read the article

  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. So one idea I have as a fun toy project to learn about data structures and C programming was to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing 1TB, let's say.
    So with that, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so I think that obviously has to apply to a database too, and it clearly does - whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem.
    That said, there would come a point in scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design it with that mindset (I assume). However, reading from secondary memory would be much slower, and regardless some portion of the database has to be in primary memory in order for you to be able to do anything with it.
    I read this post: Why use a database instead of just saving your data to disk? And I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem as the above question suggests). It seems the key is indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but indexes in a very large database might be prohibitively large, too.
    So my question is: how is I/O generally done with large databases like the one I described above, which would be at least 1TB storing a big adjacency list? If indexing is more or less the answer, how exactly does indexing work - what data structures should be involved?
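    To make the indexing idea concrete: the usual trick is to keep a small structure in fast memory that maps keys to byte offsets, then seek directly to the record on disk instead of scanning. A toy C# sketch of that pattern, with a made-up fixed-size record format:

    // Toy illustration of why indexes make secondary-memory reads fast:
    // keep an in-memory map from record id to byte offset, then Seek()
    // straight to the record instead of scanning the whole file.
    // The record format here (id + fixed 64-byte payload) is made up.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;

    class ToyIndexedStore
    {
        const int PayloadSize = 64;

        static void Main()
        {
            var index = new Dictionary<long, long>(); // record id -> file offset
            string path = "posts.dat";

            // Write a few fixed-size records, remembering where each one starts.
            using (var w = new BinaryWriter(File.Create(path)))
            {
                for (long id = 0; id < 1000; id++)
                {
                    index[id] = w.BaseStream.Position;
                    w.Write(id);
                    var payload = Encoding.ASCII.GetBytes($"post {id}".PadRight(PayloadSize));
                    w.Write(payload, 0, PayloadSize);
                }
            }

            // Random access: one seek + one small read, no matter the file size.
            using (var r = new BinaryReader(File.OpenRead(path)))
            {
                r.BaseStream.Seek(index[42], SeekOrigin.Begin);
                long id = r.ReadInt64();
                string text = Encoding.ASCII.GetString(r.ReadBytes(PayloadSize)).TrimEnd();
                Console.WriteLine("{0}: {1}", id, text);
            }
        }
    }

    Real databases replace the in-memory dictionary with a B-tree (or B+-tree), so the index itself is made of fixed-size pages that can live mostly on disk and be paged into primary memory a few nodes at a time; that is what keeps even a 1TB store usable with a modest memory footprint.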

    Read the article

  • The Future of the Database Begins

    - by Thanos Terentes Printzios
    For more than three-and-a-half decades, Oracle has defined database innovation. With our leading technologies, Oracle customers have been able to out-think and out-perform their competition. Soon organizations will be able to do that even faster. With the introduction of the Oracle Database In-Memory Option it will be possible to perform TRUE real-time, ad-hoc, analytic queries on your organization's business data as it exists at that moment and receive the results immediately. Imagine your sales team being able to know the total sales they have made as of right now - not last week, or even last night, but right now. Imagine innovation that accelerates business decision making to real-time speeds. That's the power of Oracle Database In-Memory. Watch Larry Ellison to find out what this and the other new features of Oracle Database 12c will do for you. Register Now for the Live Webcast

    Read the article

  • Building a database class in PHP

    - by Sprottenwels
    I wonder if I should write a database class for my application, and if so, how to accomplish it? Over on SO, someone mentioned it should be written as an abstract class. However, I can't understand why this would be a benefit. Do I understand correctly that if I wrote an abstract class, every other class whose methods need a database connection could simply extend this abstract class and have its own database object? If so, how is this different from a "normal" class from which I could instantiate a database object? Another option would be to completely forget about my own class and to instantiate a mysqli object on demand. What do you recommend?

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >