Search Results

Search found 31120 results on 1245 pages for 'database connectivity'.

Page 25/1245 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • Is it possible to detect that a database connection is to a copy rather than to the original database?

    - by user149238
    I have an application that needs to know if it is connected to the original database that it was installed with or if the connection is to a copy of that database. Is there any known method to know if the database has been cloned and the application is no longer connected to the original? I am specifically interested in MS SQL Server and Oracle. I was kicking ideas around for a stored procedure but that most likely doesn't have access to the hardware to confirm unique hardware information that would somewhat guarantee that the database is the one that it was originally connected to during installation. I'm trying to prevent/detect cloning of a database so that there is only 1 "true location of truth". Thanks!

    Read the article

  • Discount Codes Galore

    - by Cassandra Clark
    Saving money is at the top of everyones list right now. With this in mind the Oracle Technology Network team has compiled a list of discounts available at the Oracle Store. We are also introducing an Oracle Technology Network member discount from O'Reilly Media. If you subscribe to any of the Oracle Technology newsletters you also saw special discounts from CRC Press, Packt Publishing and Apress. We are going to do our best to bring you more offers like this every month. Now on to the discounts... Oracle Store offers - all below expiring May 31st 2010. Don't miss out! Expand Your Productivity with Oracle Open Office and Save 15%? Enter OTNOffice at checkout. Buy Now! Drive Business Agility and Performance with Industry-leading Oracle Database Management Packs.  Save 10% when you purchase them at the Oracle Store. Enter OTNDBMP at checkout. Buy Now! 15% Savings on Oracle Virtualization and Unbreakable Linux Support at the Oracle Store Enter code OTNLinuxVM at checkout. Buy Now! 20% Savings on Oracle SQL Developer Data Modeler Use OTNSQL at checkout. Buy Now! O'Reilly Oracle Technology Network Member Offer O'Reilly is generously offering Oracle Technology Network Members 35% off for print books and 40% off of eBooks. Browse Oracle titles at- http://oreilly.com/pub/topic/oracle. Use discount code TECNT at checkout.

    Read the article

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week.  What each project had in common was the need to validate at least one SQL connection.  Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of the UDL (Universal Data Link) files.  Create a new file and name it anything as long as it has the .udl extension. Open the file, choose a provider: Click Next >> or navigate to the Connection Tab to provide connection information.  Once you provide server and login credentials, the database list will populate.  At this point, you know the connection is valid. but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name which can come in handy.  The All tab is beneficial if you want to build a valid connection string to include in your own applications.  If you save the file and then open in Notepad, you’ll find that said connection string: Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp I hope this tip helps save you some time.  How do you test if you don’t have SSMS installed?

    Read the article

  • DB Schema for ACL involving 3 subdomains

    - by blacktie24
    Hi, I am trying to design a database schema for a web app which has 3 subdomains: a) internal employees b) clients c) contractors. The users will be able to communicate with each other to some degree, and there may be some resources that overlap between them. Any thoughts about this schema? Really appreciate your time and thoughts on this. Cheers! -- -- Table structure for table locations CREATE TABLE IF NOT EXISTS locations ( id bigint(20) NOT NULL, name varchar(250) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -- Table structure for table privileges CREATE TABLE IF NOT EXISTS privileges ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, resource_id int(11) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10 ; -- -- Table structure for table resources CREATE TABLE IF NOT EXISTS resources ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, user_type enum('internal','client','expert') NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ; -- -- Table structure for table roles CREATE TABLE IF NOT EXISTS roles ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, type enum('position','department') NOT NULL, parent_id int(11) DEFAULT NULL, user_type enum('internal','client','expert') NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ; -- -- Table structure for table role_perms CREATE TABLE IF NOT EXISTS role_perms ( id int(11) NOT NULL AUTO_INCREMENT, role_id int(11) NOT NULL, privilege_id int(11) NOT NULL, mode varchar(250) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ; -- -- Table structure for table users CREATE TABLE IF NOT EXISTS users ( id int(10) unsigned NOT NULL AUTO_INCREMENT, email varchar(255) NOT NULL, password varchar(255) NOT NULL, salt varchar(255) NOT NULL, type enum('internal','client','expert') NOT NULL, first_name varchar(255) NOT NULL, last_name varchar(255) NOT NULL, location_id int(11) NOT NULL, phone varchar(255) NOT NULL, status enum('active','inactive') NOT NULL DEFAULT 'active', PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ; -- -- Table structure for table user_perms CREATE TABLE IF NOT EXISTS user_perms ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, privilege_id int(11) NOT NULL, mode varchar(250) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ; -- -- Table structure for table user_roles CREATE TABLE IF NOT EXISTS user_roles ( id int(11) NOT NULL, user_id int(11) NOT NULL, role_id int(11) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Read the article

  • Oracle Database 12 c Training and Certification: What’s in it for Me?

    - by KJones
    Oracle Database 12c has officially launched! Through Oracle University, our expert instructors can introduce you to the features and functions of this new Oracle Database 12c product. Through training courses and certification exam prep seminars, you can build up your database knowledge and apply this knowledge to advance your career. Already an Oracle Database Expert? Why Oracle Database 12c Training is still a Good Idea Oracle is the industry leader for database technology and takes the release of new products very seriously. We continue to listen to customer needs and add features and functionality to address those needs. Oracle Database 12c is no exception. The following areas have been greatly enhanced and should be considered for your additional training or certification: • Database for Cloud Computing • Compression and Information Lifecycle Management (ILM) • Improved Performance & Scalability • Extreme Availability • Security Defense in Depth • Manageability Oracle Certified Database Administrators Reap Career Rewards Becoming an expert user of database technology through Oracle University's certification program widens your skill set to demonstrate your expertise implementing the most advanced database technology available. By doing so, you'll make yourself a more marketable employee and candidate in the job market.  Reasons to Become an Oracle Certified Database Administrator of Oracle Database 12c: • The new Oracle Database 12c certifications emphasize more advanced skills that align with industry standards, resulting in an even more valuable credential for customers and partners. • The Oracle Certified Associate (OCA) for Oracle Database 12c centers upon certification objectives that measure IT professionals' day-to-day skills, along with your ability to manage challenges. • Building upon all of the competencies incorporated into Oracle's Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification includes advanced knowledge and skills required of top-performing database administrators. • The Oracle Certified Master (OCM) for Oracle Database 12c - a very challenging and elite top-level certification - certifies the most highly skilled and experienced database experts. • Oracle offers 12c upgrade paths for existing Oracle Certified Professionals (OCP) and Oracle Certified Masters (OCM). Database 12c Training and Certification: Built with Your Input When creating Oracle Database 12c training courses and certifications, we wanted to know which tasks are most important in a DBA's day-to-day work. Instead of assuming what those tasks might be, we decided to develop a job task analysis survey for DBAs. The response rate from DBAs from around the world was overwhelming! The survey focused on the following key database areas: • DBA Core Essentials • Database Storage • High Availability • Scalability • Networking • Security • Very Large Database Administration • Distributed Databases After conducting this survey, we took this specific feedback and used it to help mold the new Oracle Database 12c training and certification curriculum. The benefit to you? You now have access to Oracle Database 12c courses and certification exams that were created with your specific on-the-job tasks in mind. Explore Oracle Database 12c Training & Certification Today Investing in Oracle Database 12c training courses and certifications will help you develop a great deal of knowledge, experience and expertise. 
Explore our portfolio of offerings to determine which skills you need as a DBA to get up-to-speed on Oracle Database 12c technology. Questions or comments about the new Oracle Database 12c offerings? Let us know in the comments below. - Diana Gray, Principle Curriculum Product Manager and Raza Siddiqui, Senior Principle Curriculum Product Manager

    Read the article

  • updating changes from one database to another database in the same server

    - by Pavan Kumar
    I have a copy of client database say 'DBCopy' which already contains modified data. The copy of the client database (DBCopy) is attached to the SQL Server where the Central Database (DBCentral) exists. Then I want to update whatever changes already present in DBCopy to DBCentral. Both DBCopy and DBCentral have same schema. How can i do it programatically using C#.NET maybe with a button click. Can you give me an example code as how to do it?. I am using SQL Server 2005 Standard Edition and VS 2008 SP1. In the actual scenario there are about 7 client database all with same schema as the central database. I am bringing copy of each client database and attach it to Central Server where the central database resides and try to update changes present in each copy of the client database to central database one by one programatically using C# .NET . The clients and the central server are physically seperate machines present in different places. They are not interconnected. I need to only update and insert new data. I am not bothered about deletion of data. Thanks and regards Pavan

    Read the article

  • SQL SERVER – Move Database Files MDF and LDF to Another Location

    - by pinaldave
    When a novice DBA or Developer create a database they use SQL Server Management Studio to create new database. Additionally, the T-SQL script to create a database is very easy as well. You can just write CREATE DATABASE DatabaseName and it will create new database for you. The point to remember here is that it will create the database at the default location specified for SQL Server Instance (this default instance can be changed and we will see that in future blog posts). Now, once the database goes in production it will start to grow. It is not common to keep the Database on the same location where OS is installed. Usually Database files are on SAN, Separate Disk Array or on SSDs. This is done usually for performance reason and manageability perspective. Now the challenges comes up when database which was installed at not preferred default location and needs to move to a different location. Here is the quick tutorial how you can do it. Let us assume we have two folders loc1 and loc2. We want to move database files from loc1 to loc2. USE MASTER; GO -- Take database in single user mode -- if you are facing errors -- This may terminate your active transactions for database ALTER DATABASE TestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO -- Detach DB EXEC MASTER.dbo.sp_detach_db @dbname = N'TestDB' GO Now move the files from loc1 to loc2. You can now reattach the files with new locations. -- Move MDF File from Loc1 to Loc 2 -- Re-Attached DB CREATE DATABASE [TestDB] ON ( FILENAME = N'F:\loc2\TestDB.mdf' ), ( FILENAME = N'F:\loc2\TestDB_log.ldf' ) FOR ATTACH GO Well, we are done. There is little warning here for you: If you do ROLLBACK IMMEDIATE you may terminate your active transactions so do not use it randomly. Do it if you are confident that they are not needed or due to any reason there is a connection to the database which you are not able to kill manually after review. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Database Snapshot in Sql Server 2005

    A database snapshot is a read-only, static view of a database (called the source database). Each database snapshot is transactionally consistent with the source database at the moment of the snapshot's creation. When you create a database snapshot, the source database will typically have open transactions. Before the snapshot becomes available, the open transactions are rolled back to make the database snapshot transactionally consistent.

    Read the article

  • Firebird 2.1: gfix -online returns "database shutdown"

    - by darvids0n
    Hey all. Googling this one hasn't made a bit of difference, unfortunately, as most results specify the syntax for onlining a database after using gfix -shut -force 30 (or any other number of seconds) as gfix -online dbname, and I have run gfix -online dbname with and without login credentials for the DB in question. The message that I get is: database dbname shutdown Which is fine, except that I want to bring it online now. It's out of the question to close fbserver.exe (running on a Windows box, afaik it's Classic Server 2.1.1 but it may be Super) since we have other databases running off of that which need almost 24/7 uptime. The message from doing another gfix -shut -force or -attach or -tran is invalid shutdown mode for dbname which appears to match with the documentation of what happens if the database is already fully shut down. Ideas and input greatly appreciated, especially since at the moment time is a factor for me. Thanks! EDIT: The whole reason I shut down the DB is to clear out "active" transactions which were linked to a specific IP address, and that computer is my dev terminal (actually a virtual machine where I develop frontends for the database software) but I had no processes connecting to the database at the time. They looked like orphaned transactions to me, and they weren't in limbo afaik. Running a manual sweep didn't clear them out, deleting the rows from MON$STATEMENTS didn't work even though Firebird 2.1 supposedly supports cancelling queries that way. My last resort was to "restart" the database, hence the above issue.

    Read the article

  • Collaborate10 &ndash; THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted it is a nice hotel with nice rooms. THEanalytics Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held. Two solid days of BI, Warehousing and Analytics organized by the BIWA SIG at IOUG. Didn't get to see all sessions, but what struck me was the high interest in Analytics. Marty Gubar's OLAP session was full and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB where you generate an RPD is now also in AWM. I think it makes life so much simpler to generate these RPD's from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build you application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... and it is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls, all just pure Excel. THEdatabasemachine Got some very good questions in my "what makes Exadata fast" session and overall, the interest in Exadata is overwhelming. One of the things that I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the often over-indexing and how everyone has to unlearn all of that on Exadata. The main thing however is that everyone needs to get used to the shear size of some of the components in a Database machine V2. 5TB of flash cache is a lot of very fast data storage, half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe the these newer media point out a trend. In-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities. THEcoolkids One of the cool things about the BIWA track was the hand-on track. Very cool to see big crowds for both OLAP and OWB hands-on. Also quite nice to see that the folks at RittmanMead spent so much time on preparing for that session. While all of them put down cool stuff, none was more cool that seeing Data Mining on an Apple iPAD... it all just looks great on an iPAD! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPAD ;-) THEend All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • Is Oracle Database Appliance (ODA) A Best Kept Secret?

    - by Ravi.Sharma
    There is something about Oracle Database Appliance that underscores the tremendous value customers see in the product. Repeat purchases. When you buy “one” of something and come back to buy another, it confirms that the product met your expectations, you found good value in it, and perhaps you will continue to use it. But when you buy “one” and come back to buy many more on your very next purchase, it tells something else. It tells that you truly believe that you have found the best value out there. That you are convinced! That you are sold on the great idea and have discovered a product that far exceeds your expectations and delivers tremendous value! Many Oracle Database Appliance customers are such larger-volume-repeat-buyers. It is no surprise, that the product has a deeper penetration in many accounts where a customer made an initial purchase. The value proposition of Oracle Database Appliance is undeniably strong and extremely compelling. This is especially true for customers who are simply upgrading or “refreshing” their hardware (and reusing software licenses). For them, the ability to acquire world class, highly available database hardware along with leading edge management software and all of the automation is absolutely a steal. One customer DBA recently said, “Oracle Database Appliance is the best investment our company has ever made”. Such extreme statements do not come out of thin air. You have to experience it to believe it. Oracle Database Appliance is a low cost product. Not many sales managers may be knocking on your doors to sell it. But the great value it delivers to small and mid-size businesses and database implementations should not be underestimated. 

    Read the article

  • Should one use a separate database for application data and user data?

    - by trycatch
    I’ve been working on a project for a little while and I’m unsure which is the better architecture. I’m interested in the consensus. The answer to me seems fairly obvious but something about it is digging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application, or both in one? The detailed version is.. if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there exists a need to be able to update the application data within this database periodically, should this be stripped into two databases so that one can simply download the updated application data database file as an update and replace the old one? Or should they remain as one database, and the application data be updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason just doesn’t feel right, and I can't pick out quite why.

    Read the article

  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of the blog posts I am writing about NuoDB. NuoDB is very innovative and easy-to-use product. I can clearly see how one can scale-out NuoDB with so much ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link). Note: You can Download NuoDB from here. In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial. Let us go to the NuoDBNuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database. If you have not created sample database, click on Create Database and create it successfully. Now go to the NuoDB Explorer by clicking on the main tab, and it will ask you for your domain username and password. Enter the username as a domain and password as a bird. Alternatively you can also enter username as a quickstart and password as a quickstart. Once you enter the password you will be able to see the databases. In our example we have installed the Sample Database hence you will see the Test database in our Database Hierarchy screen. When you click on database it will ask for the database login. Note that Database Login is different from Domain login and you will have to enter your database login over here. In our case the database username is dba and password is goalie. Once you enter a valid username and password it will display your database. Further expand your database and you will notice various objects in your database. Once you explore various objects, select any database and click on Open. When you click on execute, it will display the SQL script to select the data from the table. The autogenerated script displays entire result set from the database. The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements it will list all the available SQL statements right away in Query Editor. You can see the popup window in following image. Here is the cool thing for geeks. You can even click on Query Plan and it will display the text based query plan as well. In case of a SELECT, the query plan will be much simpler, however, when we write complex queries it will be very interesting. We can use the query plan tab for performance tuning of the database. Here is another feature, when we click on List Tables in NuoDB Explorer.  It lists all the available tables in the query editor. This is very helpful when we are writing a long complex query. Here is a relatively complex example I have built using Inner Join syntax. Right below I have displayed the Query Plan. The query plan displays all the little details related to the query. Well, we just wrote multi-table query and executed it against the NuoDB database. You can use the NuoDB Admin section and do various analyses of the query and its performance. NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees.  It allows you to add Transaction Engine processes to a running system to improve the performance of your system.  
You can also add a second Storage Engine to your running system for redundancy purposes.  Conversely, you can shut down processes when you don’t need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments. If you have read my blog posts and have not tried out NuoDB, I strongly suggest that you download it today and catch up with the learnings with me. Trust me though the product is very powerful, it is extremely easy to learn and use. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Circular references in TFS Database Edition

    - by Jaco Pretorius
    I'm using TFS Database Edition to script a number of databases. Many of the databases have references between them - for example, view in database A might do select ... from B..TableX This works fine as long as database B is also a project in the solution. The problem comes in when I have objects in database A referencing database B and database B referencing objects in database A. It seems like Visual Studio needs to build the projects in order which is obviously not possible in this case. How do you deal with circular references between database projects in TFS database edition?

    Read the article

  • How to change hardcoded database paths in a Lotus Approach Database

    - by asoltys
    One of our Lotus Approach files (file extension .apr) has hardcoded paths to additional underlying database files (file extension .dbf). We've moved the Approach database and all the underlying files to a new network drive, so these paths need to be updated. You can see the old paths by opening the File menu and selecting "Approach File Properties", under the "Databases" heading. How can you change these paths?

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA Users to rapidly spawn databases on demand in cloud. The configuration and structure of provisioned databases depends on respective service template selected by Self Service user while requesting for database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in service catalog and based on underlying DBCA templates.Many times provisioned databases require production scale data either for UAT, testing or development purpose and managing DBCA templates with data can be unwieldy. So, we need to populate the database using post deployment script option and without any additional work for the SSA Users. The SSA Administrator can automate this task in few easy steps. For details on how to setup DBaaS Self Service Portal refer to the DBaaS CookbookIn this article, I will list steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways using: 1) Data pump 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS and you can even have your own method plugged in part of post deployment script option. Using Data Pump to populate databases These are the steps to be followed to implement extending DBaaS using Data Pump methodolgy: Production DBA should run data pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig.1] -- Full exportexpdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull Figure-1:  Full export of database using data pump Create a post deployment SQL script [sample shown in Fig. 2] and this script can either be uploaded into the software library by SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned Normal 0 -- Full importdeclare    h1   NUMBER;begin-- Creating the directory object where source database dump is backed up.    execute immediate 'create directory DEST_LOC as''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';-- Running import    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');    dbms_datapump.set_parallel(handle => h1, degree => 1);    dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);    dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);    dbms_datapump.start_job(handle => h1);    dbms_datapump.detach(handle => h1);end;/ Figure-2: Importing using data pump pl/sql procedures Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles & their sizes SSA Administrator should customize “Create Database Deployment Procedure” and provide DBCA template created in the previous step. In “Additional Configuration Options” step of Customize “Create Database Deployment Procedure” flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure. Figure-3: Using Custom script option for calling Import SQL Now, an SSA user can login to Self Service Portal and use the flow to provision a database that will also  populate the data using the post deployment step. 
Using Transportable tablespaces to populate databases Copy of all user/application tablespaces will enable this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces: Production DBA needs to create a backup of tablespaces. Datafiles may need conversion [such as from Big Endian to Little Endian or vice versa] based on the platform of production and destination where DBaaS created the test database. Here is sample backup script shows how to find out if any conversion is required, describes the steps required to convert datafiles and backup tablespace. SSA Administrator should copy the database (tablespaces) backup datafiles and export dumps to the backup location accessible from the hosts participating in the database zone(s). Create a post deployment SQL script and this script can either be uploaded into the software library by SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned. Here is sample post deployment SQL script using transportable tablespaces. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template as they will be created SSA Administrator should customize “Create Database Deployment Procedure” and provide DBCA template created in the previous step. In the “Additional Configuration Options” step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure. Now, an SSA user can login to Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step. More Information: Database-as-a-Service on Exadata Cloud Podcast on Database as a Service using Oracle Enterprise Manager 12c Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide DBaaS Cookbook Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal Stay Connected: Twitter |  Face book |  You Tube |  Linked in |  Newsletter

    Read the article

  • What's a good pure-python, document-based and flat-file database engine?

    - by joe Simpson
    Hi, for development i'd love to have a flat file database with the requirements up in the title, but I don't seem to be able to find a database with these requirements. I can't seem to get MetaKit to work. I only need it to work on the development machine, but in the real world my product will have more data and needs more room and will need something better. Does anyone know of a database engine capable of this or do I need to just use python's pickle and load and save a file? Joe

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications, however the use cases above are the reverse i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise with with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are are security i.e. shielding enterprise resources; and scalability i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safe guard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns. Pattern: Pull from Cloud The on-premise system polls from the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in, can be centralized and batched e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/Fedex store to pick up your parcel. Every time. Pros: On-premise assets not exposed to the Internet, firewall issues avoided by only initiating outbound connections Cons: Polling mechanisms may affect performance, may not satisfy near real-time requirements Pattern: Open Firewall Ports The on-premise system exposes the web services that needs to be invoked by the cloud application. This requires opening up firewall ports, routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A door mail slot cut out allows the postman can drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs, simpler administration once the service is provisioned Cons: Needs firewall ports to be opened up for new services, may not suffice for batch integration requiring direct database access Pattern: Virtual Private Networking The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys with a neighbor or property manager who receives the packages, and then drops it inside your home. 
Pros: Individual firewall ports don't need to be opened, more suited for high scalability needs, can support large volume data integration, easier management of one connection vs a multitude of open ports Cons: VPN setup, specific hardware support, requires cloud provider to support virtual private computing Pattern: Reverse Proxy / API Gateway The on-premise system uses a reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom built implementations are also possible if specific functionality (such as message store-n-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox in your home premises outside your door. The post office delivers the parcels in your mailbox, from where you can securely retrieve it. Pros: Very secure, very flexible Cons: Introduces a new software component, needs DMZ deployment and management Pattern: On-Premise Agent (Tunneling) A light weight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection either with pull or push based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH Tunneling etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, however you still take precautions with chain locks and package inspections. Pros: Light weight software, IT doesn't need to setup anything Cons: May bypass critical firewall checks e.g. virus scans, separate software download, proliferation of non-IT managed software Conclusion The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases, the basic "Pull from Cloud" may be acceptable, whereas in others, an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications, however the use cases above are the reverse i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise with with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are are security i.e. shielding enterprise resources; and scalability i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safe guard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns. Pattern: Pull from Cloud The on-premise system polls from the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in, can be centralized and batched e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/Fedex store to pick up your parcel. Every time. Pros: On-premise assets not exposed to the Internet, firewall issues avoided by only initiating outbound connections Cons: Polling mechanisms may affect performance, may not satisfy near real-time requirements Pattern: Open Firewall Ports The on-premise system exposes the web services that needs to be invoked by the cloud application. This requires opening up firewall ports, routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A door mail slot cut out allows the postman can drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs, simpler administration once the service is provisioned Cons: Needs firewall ports to be opened up for new services, may not suffice for batch integration requiring direct database access Pattern: Virtual Private Networking The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys with a neighbor or property manager who receives the packages, and then drops it inside your home. 
Pros: Individual firewall ports don't need to be opened, more suited for high scalability needs, can support large volume data integration, easier management of one connection vs a multitude of open ports Cons: VPN setup, specific hardware support, requires cloud provider to support virtual private computing Pattern: Reverse Proxy / API Gateway The on-premise system uses a reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom built implementations are also possible if specific functionality (such as message store-n-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox in your home premises outside your door. The post office delivers the parcels in your mailbox, from where you can securely retrieve it. Pros: Very secure, very flexible Cons: Introduces a new software component, needs DMZ deployment and management Pattern: On-Premise Agent (Tunneling) A light weight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection either with pull or push based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH Tunneling etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, however you still take precautions with chain locks and package inspections. Pros: Light weight software, IT doesn't need to setup anything Cons: May bypass critical firewall checks e.g. virus scans, separate software download, proliferation of non-IT managed software Conclusion The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases, the basic "Pull from Cloud" may be acceptable, whereas in others, an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.

    Read the article

  • Database Vault integration available

    - by Anthony Shorten
    One of the major features of Oracle Utilities Application Framework V4.1 is the provision of a base solution for integration to the Database Vault product. Database Vault is part of Oracle’s security portfolio of product and allows database user permissions to be locked down to only allow appropriate users appropriate access to the product data. By default, when you install the product database, administrators and SYSDBA users have full DML (SELECT, INSERT, UPDATE and DELETE access) to the schemas they own and in the case of the SYSDBA users, all schemas on the database. This can be perceived as an issue. Database Vault allows an additional layer of security to disable inappropriate access. In Oracle Utilities Application Framework, a prebuilt Database Vault solution has been provided to provide base DML access to product data for product users only. The solution is shipped with the database installation files and includes a set of SQL files to create, disable, enable and delete the Database Vault objects. The solution contains a Database Vault Realm, RuleSets, Rules and Command Rules that can be used as is or extended to meet site specific needs. The solution is consistent with other Database Vault solutions provided for other Oracle applications such as PeopleSoft, E-Business Suite, JD-Edwards and Siebel. Customers familiar with the database vault solutions for those products will recognize the similarities between the solutions. For more details of the solution, refer to the Database Vault Integration for Oracle Utilities Application Framework Based Products on My Oracle Support at KB Id: 1290700.1.

    Read the article

  • Database sharing/versioning

    - by DarkJaff
    Hi everyone, I have a question but I'm not sure of the word to use. My problem: I have an application using a database to stock information. The database can ben in access (local) or in a server (SQL Server or Oracle). We support these 3 kind of database. We want to give the possibility to the user to do what I think we can call versioning. Let me explain : We have a database 1. This is the master. We want to be able to create a database 2 that will be the same thing as database 1 but we can give it to someone else. They each work on each other side, adding, modifying and deleting records on this very complex database. After that, we want the database 1 to include the change from database 2, but with the possibility to dismiss some of the change. For you information, ou application is already multiuser so why don't we just use this multi-user and forget about this versionning? It's because sometimes, we need to give a copy of the database to another company on another site and they can't connect on our server. They work on their side and then, we want to merge. Is there anyone here with experience with this type of requirement? We have a lot of ideas but most of them require a LOT of work, massive modification to the database or to the existing queries. This is a 2 millions and growing C++ app, so rewriting it is not possible! Thanks for any ideas that you may give us! J-F

    Read the article

  • Android - BroadcastReceiver CONNECTIVITY ACTION

    - by Marc Ortiz
    In my project there's a service that listens for connectivity changes; when Wi-Fi is switched from off to on, it gets called. The problem is that I'm using a fragment, and inside the fragment there's a button with setOnClickListener()/onClick(). Sometimes when I touch the button, the service receives an intent saying the connectivity has changed (the method gets called for no apparent reason). Here's the code of my fragment for the ViewPager layout:

        public static class FragmentSelection extends Fragment {
            int mNum;

            static FragmentSelection newInstance(int num) {
                FragmentSelection f = new FragmentSelection();
                // Supply num input as an argument.
                Bundle args = new Bundle();
                args.putInt("num", num);
                f.setArguments(args);
                return f;
            }

            // When creating, retrieve this instance's number from its arguments.
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                mNum = (getArguments() != null ? getArguments().getInt("num") : 1) + 1;
            }

            @Override
            public View onCreateView(LayoutInflater inflater, ViewGroup container,
                    Bundle savedInstanceState) {
                final View v = inflater.inflate(R.layout.fragment_intro_contenido, container, false);
                Button btStart = (Button) v.findViewById(R.id.btChanged);
                final CheckBox cbSMS = (CheckBox) v.findViewById(R.id.checkBoxSms);
                final CheckBox cbCalls = (CheckBox) v.findViewById(R.id.checkBoxLlamadas);
                final CheckBox cbApps = (CheckBox) v.findViewById(R.id.checkBoxApps);
                final CheckBox cbPosition = (CheckBox) v.findViewById(R.id.checkBoxPosicion);
                final CheckBox cbContacts = (CheckBox) v.findViewById(R.id.checkBoxContactos);
                final CheckBox cbAll = (CheckBox) v.findViewById(R.id.checkBoxAll);
                cbSMS.setChecked(true);
                cbCalls.setChecked(true);
                cbApps.setChecked(true);
                cbPosition.setChecked(true);
                cbContacts.setChecked(true);
                cbAll.setChecked(true);

                btStart.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View c) {
                        Intent i = new Intent();
                        i.setClass(v.getContext(), AnonymeActivity.class);
                        // 1 = checked, 0 = unchecked.
                        i.putExtra("sms", cbSMS.isChecked() ? 1 : 0);
                        i.putExtra("calls", cbCalls.isChecked() ? 1 : 0);
                        i.putExtra("contacts", cbContacts.isChecked() ? 1 : 0);
                        i.putExtra("apps", cbApps.isChecked() ? 1 : 0);
                        i.putExtra("gps", cbPosition.isChecked() ? 1 : 0);
                        boolean all = cbAll.isChecked()
                                || (cbSMS.isChecked() && cbCalls.isChecked()
                                    && cbContacts.isChecked() && cbApps.isChecked()
                                    && cbPosition.isChecked());
                        i.putExtra("all", all ? 1 : 0);
                        startActivity(i);
                    }
                });
                return v;
            }

            @Override
            public void onActivityCreated(Bundle savedInstanceState) {
                super.onActivityCreated(savedInstanceState);
            }
        }

    My BroadcastReceiver class:

        class Broadcast_Reciver extends BroadcastReceiver implements Variables {
            CheckConexion cc;

            @Override
            public void onReceive(Context contxt, Intent intent) {
                // When an event arrives, identify it and take the matching action.
                if (intent.getAction().equals(SMS_RECEIVED)) {
                    Sms sms = new Sms(null, contxt);
                    sms.uploadNewSms(intent);
                } else if (intent.getAction().equals(Intent.ACTION_BATTERY_LOW)) {
                    // st.batterylow(contxt);
                } else if (intent.getAction().equals(Intent.ACTION_POWER_CONNECTED)) {
                    // st.power(1, contxt);
                } else if (intent.getAction().equals(Intent.ACTION_POWER_DISCONNECTED)) {
                    // st.power(0, contxt);
                } else if (intent.getAction().equals(Intent.ACTION_CALL_BUTTON)) {
                    // Notify
                } else if (intent.getAction().equals(Intent.ACTION_CAMERA_BUTTON)) {
                    // Notify
                } else if (intent.getAction().equals(Intent.ACTION_PACKAGE_ADDED)
                        || intent.getAction().equals(Intent.ACTION_PACKAGE_CHANGED)
                        || intent.getAction().equals(Intent.ACTION_PACKAGE_REMOVED)) {
                    Database db = new Database(contxt);
                    if (db.open().Preferences(4)) {
                        Uri data = intent.getData();
                        new ListApps(contxt).import_app(intent, contxt, data, intent.getAction());
                    }
                    db.close();
                } else if (intent.getAction().equals(ConnectivityManager.CONNECTIVITY_ACTION)) {
                    cc = new CheckConexion(contxt);
                    if (cc.isOnline()) {
                        Database db = new Database(contxt);
                        if (db.open().move() == 1) {
                            new UploadOffline(contxt);
                        }
                        db.close();
                    }
                }
            }
        }

    And the errors:

        09-03 23:20:37.887: E/SqliteDatabaseCpp(2715): CREATE TABLE android_metadata failed
        09-03 23:20:37.887: E/SQLiteDatabase(2715): Failed to open the database. closing it.
        09-03 23:20:37.887: E/SQLiteDatabase(2715): android.database.sqlite.SQLiteDatabaseLockedException: database is locked
            at android.database.sqlite.SQLiteDatabase.native_setLocale(Native Method)
            at android.database.sqlite.SQLiteDatabase.setLocale_(SQLiteDatabase.java:2211)
            at android.database.sqlite.SQLiteDatabase.setLocale(SQLiteDatabase.java:2199)
            at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:1130)
            at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:1081)
            at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:1167)
            at android.app.ContextImpl.openOrCreateDatabase(ContextImpl.java:833)
            at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:221)
            at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:157)
            at com.background.Database.open(Database.java:127)
            at com.background.Broadcast_Reciver.onReceive(BroadcastService.java:100)
            at android.app.LoadedApk$ReceiverDispatcher$Args.run(LoadedApk.java:728)
            at android.os.Handler.handleCallback(Handler.java:605)
            at android.os.Handler.dispatchMessage(Handler.java:92)
            at android.os.Looper.loop(Looper.java:137)
            at android.app.ActivityThread.main(ActivityThread.java:4507)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:511)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:790)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:557)
            at dalvik.system.NativeStart.main(Native Method)
        09-03 23:20:37.887: D/AndroidRuntime(2715): Shutting down VM
        09-03 23:20:37.887: W/dalvikvm(2715): threadid=1: thread exiting with uncaught exception (group=0x40c3b1f8)
        09-03 23:20:37.902: E/AndroidRuntime(2715): FATAL EXCEPTION: main
        09-03 23:20:37.902: E/AndroidRuntime(2715): java.lang.RuntimeException: Error receiving broadcast Intent { act=android.net.conn.CONNECTIVITY_CHANGE flg=0x10000010 (has extras) } in com.background.Broadcast_Reciver@415203f8
            at android.app.LoadedApk$ReceiverDispatcher$Args.run(LoadedApk.java:737)
            at android.os.Handler.handleCallback(Handler.java:605)
            at android.os.Handler.dispatchMessage(Handler.java:92)
            at android.os.Looper.loop(Looper.java:137)
            at android.app.ActivityThread.main(ActivityThread.java:4507)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:511)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:790)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:557)
            at dalvik.system.NativeStart.main(Native Method)
        Caused by: android.database.sqlite.SQLiteDatabaseLockedException: database is locked
            at android.database.sqlite.SQLiteDatabase.native_setLocale(Native Method)
            at android.database.sqlite.SQLiteDatabase.setLocale_(SQLiteDatabase.java:2211)
            at android.database.sqlite.SQLiteDatabase.setLocale(SQLiteDatabase.java:2199)
            at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:1130)
            at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:1081)
            at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:1167)
            at android.app.ContextImpl.openOrCreateDatabase(ContextImpl.java:833)
            at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:221)
            at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:157)
            at com.background.Database.open(Database.java:127)
            at com.background.Broadcast_Reciver.onReceive(BroadcastService.java:100)
            at android.app.LoadedApk$ReceiverDispatcher$Args.run(LoadedApk.java:728)
            ... 9 more

    Note that execution reaches the BroadcastReceiver class, and I don't understand why!
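    Both traces point at two components opening the same SQLite file at once: the receiver calls new Database(contxt).open() while an activity may already hold its own connection, which is exactly when SQLiteDatabaseLockedException appears. A common remedy is to route every component through a single process-wide SQLiteOpenHelper. Below is a minimal sketch; all class, file, and method names are hypothetical, not taken from the code above:

        import android.content.Context;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;

        public final class DbHolder extends SQLiteOpenHelper {
            private static DbHolder instance;

            private DbHolder(Context context) {
                // Placeholder database name and version.
                super(context, "app.db", null, 1);
            }

            // Every caller (fragment, activity, receiver) shares this one helper,
            // so SQLite can serialize access instead of reporting "database is locked".
            public static synchronized DbHolder get(Context context) {
                if (instance == null) {
                    instance = new DbHolder(context.getApplicationContext());
                }
                return instance;
            }

            @Override
            public void onCreate(SQLiteDatabase db) {
                // Create tables here.
            }

            @Override
            public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                // Migrate the schema here.
            }
        }

    With something like this in place, onReceive() would obtain its handle via DbHolder.get(contxt).getWritableDatabase() rather than constructing a fresh Database each time, so no per-call close() races other components.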

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue.

    I now want to centralise database access so that only one or a few fixed database sessions handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same.

    My problem is that I cannot think of a suitable way to enhance said module so as to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors such that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result. But I can't form a picture clear enough to turn into code. Threading instead of forking might have given me an easier start, but that would now require massive changes to a code base that I'm not prepared to make on a live system.

    The more I think about it, the more the basic idea looks like a pre-forked web server to me, only one that serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo)code to inspire me, links to possibly related articles, ready-made solutions on CPAN, maybe?
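    For inspiration only, here is the shape of that "pre-forked database server" idea sketched in Java rather than the system's Perl/DBI. Every name and the connection URL are invented, and with forked worker processes the in-process queue below would have to become a socket or a real message queue; the pattern, though, is the one described: many requestors share one queue, a few gateway workers each hold one permanent session, and each request carries a future that the servicing worker completes to "call back" with the result.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.LinkedBlockingQueue;

        public class DbGateway {
            // One request = SQL text plus a future the requestor blocks on.
            record Request(String sql, CompletableFuture<String> reply) {}

            private final BlockingQueue<Request> queue = new LinkedBlockingQueue<>();

            public DbGateway(String url, int sessions) {
                // Start a small, fixed pool of connector workers.
                for (int i = 0; i < sessions; i++) {
                    Thread worker = new Thread(() -> serve(url));
                    worker.setDaemon(true);
                    worker.start();
                }
            }

            // Each worker holds exactly one long-lived session and drains the
            // shared queue, so hundreds of requestors share a handful of sessions.
            private void serve(String url) {
                try (Connection conn = DriverManager.getConnection(url)) {
                    while (true) {
                        Request req = queue.take();
                        try (PreparedStatement ps = conn.prepareStatement(req.sql());
                             ResultSet rs = ps.executeQuery()) {
                            req.reply().complete(rs.next() ? rs.getString(1) : null);
                        } catch (Exception e) {
                            req.reply().completeExceptionally(e);
                        }
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }

            // Called from any requestor; blocks only the caller until a worker replies.
            public String query(String sql) throws Exception {
                CompletableFuture<String> reply = new CompletableFuture<>();
                queue.put(new Request(sql, reply));
                return reply.get();
            }
        }

    The pool would be created once, e.g. new DbGateway("jdbc:oracle:thin:@//dbhost:1521/orcl", 4), and each DBI-style call inside the abstraction module would become a gateway.query(...), leaving the workers' function signatures untouched.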

    Read the article

  • Oracle’s New Memory-Optimized x86 Servers: Getting the Most Out of Oracle Database In-Memory

    - by Josh Rosen, x86 Product Manager-Oracle
    With the launch of Oracle Database In-Memory, it is now possible to perform real-time analytics operations on your business data as it exists at that moment, in the DRAM of the server, and immediately return completely current and consistent results. The Oracle Database In-Memory option dramatically accelerates the performance of analytics queries by storing data in a highly optimized columnar in-memory format. This is a truly exciting advance in database technology.

    As Larry Ellison mentioned in his recent webcast about Oracle Database In-Memory, queries run 100 times faster simply by throwing a switch. But in order to get the most from the Oracle Database In-Memory option, the underlying server must also be memory-optimized. This week Oracle announced new 4-socket and 8-socket x86 servers, the Sun Server X4-4 and Sun Server X4-8, both of which have been designed specifically for Oracle Database In-Memory. These new servers use the fastest Intel® Xeon® E7 v2 processors, and each subsystem has been designed to be the best for Oracle Database, from the memory, I/O and flash technologies right down to the system firmware.

    Among these subsystems, one of the most important aspects we have optimized in the Sun Server X4-4 and Sun Server X4-8 is the memory subsystem. The new In-Memory option makes it possible to select which parts of the database should be memory-optimized. You can choose to put a single column or table in memory or, if you can, put the whole database in memory. The more, the better. With 3 TB and 6 TB of total memory capacity on the Sun Server X4-4 and Sun Server X4-8, respectively, you can memory-optimize most, if not all, of your database.

    [Figure: Sun Server X4-8 CMOD with 24 DIMM slots per socket (up to 192 DIMM slots per server)]

    But memory capacity is not the only important factor in selecting the best server platform for Oracle Database In-Memory. As you put more of your database in memory, a critical performance metric known as memory bandwidth comes into play. The total memory bandwidth of the server dictates the rate at which data can be stored in and retrieved from memory. In order to achieve real-time analysis of your data using Oracle Database In-Memory, even under heavy load, the server must be able to handle extreme memory workloads. With that in mind, the Sun Server X4-8 was designed with the maximum possible memory bandwidth, providing over a terabyte per second of total memory bandwidth. Likewise, the Sun Server X4-4 provides extreme memory bandwidth in an even more compact form factor, with over half a terabyte per second, giving customers scalability and choice depending on the size of their database.

    Beyond the memory subsystem, Oracle’s Sun Server X4-4 and Sun Server X4-8 systems provide other key technologies that enable Oracle Database to run at its best. The Sun Server X4-4 allows for up to 4.8 TB of internal, write-optimized PCIe flash, while the Sun Server X4-8 allows for up to 6.4 TB of PCIe flash. This enables dramatic acceleration of data inserts and updates to Oracle Database. And with the new elastic computing capability of Oracle’s new x86 servers, server performance can be adapted to your specific Oracle Database workload to ensure that every last bit of processing power is utilized.

    Because Oracle designs and tests its x86 servers specifically for Oracle workloads, we provide the highest possible performance and reliability when running Oracle Database.
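    To make the "switch" concrete, here is a sketch of marking one table for the in-memory column store over JDBC. The connection URL, credentials, and table name are placeholders (and the Oracle JDBC driver must be on the classpath); INMEMORY itself is the documented Oracle Database 12c DDL clause for this feature.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class EnableInMemory {
            public static void main(String[] args) throws Exception {
                // Placeholder connection details; point these at your own database.
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "app_user", "app_password");
                     Statement stmt = conn.createStatement()) {
                    // Populate the (hypothetical) SALES table into the in-memory column store.
                    stmt.execute("ALTER TABLE sales INMEMORY");
                }
            }
        }

    Reverting is equally simple with ALTER TABLE sales NO INMEMORY, which is what makes the feature feel like a switch.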
To learn more about Sun Server X4-4 and Sun Server X4-8, you can find more details including data sheets and white papers here. Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software.  He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers. 

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >