Search Results

Search found 47408 results on 1897 pages for 'database machine'.


  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries depend on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like: db.users[32] = something. An array of 500K users is not that big an effort for RAM. Pros are: no problematic asynchrony (instant results), easy export/import, and dealing with the database like with a native object - LITERALLY. P.S. and considerations: Would it be faster or slower to do collection[3] than db.query("select ...? I am going to store it as a file/s. There is only ONE application/process accessing this data, and the code is executed line by line - please don't elaborate about locking. Please don't answer with database propositions, but with reasons to use an external DB over a native array/object - I have experience with a few databases, so that's not the issue. What I am building is a client/gateway/server(s) game. The gateway deals with all user data: processing, authenticating, writing statistics, etc. No other part of the software needs direct access to this data/database.
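
    A minimal sketch of the pattern described above - a keyed in-memory object snapshotted to a flat file. The names (UserStore, users.json) are invented for illustration; the atomic-replace step is the one detail worth copying, since it prevents a crash mid-save from corrupting the file:

        import json
        import os
        import tempfile

        class UserStore:
            """In-memory user table keyed by ID, snapshotted to a flat file."""

            def __init__(self, path="users.json"):
                self.path = path
                self.users = {}
                if os.path.exists(path):
                    with open(path, "r", encoding="utf-8") as f:
                        # JSON object keys are strings; restore integer IDs
                        self.users = {int(k): v for k, v in json.load(f).items()}

            def save(self):
                # Write a temp file first, then atomically replace the old snapshot
                fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
                with os.fdopen(fd, "w", encoding="utf-8") as f:
                    json.dump(self.users, f)
                os.replace(tmp, self.path)

        store = UserStore()
        store.users[32] = {"name": "player32", "score": 1500}  # db.users[32] = something
        store.save()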

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux prod machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it in my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error: Query: CREATE TABLE `auth_group` ( `id` int(11) NOT NULL auto_increment, `name` varchar(80) collate utf8_unicode_ci NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `name` (`name`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci Error occured at:2010-06-26 17:16:14 Line no.:30 Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121) Notice that this is the first CREATE TABLE statement in the SQL dump file. This error occurs both on MySQL Community Server 5.1.41 and 5.1.48, and with SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration since the reinstall, or why it has this effect. Restoring from an SQL dump is something I need to keep doing, so I need a permanent fix and not a tailored workaround.
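
    For what it's worth, error 1005 with errno 121 usually means a name collision in InnoDB's internal data dictionary - classically a foreign-key constraint whose name already exists, or orphaned entries left behind in ibdata1 by an earlier copy of the schema. A hedged diagnostic sketch (credentials are placeholders); when a duplicate constraint name is the culprit, the LATEST FOREIGN KEY ERROR section of the InnoDB status output usually names it:

        import mysql.connector  # pip install mysql-connector-python

        # Placeholder credentials - adjust for the local dev instance
        cnx = mysql.connector.connect(host="localhost", user="root", password="...")
        cur = cnx.cursor()

        # The InnoDB status dump includes a LATEST FOREIGN KEY ERROR section
        # whenever errno 121 came from a duplicate constraint name.
        cur.execute("SHOW ENGINE INNODB STATUS")
        status_text = cur.fetchone()[2]
        if "LATEST FOREIGN KEY ERROR" in status_text:
            start = status_text.index("LATEST FOREIGN KEY ERROR")
            print(status_text[start:start + 500])  # the relevant excerpt
        cnx.close()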

  • Publish database between two open database connections (Visual Studio 2005)

    - by danielswe
    I have two database connections, one to a local and one to a remote database. How do I copy the local database schema to the remote one? The reason I don't use "Publish to provider" is that I'm not sure I have all the information necessary to do so. I have the database name, server, username and password, but not the "web service address" nor the "web service password". I work in Visual Studio 2005. The server is an MSSQL 2005 server. I have tried using queries, but I only get errors doing so.
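
    If scripting by hand, one fallback is to read the schema from INFORMATION_SCHEMA on the local server and replay simple CREATE TABLE statements on the remote one. This is only a hedged sketch - connection strings are placeholders, and it deliberately ignores keys, constraints, defaults and indexes, which a real schema copy would also need:

        import pyodbc  # assumes a SQL Server ODBC driver is installed

        src = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;DATABASE=MyDb;Trusted_Connection=yes")
        dst = pyodbc.connect("DRIVER={SQL Server};SERVER=remote;DATABASE=MyDb;UID=user;PWD=pass")

        cur = src.cursor()
        cur.execute("""
            SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
            FROM INFORMATION_SCHEMA.COLUMNS
            ORDER BY TABLE_NAME, ORDINAL_POSITION""")

        tables = {}
        for tbl, col, typ, maxlen in cur.fetchall():
            # Character types need their length; -1 means (max)
            if maxlen is not None:
                typ = "%s(%s)" % (typ, "max" if maxlen == -1 else maxlen)
            tables.setdefault(tbl, []).append("[%s] %s" % (col, typ))

        for tbl, cols in tables.items():
            dst.execute("CREATE TABLE [%s] (%s)" % (tbl, ", ".join(cols)))
        dst.commit()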

  • Is it possible to detect that a database connection is to a copy rather than to the original database?

    - by user149238
    I have an application that needs to know whether it is connected to the original database it was installed with, or to a copy of that database. Is there any known method to detect that the database has been cloned and that the application is no longer connected to the original? I am specifically interested in MS SQL Server and Oracle. I was kicking around ideas for a stored procedure, but that most likely doesn't have access to the hardware needed to confirm unique hardware information that would somewhat guarantee the database is the one it was originally connected to during installation. I'm trying to prevent/detect cloning of a database so that there is only one "true location of truth". Thanks!
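
    One common approach (a deterrent, not a guarantee - a careful cloner can copy the marker too) is to record where the database legitimately lives at install time and compare on every startup. A hedged SQL Server sketch; the table name and connection details are invented for illustration:

        import uuid
        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=prod;DATABASE=AppDb;Trusted_Connection=yes")
        cur = conn.cursor()

        # At install time: capture the server and database identity.
        cur.execute("""CREATE TABLE install_identity
                       (install_id UNIQUEIDENTIFIER, host SYSNAME, db_name SYSNAME)""")
        cur.execute("""INSERT INTO install_identity
                       VALUES (?, CAST(SERVERPROPERTY('MachineName') AS SYSNAME), DB_NAME())""",
                    str(uuid.uuid4()))
        conn.commit()

        # At application start: a clone restored on another server, or under
        # another name, no longer matches the values captured at install.
        cur.execute("""SELECT CASE WHEN host = CAST(SERVERPROPERTY('MachineName') AS SYSNAME)
                                    AND db_name = DB_NAME() THEN 1 ELSE 0 END
                       FROM install_identity""")
        if cur.fetchone()[0] != 1:
            raise RuntimeError("connected to a copy, not the original database")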

  • Discount Codes Galore

    - by Cassandra Clark
    Saving money is at the top of everyone's list right now. With this in mind, the Oracle Technology Network team has compiled a list of discounts available at the Oracle Store. We are also introducing an Oracle Technology Network member discount from O'Reilly Media. If you subscribe to any of the Oracle Technology newsletters you also saw special discounts from CRC Press, Packt Publishing and Apress. We are going to do our best to bring you more offers like this every month. Now on to the discounts...

    Oracle Store offers - all below expire May 31st, 2010. Don't miss out!
    • Expand your productivity with Oracle Open Office and save 15%. Enter OTNOffice at checkout. Buy Now!
    • Drive business agility and performance with industry-leading Oracle Database Management Packs. Save 10% when you purchase them at the Oracle Store. Enter OTNDBMP at checkout. Buy Now!
    • 15% savings on Oracle Virtualization and Unbreakable Linux support at the Oracle Store. Enter code OTNLinuxVM at checkout. Buy Now!
    • 20% savings on Oracle SQL Developer Data Modeler. Use OTNSQL at checkout. Buy Now!

    O'Reilly Oracle Technology Network Member Offer
    O'Reilly is generously offering Oracle Technology Network members 35% off print books and 40% off ebooks. Browse Oracle titles at http://oreilly.com/pub/topic/oracle. Use discount code TECNT at checkout.

  • DB Schema for ACL involving 3 subdomains

    - by blacktie24
    Hi, I am trying to design a database schema for a web app which has 3 subdomains: a) internal employees, b) clients, c) contractors. The users will be able to communicate with each other to some degree, and there may be some resources that overlap between them. Any thoughts about this schema? Really appreciate your time and thoughts on this. Cheers!

        -- Table structure for table `locations`
        CREATE TABLE IF NOT EXISTS locations (
          id bigint(20) NOT NULL,
          name varchar(250) NOT NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `privileges`
        CREATE TABLE IF NOT EXISTS privileges (
          id int(11) NOT NULL AUTO_INCREMENT,
          name varchar(255) NOT NULL,
          resource_id int(11) NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10;

        -- Table structure for table `resources`
        CREATE TABLE IF NOT EXISTS resources (
          id int(11) NOT NULL AUTO_INCREMENT,
          name varchar(255) NOT NULL,
          user_type enum('internal','client','expert') NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

        -- Table structure for table `roles`
        CREATE TABLE IF NOT EXISTS roles (
          id int(11) NOT NULL AUTO_INCREMENT,
          name varchar(255) NOT NULL,
          type enum('position','department') NOT NULL,
          parent_id int(11) DEFAULT NULL,
          user_type enum('internal','client','expert') NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

        -- Table structure for table `role_perms`
        CREATE TABLE IF NOT EXISTS role_perms (
          id int(11) NOT NULL AUTO_INCREMENT,
          role_id int(11) NOT NULL,
          privilege_id int(11) NOT NULL,
          mode varchar(250) NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

        -- Table structure for table `users`
        CREATE TABLE IF NOT EXISTS users (
          id int(10) unsigned NOT NULL AUTO_INCREMENT,
          email varchar(255) NOT NULL,
          password varchar(255) NOT NULL,
          salt varchar(255) NOT NULL,
          type enum('internal','client','expert') NOT NULL,
          first_name varchar(255) NOT NULL,
          last_name varchar(255) NOT NULL,
          location_id int(11) NOT NULL,
          phone varchar(255) NOT NULL,
          status enum('active','inactive') NOT NULL DEFAULT 'active',
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Table structure for table `user_perms`
        CREATE TABLE IF NOT EXISTS user_perms (
          id int(11) NOT NULL AUTO_INCREMENT,
          user_id int(11) NOT NULL,
          privilege_id int(11) NOT NULL,
          mode varchar(250) NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

        -- Table structure for table `user_roles`
        CREATE TABLE IF NOT EXISTS user_roles (
          id int(11) NOT NULL,
          user_id int(11) NOT NULL,
          role_id int(11) NOT NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

  • Oracle Database 12c Training and Certification: What’s in it for Me?

    - by KJones
    Oracle Database 12c has officially launched! Through Oracle University, our expert instructors can introduce you to the features and functions of this new Oracle Database 12c product. Through training courses and certification exam prep seminars, you can build up your database knowledge and apply it to advance your career.

    Already an Oracle Database Expert? Why Oracle Database 12c Training is Still a Good Idea

    Oracle is the industry leader for database technology and takes the release of new products very seriously. We continue to listen to customer needs and add features and functionality to address them. Oracle Database 12c is no exception. The following areas have been greatly enhanced and should be considered for your additional training or certification:
    • Database for Cloud Computing
    • Compression and Information Lifecycle Management (ILM)
    • Improved Performance & Scalability
    • Extreme Availability
    • Security Defense in Depth
    • Manageability

    Oracle Certified Database Administrators Reap Career Rewards

    Becoming an expert user of database technology through Oracle University's certification program widens your skill set and demonstrates your expertise in implementing the most advanced database technology available. By doing so, you'll make yourself a more marketable employee and candidate in the job market. Reasons to become an Oracle Certified Database Administrator for Oracle Database 12c:
    • The new Oracle Database 12c certifications emphasize more advanced skills that align with industry standards, resulting in an even more valuable credential for customers and partners.
    • The Oracle Certified Associate (OCA) for Oracle Database 12c centers upon certification objectives that measure IT professionals' day-to-day skills, along with the ability to manage challenges.
    • Building upon all of the competencies incorporated into Oracle's Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification includes advanced knowledge and skills required of top-performing database administrators.
    • The Oracle Certified Master (OCM) for Oracle Database 12c - a very challenging and elite top-level certification - certifies the most highly skilled and experienced database experts.
    • Oracle offers 12c upgrade paths for existing Oracle Certified Professionals (OCP) and Oracle Certified Masters (OCM).

    Database 12c Training and Certification: Built with Your Input

    When creating Oracle Database 12c training courses and certifications, we wanted to know which tasks are most important in a DBA's day-to-day work. Instead of assuming what those tasks might be, we developed a job task analysis survey for DBAs. The response rate from DBAs around the world was overwhelming! The survey focused on the following key database areas:
    • DBA Core Essentials
    • Database Storage
    • High Availability
    • Scalability
    • Networking
    • Security
    • Very Large Database Administration
    • Distributed Databases
    After conducting this survey, we used this specific feedback to help mold the new Oracle Database 12c training and certification curriculum. The benefit to you? You now have access to Oracle Database 12c courses and certification exams that were created with your specific on-the-job tasks in mind.

    Explore Oracle Database 12c Training & Certification Today

    Investing in Oracle Database 12c training courses and certifications will help you develop a great deal of knowledge, experience and expertise. Explore our portfolio of offerings to determine which skills you need as a DBA to get up to speed on Oracle Database 12c technology. Questions or comments about the new Oracle Database 12c offerings? Let us know in the comments below.

    - Diana Gray, Principal Curriculum Product Manager, and Raza Siddiqui, Senior Principal Curriculum Product Manager

  • updating changes from one database to another database in the same server

    - by Pavan Kumar
    I have a copy of a client database, say 'DBCopy', which already contains modified data. The copy of the client database (DBCopy) is attached to the SQL Server where the central database (DBCentral) exists. I then want to update DBCentral with whatever changes are already present in DBCopy. Both DBCopy and DBCentral have the same schema. How can I do it programmatically using C#.NET, maybe with a button click? Can you give me example code showing how to do it? I am using SQL Server 2005 Standard Edition and VS 2008 SP1. In the actual scenario there are about 7 client databases, all with the same schema as the central database. I bring a copy of each client database, attach it to the central server where the central database resides, and try to update the changes present in each copy into the central database, one by one, programmatically using C#.NET. The clients and the central server are physically separate machines in different places; they are not interconnected. I only need to update and insert new data - I am not bothered about deletion of data. Thanks and regards, Pavan
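
    Since SQL Server 2005 predates MERGE, the usual pattern once both databases sit on the same server is an UPDATE from a join followed by an INSERT of the missing rows. A hedged sketch - the table and column names are invented, and it is written in Python for brevity; the same two statements can be issued from ADO.NET's SqlCommand inside a button-click handler:

        import pyodbc

        central = pyodbc.connect("DRIVER={SQL Server};SERVER=central;DATABASE=DBCentral;Trusted_Connection=yes")
        cur = central.cursor()

        # 1) Overwrite central rows with the client copy's newer values.
        cur.execute("""
            UPDATE c SET c.col1 = s.col1, c.col2 = s.col2
            FROM DBCentral.dbo.SomeTable AS c
            JOIN DBCopy.dbo.SomeTable AS s ON s.id = c.id""")

        # 2) Insert rows that exist only in the client copy.
        cur.execute("""
            INSERT INTO DBCentral.dbo.SomeTable (id, col1, col2)
            SELECT s.id, s.col1, s.col2
            FROM DBCopy.dbo.SomeTable AS s
            WHERE NOT EXISTS (SELECT 1 FROM DBCentral.dbo.SomeTable AS c
                              WHERE c.id = s.id)""")
        central.commit()

    Note that the first statement blindly prefers the client's values; if central rows can also change between syncs, a last-modified column or rowversion comparison is needed to decide which side wins.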

  • SQL SERVER – Move Database Files MDF and LDF to Another Location

    - by pinaldave
    When a novice DBA or developer creates a database, they use SQL Server Management Studio to create the new database. Additionally, the T-SQL script to create a database is very easy as well: you can just write CREATE DATABASE DatabaseName and it will create a new database for you. The point to remember here is that it will create the database at the default location specified for the SQL Server instance (this default location can be changed, and we will see that in future blog posts). Now, once the database goes into production it will start to grow. It is not common to keep the database in the same location where the OS is installed; usually database files are on a SAN, a separate disk array or SSDs. This is done for performance reasons and from a manageability perspective. The challenge comes up when a database was installed at the non-preferred default location and needs to move to a different location. Here is a quick tutorial on how you can do it. Let us assume we have two folders, loc1 and loc2, and we want to move the database files from loc1 to loc2.

        USE MASTER;
        GO
        -- Take database into single user mode if you are facing errors.
        -- This may terminate your active transactions for the database.
        ALTER DATABASE TestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        GO
        -- Detach DB
        EXEC MASTER.dbo.sp_detach_db @dbname = N'TestDB'
        GO

    Now move the files from loc1 to loc2. You can then reattach the files with their new locations.

        -- Move the MDF and LDF files from loc1 to loc2, then re-attach the DB
        CREATE DATABASE [TestDB] ON
            ( FILENAME = N'F:\loc2\TestDB.mdf' ),
            ( FILENAME = N'F:\loc2\TestDB_log.ldf' )
        FOR ATTACH
        GO

    Well, we are done. A little warning for you: ROLLBACK IMMEDIATE may terminate your active transactions, so do not use it casually. Use it only if you are confident the transactions are not needed, or if for some reason there is a connection to the database which you are not able to kill manually after review. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Database Snapshot in Sql Server 2005

    A database snapshot is a read-only, static view of a database (called the source database). Each database snapshot is transactionally consistent with the source database at the moment of the snapshot's creation. When you create a database snapshot, the source database will typically have open transactions. Before the snapshot becomes available, the open transactions are rolled back to make the database snapshot transactionally consistent.
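
    For reference, a hedged sketch of creating and then querying a snapshot on SQL Server 2005. All database, logical-file and path names are placeholders; the logical name must match the source database's data file (sys.master_files will confirm it):

        import pyodbc

        # CREATE DATABASE cannot run inside a transaction, hence autocommit
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes",
                              autocommit=True)
        conn.execute("""
            CREATE DATABASE SalesDB_snap_20100626
            ON (NAME = SalesDB_Data,  -- logical name of the source data file
                FILENAME = 'C:\\snapshots\\SalesDB_snap_20100626.ss')
            AS SNAPSHOT OF SalesDB""")

        # The snapshot is then queried like any read-only database
        for (count,) in conn.execute("SELECT COUNT(*) FROM SalesDB_snap_20100626.dbo.Orders"):
            print(count)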

  • Firebird 2.1: gfix -online returns "database shutdown"

    - by darvids0n
    Hey all. Googling this one hasn't made a bit of difference, unfortunately, as most results specify the syntax for onlining a database after using gfix -shut -force 30 (or any other number of seconds) as gfix -online dbname, and I have run gfix -online dbname with and without login credentials for the DB in question. The message that I get is: database dbname shutdown Which is fine, except that I want to bring it online now. It's out of the question to close fbserver.exe (running on a Windows box, afaik it's Classic Server 2.1.1 but it may be Super) since we have other databases running off of that which need almost 24/7 uptime. The message from doing another gfix -shut -force or -attach or -tran is invalid shutdown mode for dbname which appears to match with the documentation of what happens if the database is already fully shut down. Ideas and input greatly appreciated, especially since at the moment time is a factor for me. Thanks! EDIT: The whole reason I shut down the DB is to clear out "active" transactions which were linked to a specific IP address, and that computer is my dev terminal (actually a virtual machine where I develop frontends for the database software) but I had no processes connecting to the database at the time. They looked like orphaned transactions to me, and they weren't in limbo afaik. Running a manual sweep didn't clear them out, deleting the rows from MON$STATEMENTS didn't work even though Firebird 2.1 supposedly supports cancelling queries that way. My last resort was to "restart" the database, hence the above issue.

  • Sprinkle Some Magik on that Java Virtual Machine

    - by Jim Connors
    GE Energy, through its Smallworld subsidiary, has been providing geospatial software solutions to the utility and telco markets for over 20 years. One of the fundamental building blocks of their technology is a dynamically-typed object-oriented programming language called Magik. Like Java, Magik source code is compiled down to bytecodes that run on a virtual machine - in this case the Magik Virtual Machine. Throughout the years, GE has invested considerable engineering talent in the support and maintenance of this virtual machine. At the same time, vast energy and resources have been invested in the Java Virtual Machine. The question for GE has been whether to continue to make that investment on its own or to leverage the massive effort provided by the Java community. Utilizing the Java Virtual Machine instead of maintaining its own virtual machine would give GE more opportunity to focus on application solutions. At last count, there are dozens, perhaps hundreds, of examples of programming languages that have been hosted atop the Java Virtual Machine. Prior to the release of Java 7, that effort, although certainly possible, was generally less than optimal for languages like Magik because of their dynamic nature. Java, as a statically typed language, had little use for this capability. In the quest to be a more universal virtual machine, Java 7, via JSR-292, introduced a new bytecode called invokedynamic. In short, invokedynamic affords the more flexible method-call mechanism needed by dynamic languages like Magik. With this new capability GE Energy has succeeded in hosting their Magik environment on top of the Java Virtual Machine. So you may ask, why would GE wish to do such a thing? The benefits are many:
    • Competitors to GE Energy claimed that the Magik environment was proprietary. By utilizing the Java Virtual Machine, that argument gets put to bed. JVM development is done in open source, where contributions are made world-wide by all types of organizations and individuals.
    • The unprecedented wealth of class libraries and applications written for the Java platform is now opened up to the Magik/JVM platform as first-class citizens. In addition, the Magik/JVM solution vastly increases the developer pool to include the 9 million Java developers - the largest developer community on the planet.
    • Applications running on the JVM showed substantial performance gains, in some cases as much as a 5x speed-up over the original Magik platform.
    • Legacy Magik applications can still run on the original platform. They can be seamlessly migrated to run on the JVM by simply recompiling the source code.
    • GE can now leverage the huge Java community. Undeniably the best virtual machine ever created, hundreds if not thousands of world-class developers continually improve, poke, prod and scrutinize all aspects of the Java platform. As enhancements are made, GE automatically gains access to them.
    • As Magik has little in the way of support for multi-threading, GE will benefit from current and future Java offerings (e.g. lambda expressions) that aim to further facilitate multi-core/multi-threaded application development.
    • As the JVM is available for many more platforms, it broadens the reach of Magik, including the potential to run on a class of devices never envisioned just a few short years ago. For example, Java SE compatible runtime environments are available for popular embedded ARM/Intel/PowerPC configurations that could theoretically host this software too.
As compared to other JVM language projects, the Magik integration differs in that it represents a serious commercial entity betting a sizable part of its business on the success of this effort.  Expect to see announcements not only from General Electric, but other organizations as they realize the benefits of utilizing the Java Virtual Machine.

  • Is Oracle Database Appliance (ODA) A Best Kept Secret?

    - by Ravi.Sharma
    There is something about Oracle Database Appliance that underscores the tremendous value customers see in the product: repeat purchases. When you buy "one" of something and come back to buy another, it confirms that the product met your expectations, you found good value in it, and perhaps you will continue to use it. But when you buy "one" and come back to buy many more on your very next purchase, it tells you something else. It tells you that you truly believe you have found the best value out there. That you are convinced! That you are sold on the great idea and have discovered a product that far exceeds your expectations and delivers tremendous value! Many Oracle Database Appliance customers are such larger-volume repeat buyers. It is no surprise that the product has deeper penetration in many accounts where a customer made an initial purchase. The value proposition of Oracle Database Appliance is undeniably strong and extremely compelling. This is especially true for customers who are simply upgrading or "refreshing" their hardware (and reusing software licenses). For them, the ability to acquire world-class, highly available database hardware along with leading-edge management software and all of the automation is absolutely a steal. One customer DBA recently said, "Oracle Database Appliance is the best investment our company has ever made". Such extreme statements do not come out of thin air; you have to experience it to believe it. Oracle Database Appliance is a low-cost product, and not many sales managers may be knocking on your door to sell it. But the great value it delivers to small and mid-size businesses and database implementations should not be underestimated.

  • Should one use a separate database for application data and user data?

    - by trycatch
    I’ve been working on a project for a little while and I’m unsure which is the better architecture. I’m interested in the consensus. The answer to me seems fairly obvious but something about it is digging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application, or both in one? The detailed version is.. if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there exists a need to be able to update the application data within this database periodically, should this be stripped into two databases so that one can simply download the updated application data database file as an update and replace the old one? Or should they remain as one database, and the application data be updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason just doesn’t feel right, and I can't pick out quite why.
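
    One concrete middle ground, sketched with SQLite purely as an assumption (the question names no engine): keep user data and application data in separate files but query them together via ATTACH. Shipping an update then means swapping only the application-data file. The table names below are invented, and one caveat applies - SQLite does not enforce foreign keys across attached databases:

        import sqlite3

        conn = sqlite3.connect("userdata.db")                 # user data lives here
        conn.execute("ATTACH DATABASE 'appdata.db' AS app")   # read-mostly application data

        # User rows reference application rows by ID across the two files;
        # the join works normally, but the cross-file FK is by convention only.
        rows = conn.execute("""
            SELECT f.created_at, i.title
            FROM favorites AS f
            JOIN app.items AS i ON i.id = f.item_id""").fetchall()

        # An application-data update is then just: close all connections,
        # replace appdata.db with the newly downloaded file, reconnect.
        conn.close()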

  • Is CUDA, cuBLAS or cuBLAS-XT the right place to start with for machine learning?

    - by Stefan R. Falk
    I am not sure if this is the right forum to post this question - but it surely is no question for Stack Overflow. I am working on my bachelor thesis and am therefore implementing a so-called Echo State Network, which is basically an artificial neural network that has a large reservoir of randomly initialized neurons and just a few input and output neurons... but I think we can skip that. The thing is, there is a Python library called Theano which I am using for this implementation. It encapsulates the CUDA API and offers a quite "comfortable" way to access the power of an NVIDIA graphics card. There is a sub-library called cuBLAS (Basic Linear Algebra Subprograms) for linear-algebra operations and, since CUDA 6.0, also cuBLAS-XT, an extension which allows calculations to run on multiple graphics cards. My question at this point is whether it would make sense to start using cuBLAS and/or cuBLAS-XT right now, since the API is quite complex, or rather to wait for libraries that will build on those libraries (as Theano does on basic CUDA)? If you think this is the wrong place for this question, please tell me which one is - thank you.
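
    For context, a minimal sketch of the abstraction level Theano already provides: the expression below is compiled for the GPU (where the dot products are ultimately backed by CUDA BLAS routines) without the caller ever touching cuBLAS. The reservoir update is a generic ESN step written for illustration, not anything from the thesis:

        import numpy as np
        import theano
        import theano.tensor as T

        x = T.vector("x")        # current reservoir state
        u = T.vector("u")        # input at this time step
        W = T.matrix("W")        # recurrent reservoir weights
        W_in = T.matrix("W_in")  # input weights

        # One ESN state update: x' = tanh(W.x + W_in.u); Theano compiles
        # this graph for the GPU when one is configured.
        step = theano.function([x, u, W, W_in],
                               T.tanh(T.dot(W, x) + T.dot(W_in, u)))

        n, k = 1000, 10  # reservoir and input sizes, chosen arbitrarily
        state = step(np.zeros(n, dtype="float32"),
                     np.random.randn(k).astype("float32"),
                     (np.random.randn(n, n) * 0.1).astype("float32"),
                     np.random.randn(n, k).astype("float32"))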

  • A good machine learning technique to weed out good URLs from bad

    - by git-noob
    Hi, I have an application that needs to discriminate between good HTTP GET requests and bad ones. For example: http://somesite.com?passes=dodgy+parameter # BAD http://anothersite.com?passes=a+good+parameter # GOOD My system can make a binary decision about whether or not a URL is good or bad - but ideally I would like it to predict whether or not a previously unseen URL is good or bad: http://some-new-site.com?passes=a+really+dodgy+parameter # BAD I feel the need for a support vector machine (SVM)... but I need to learn machine learning. Some questions:
    1) Is an SVM appropriate for this task?
    2) Can I train it with the raw URLs, without explicitly specifying 'features'?
    3) How many URLs will I need for it to be good at predictions?
    4) What kind of SVM kernel should I use?
    5) After I train it, how do I keep it up to date?
    6) How do I test unseen URLs against the SVM to decide whether they are good or bad?
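
    As a starting point, a hedged sketch using scikit-learn (an assumption - the question names no library). Character n-grams stand in for hand-built features, which addresses question 2, and a linear kernel (LinearSVC) is the usual first choice for sparse, high-dimensional text like URLs, which addresses question 4:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Toy training set - in practice, thousands of labelled URLs (question 3)
        urls = ["http://somesite.com?passes=dodgy+parameter",
                "http://anothersite.com?passes=a+good+parameter",
                "http://fine.example.com/index",
                "http://bad.example.net?passes=really+dodgy"]
        labels = ["BAD", "GOOD", "GOOD", "BAD"]

        model = make_pipeline(
            TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),  # raw URLs in, n-grams out
            LinearSVC())
        model.fit(urls, labels)

        # Scoring a previously unseen URL (question 6); retraining on a
        # schedule with freshly labelled URLs keeps it current (question 5).
        print(model.predict(["http://some-new-site.com?passes=a+really+dodgy+parameter"]))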

  • Virtual Machine Manager Error - Error determining default hypervisor

    - by dallasclark
    I have Fedora 11 and am trying to get Xen working (which I think it is already), but the Virtual Machine Manager cannot find the hypervisor. When starting Virtual Machine Manager, I receive the following error: "Error determining default hypervisor. Could not populate a default connection. Make sure the appropriate virtualization packages are installed (kvm, qemu, etc.) and that libvirtd has been restarted to notice the change. A hypervisor connection can be manually added via File - Add Connection." I've restarted libvirtd a few times and tried connecting manually, but can't work it out. Some useful information:

        # lsof | grep xen
        libvirtd  2962  root  mem  REG  253,0  19776  13379  /usr/lib/libxenstore.so.3.0.0
        # service libvirtd status
        libvirtd (pid 2962) is running...

  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of blog posts I am writing about NuoDB. NuoDB is a very innovative and easy-to-use product. I can clearly see how one can scale out NuoDB with ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link). Note: You can download NuoDB from here. In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial. Go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database; if you have not, click on Create Database and create it successfully. Now go to the NuoDB Explorer by clicking on the main tab, and it will ask you for your domain username and password. Enter the username domain and the password bird; alternatively, you can enter the username quickstart and the password quickstart. Once you enter the password you will be able to see the databases. In our example we have installed the sample database, hence you will see the Test database in our Database Hierarchy screen. When you click on a database it will ask for the database login. Note that the database login is different from the domain login, and you will have to enter your database credentials here. In our case the database username is dba and the password is goalie. Once you enter a valid username and password it will display your database. Expand the database further and you will notice various objects in it. Once you have explored the various objects, select any database and click on Open. When you click on Execute, it will display the SQL script to select the data from the table. The autogenerated script displays the entire result set from the database. The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements, it will list all the available SQL statements right away in the Query Editor; you can see the popup window in the following image. Here is the cool thing for geeks: you can even click on Query Plan and it will display a text-based query plan as well. In the case of a SELECT, the query plan will be much simpler; however, when we write complex queries it gets very interesting. We can use the Query Plan tab for performance tuning of the database. Here is another feature: when we click on List Tables in NuoDB Explorer, it lists all the available tables in the query editor. This is very helpful when we are writing a long, complex query. Here is a relatively complex example I have built using INNER JOIN syntax, and right below it I have displayed the query plan, which shows all the little details related to the query. Well, we just wrote a multi-table query and executed it against the NuoDB database. You can use the NuoDB Admin section to do various analyses of the query and its performance. NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees. It allows you to add Transaction Engine processes to a running system to improve the performance of your system. You can also add a second Storage Engine to your running system for redundancy purposes. Conversely, you can shut down processes when you don't need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments. If you have read my blog posts and have not tried out NuoDB, I strongly suggest that you download it today and catch up with me on the learnings. Trust me: though the product is very powerful, it is extremely easy to learn and use. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

  • Circular references in TFS Database Edition

    - by Jaco Pretorius
    I'm using TFS Database Edition to script a number of databases. Many of the databases have references between them - for example, a view in database A might do select ... from B..TableX. This works fine as long as database B is also a project in the solution. The problem comes in when I have objects in database A referencing database B, and database B referencing objects in database A. It seems Visual Studio needs to build the projects in order, which is obviously not possible in this case. How do you deal with circular references between database projects in TFS Database Edition?

  • How to change hardcoded database paths in a Lotus Approach Database

    - by asoltys
    One of our Lotus Approach files (file extension .apr) has hardcoded paths to additional underlying database files (file extension .dbf). We've moved the Approach database and all the underlying files to a new network drive, so these paths need to be updated. You can see the old paths by opening the File menu and selecting "Approach File Properties", under the "Databases" heading. How can you change these paths?

  • Machine Learning Algorithm for Peer-to-Peer Nodes

    - by FreshCode
    I want to apply machine learning to a classification problem in a parallel environment. Several independent nodes, each with multiple on/off sensors, can communicate their sensor data with the goal of classifying an event as defined by a heuristic, training data or both. Each peer will be measuring the same data from their unique perspective and will attempt to classify the result while taking into account that any neighbouring node (or its sensors or just the connection to the node) could be faulty. Nodes should function as equal peers and determine the most likely classification by communicating their results. Ultimately each node should make a decision based on their own sensor data and their peers' data. If it matters, false positives are OK for certain classifications (albeit undesirable) but false negatives would be totally unacceptable. Given that each final classification will receive good or bad feedback, what would be an appropriate machine learning algorithm to approach this problem with if the nodes could communicate with each other to determine the most likely classification?
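
    One candidate that fits the good/bad feedback loop described above is the classic weighted-majority scheme: each node keeps a reliability weight per peer, takes a weighted vote, and shrinks the weight of peers whose votes feedback later marks as wrong. The sketch below is an illustration of that idea, not a prescription; the asymmetric threshold is one way to encode "false negatives are unacceptable":

        class WeightedMajorityClassifier:
            """Fuse peer votes with per-peer weights (weighted majority algorithm)."""

            def __init__(self, peer_ids, beta=0.7):
                self.weights = {p: 1.0 for p in peer_ids}
                self.beta = beta  # weight multiplier applied on a wrong vote

            def classify(self, votes, threshold=0.3):
                # votes: {peer_id: True/False}, True meaning "event detected".
                # A low threshold biases toward positives, since false
                # negatives are unacceptable here.
                total = sum(self.weights[p] for p in votes)
                yes = sum(self.weights[p] for p in votes if votes[p])
                return yes >= threshold * total

            def feedback(self, votes, truth):
                # Shrink the influence of peers that voted against the truth;
                # a faulty sensor, node or link loses weight quickly.
                for p, v in votes.items():
                    if v != truth:
                        self.weights[p] *= self.beta

        node = WeightedMajorityClassifier(["A", "B", "C"])
        decision = node.classify({"A": True, "B": False, "C": False})
        node.feedback({"A": True, "B": False, "C": False}, truth=True)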

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of a provisioned database depends on the service template selected by the Self Service user when requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Provisioned databases often require production-scale data, whether for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So we populate the database using the post deployment script option, without any additional work for the SSA users; the SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook. In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, and 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even plug in your own method as part of the post deployment script option.

    Using Data Pump to populate databases

    These are the steps to follow to extend DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

        Figure-1: Full export of database using Data Pump

    2. Create a post deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned:

        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Running import
            h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

        Figure-2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database - include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

        Figure-3: Using the Custom Script option for calling the import SQL

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data via the post deployment step.

    Using Transportable Tablespaces to populate databases

    A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big endian to little endian, or vice versa] depending on the platforms of production and of the destination where DBaaS creates the test database. Here is a sample backup script that shows how to find out if any conversion is required, and describes the steps required to convert datafiles and back up the tablespaces.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database - all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post deployment script.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data via the post deployment step.

    More Information:
    Database-as-a-Service on Exadata Cloud
    Podcast on Database as a Service using Oracle Enterprise Manager 12c
    Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    DBaaS Cookbook
    Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

  • What's a good pure-python, document-based and flat-file database engine?

    - by joe Simpson
    Hi, for development I'd love to have a flat-file database with the requirements up in the title, but I don't seem to be able to find a database with these requirements. I can't seem to get MetaKit to work. I only need it to work on the development machine; in the real world my product will have more data, need more room, and need something better. Does anyone know of a database engine capable of this, or do I need to just use Python's pickle to load and save a file? Joe
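
    In the spirit of the pickle fallback, here is a hedged sketch of a tiny pure-Python document store over a single JSON flat file. All names (DocStore, dev.db.json) are invented, and it is only suitable for development-sized data, since the whole file is rewritten on every insert:

        import json
        import os

        class DocStore:
            """A toy document database: a list of dicts persisted to one JSON file."""

            def __init__(self, path="dev.db.json"):
                self.path = path
                self.docs = []
                if os.path.exists(path):
                    with open(path, "r", encoding="utf-8") as f:
                        self.docs = json.load(f)

            def insert(self, doc):
                self.docs.append(doc)
                self._flush()

            def find(self, predicate):
                # Linear scan - fine for development-sized data sets
                return [d for d in self.docs if predicate(d)]

            def _flush(self):
                with open(self.path, "w", encoding="utf-8") as f:
                    json.dump(self.docs, f, indent=2)

        db = DocStore()
        db.insert({"name": "joe", "role": "developer"})
        print(db.find(lambda d: d["role"] == "developer"))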
