Search Results

Search found 32391 results on 1296 pages for 'graph database'.


  • Oracle Database 12c [Japanese title lost to encoding corruption] (8/1)

    - by OTN-J Master
    [The body of this post was Japanese-language text that has been lost to character-encoding corruption. What remains recoverable: it announced an Oracle Database 12c seminar/webcast (8/1) following the product's 7/17 announcement, listed the session topics, and pointed to related coverage at @IT (ITmedia), ZDNet Japan, and a "+IT" news site.]

    Read the article

  • How can I use the Boost Graph Library to lay out vertices?

    - by Mike
    I'm trying to lay out vertices using the Boost Graph Library. However, I'm running into some compilation issues which I'm unsure about. Am I using the BGL in an improper manner? My code is:

    PositionVec position_vec(2);
    PositionMap position(position_vec.begin(), get(vertex_index, g));
    int iterations = 100;
    double width = 100.0;
    double height = 100.0;
    minstd_rand gen;
    rectangle_topology<> topology(gen, 0, 0, 100, 100);
    fruchterman_reingold_force_directed_layout(g, position, topology); // Compile fails on this line

    The diagnostics produced by clang++ (I've also tried GCC) are:

    In file included from test.cpp:2:
    /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:3: error: no member named 'dimensions' in 'boost::simple_point<double>'
      BOOST_STATIC_ASSERT (Point::dimensions == 2);
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    In file included from test.cpp:2:
    In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:13:
    In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/graph_traits.hpp:15:
    In file included from /Volumes/Data/mike/Downloads/boost_1_43_0/boost/tuple/tuple.hpp:24:
    /Volumes/Data/mike/Downloads/boost_1_43_0/boost/static_assert.hpp:118:49: note: instantiated from:
      sizeof(::boost::STATIC_ASSERTION_FAILURE< BOOST_STATIC_ASSERT_BOOL_CAST( B ) >)>\
                                                    ^
    In file included from test.cpp:2:
    /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:3: note: instantiated from:
      BOOST_STATIC_ASSERT (Point::dimensions == 2);
      ^                    ~~~~~~~
    /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:95:31: note: instantiated from:
      BOOST_STATIC_ASSERT (Point::dimensions == 2);
                           ~~~~~~~^
    /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:417:19: note: in instantiation of template class 'boost::grid_force_pairs<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &> >' requested here
      make_grid_force_pairs(topology, position, g)),
      ^
    /Volumes/Data/mike/Downloads/boost_1_43_0/boost/graph/fruchterman_reingold.hpp:431:3: note: in instantiation of function template specialization 'boost::fruchterman_reingold_force_directed_layout<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS, boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, boost::no_property, boost::no_property, boost::listS>, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &>, boost::square_distance_attractive_force, boost::attractive_force_t, boost::no_property>' requested here
      fruchterman_reingold_force_directed_layout
      ^
    test.cpp:48:3: note: in instantiation of function template specialization 'boost::fruchterman_reingold_force_directed_layout<boost::rectangle_topology<boost::random::linear_congruential<int, 48271, 0, 2147483647, 399268537> >, boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS, boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, boost::no_property, boost::no_property, boost::listS>, boost::iterator_property_map<__gnu_cxx::__normal_iterator<boost::simple_point<double> *, std::vector<boost::simple_point<double>, std::allocator<boost::simple_point<double> > > >, boost::vec_adj_list_vertex_id_map<boost::property<boost::vertex_name_t, std::basic_string<char>, boost::no_property>, unsigned long>, boost::simple_point<double>, boost::simple_point<double> &> >' requested here
      fruchterman_reingold_force_directed_layout(g, position, topology);
      ^
    1 error generated.
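
    The error points at a likely fix: the position map's value type must be the topology's own point type, which carries the static dimensions member that the BOOST_STATIC_ASSERT checks for; boost::simple_point<double> does not have one. A minimal self-contained sketch along those lines, assuming Boost 1.43 and that PositionVec/PositionMap are free to be retyped:

    #include <boost/graph/adjacency_list.hpp>
    #include <boost/graph/fruchterman_reingold.hpp>
    #include <boost/graph/topology.hpp>
    #include <boost/property_map/property_map.hpp>
    #include <boost/random/linear_congruential.hpp>
    #include <vector>

    using namespace boost;

    int main() {
        typedef adjacency_list<listS, vecS, undirectedS> Graph;
        typedef rectangle_topology<> Topology;
        typedef Topology::point_type Point;   // provides Point::dimensions == 2
        typedef std::vector<Point> PositionVec;
        typedef iterator_property_map<PositionVec::iterator,
                property_map<Graph, vertex_index_t>::type> PositionMap;

        Graph g(3);
        add_edge(0, 1, g);
        add_edge(1, 2, g);

        minstd_rand gen;
        Topology topology(gen, 0, 0, 100, 100);

        PositionVec position_vec(num_vertices(g));
        PositionMap position(position_vec.begin(), get(vertex_index, g));

        // Same call as above, but the map now holds the topology's point type.
        fruchterman_reingold_force_directed_layout(g, position, topology);
        return 0;
    }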

    Read the article

  • Finding the shortest valid path in a colored-edge graph

    - by user1067083
    Given a directed graph G with edges colored either green or purple, and a vertex s in G, I must find an algorithm that finds the shortest path from s to each vertex in G such that the path includes at most two purple edges (and as many green edges as needed). I thought of running BFS on G after removing all the purple edges, and then, for every vertex whose shortest path is still infinity, doing something extra to try to find it, but I'm kinda stuck, and it takes a lot of running time as well... Any other suggestions? Thanks in advance
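
    A standard approach for this kind of constraint is a layered (product) graph: conceptually keep three copies of G, one for each count of purple edges used so far (0, 1, 2). Green edges stay within a layer, purple edges jump to the next layer, and a single BFS from s in layer 0 then gives, per vertex, the shortest path using at most two purple edges. A minimal C++ sketch under those assumptions (unit edge weights, so plain BFS is enough):

    #include <queue>
    #include <utility>
    #include <vector>

    struct Edge { int to; bool purple; };

    // Shortest path from s to every vertex using at most two purple edges:
    // BFS over a layered graph whose states are (vertex, purple edges used).
    std::vector<int> shortestWithTwoPurple(const std::vector<std::vector<Edge> >& adj, int s) {
        const int n = static_cast<int>(adj.size());
        const int INF = -1;                                  // -1 marks "unreached"
        std::vector<std::vector<int> > dist(3, std::vector<int>(n, INF));
        std::queue<std::pair<int, int> > q;                  // (vertex, purple count)
        dist[0][s] = 0;
        q.push(std::make_pair(s, 0));
        while (!q.empty()) {
            int v = q.front().first, k = q.front().second;
            q.pop();
            for (std::size_t i = 0; i < adj[v].size(); ++i) {
                const Edge& e = adj[v][i];
                int nk = k + (e.purple ? 1 : 0);
                if (nk > 2) continue;                        // would exceed two purple edges
                if (dist[nk][e.to] == INF) {                 // first visit = shortest (unit weights)
                    dist[nk][e.to] = dist[k][v] + 1;
                    q.push(std::make_pair(e.to, nk));
                }
            }
        }
        std::vector<int> best(n, INF);                       // min over the three layers
        for (int v = 0; v < n; ++v)
            for (int k = 0; k < 3; ++k)
                if (dist[k][v] != INF && (best[v] == INF || dist[k][v] < best[v]))
                    best[v] = dist[k][v];
        return best;
    }

    The running time stays O(3 * (V + E)), i.e. linear, since each vertex is visited at most once per layer.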

    Read the article

  • Database OR Array

    - by rezoner
    What is the exact point of using an external database system if I have simple relations (95% of queries are dependent on ID)? I am storing users and their stats. Why would I use an external database if I can have neat constructions like: db.users[32] = something. An array of 500K users is not that big an effort for RAM. Pros are: no problematic asynchronicity (instant results), easy export/import, and dealing with the database like with a native object, LITERALLY. P.S. and considerations: Would it be faster or slower to do collection[3] than db.query("select ...? I am going to store it as a file/s. There is only ONE application/process accessing this data, and the code is executed line by line - please don't elaborate about locking. Please don't answer with database propositions but with reasons to use an external DB over a native array/object - I have experience with a few databases - that's not the point. What I am building is a client/gateway/server(s) game. The gateway deals with all user data: processing, authenticating, writing statistics, etc. No other part of the software needs direct access to this data/database.
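
    For what it's worth, the pattern described - a native in-memory object treated as the database and persisted to a file - is only a few lines. A hedged sketch (Node.js/TypeScript, field names invented, synchronous saves for simplicity since there is a single process):

    import * as fs from "fs";

    interface User { id: number; name: string; wins: number; }

    // The whole "database" is a native array, exactly as described above.
    let users: User[] = [];

    const FILE = "users.json";

    // Load once at startup.
    if (fs.existsSync(FILE)) {
        users = JSON.parse(fs.readFileSync(FILE, "utf8"));
    }

    // Instant, synchronous reads/writes - no query layer at all.
    users[32] = { id: 32, name: "rezoner", wins: 7 };

    // Persist periodically or on shutdown; no locking needed with one process.
    function save(): void {
        fs.writeFileSync(FILE, JSON.stringify(users));
    }
    save();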

    Read the article

  • Publish database between two open database connections (Visual Studio 2005)

    - by danielswe
    I have two data connections, one to a local and one to a remote database. How do I copy the local database schema to the remote one? The reason I don't use "Publish to provider" is that I'm not sure I have all the information necessary to do so: I have the database name, server, username and password, but not the "web service address" or "web service password". I work in Visual Studio 2005. The server is an MSSQL 2005 server. I have tried using queries, but I only get errors doing so.

    Read the article

  • Is it possible to detect that a database connection is to a copy rather than to the original database?

    - by user149238
    I have an application that needs to know if it is connected to the original database it was installed with, or if the connection is to a copy of that database. Is there any known method to tell whether the database has been cloned so that the application is no longer connected to the original? I am specifically interested in MS SQL Server and Oracle. I was kicking around ideas for a stored procedure, but that most likely doesn't have access to the hardware, so it cannot confirm the unique hardware information that would somewhat guarantee the database is the one it was originally connected to during installation. I'm trying to prevent/detect cloning of a database so that there is only one true "location of truth". Thanks!
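
    One hedged idea along those lines for the SQL Server side: at install time, record server-identity properties in a table, then have the application compare them at startup. SERVERPROPERTY is a documented built-in; the table and column names below are invented for illustration, and of course a determined cloner could copy these values too - this only detects casual moves/copies:

    -- Install time: capture where this database believes it lives.
    CREATE TABLE dbo.install_fingerprint (
        machine_name  sysname NOT NULL,
        instance_name sysname NULL,
        captured_at   datetime NOT NULL DEFAULT GETDATE()
    );

    INSERT INTO dbo.install_fingerprint (machine_name, instance_name)
    SELECT CONVERT(sysname, SERVERPROPERTY('MachineName')),
           CONVERT(sysname, SERVERPROPERTY('InstanceName'));

    -- Application startup: a mismatch suggests the database was moved or cloned.
    SELECT CASE WHEN machine_name = CONVERT(sysname, SERVERPROPERTY('MachineName'))
                THEN 'original host' ELSE 'copied or moved' END AS verdict
    FROM dbo.install_fingerprint;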

    Read the article

  • Discount Codes Galore

    - by Cassandra Clark
    Saving money is at the top of everyone's list right now. With this in mind, the Oracle Technology Network team has compiled a list of discounts available at the Oracle Store. We are also introducing an Oracle Technology Network member discount from O'Reilly Media. If you subscribe to any of the Oracle Technology newsletters you also saw special discounts from CRC Press, Packt Publishing and Apress. We are going to do our best to bring you more offers like this every month. Now on to the discounts...

    Oracle Store offers - all below expiring May 31st 2010. Don't miss out!

    • Expand Your Productivity with Oracle Open Office and Save 15%. Enter OTNOffice at checkout. Buy Now!
    • Drive Business Agility and Performance with Industry-leading Oracle Database Management Packs. Save 10% when you purchase them at the Oracle Store. Enter OTNDBMP at checkout. Buy Now!
    • 15% Savings on Oracle Virtualization and Unbreakable Linux Support at the Oracle Store. Enter code OTNLinuxVM at checkout. Buy Now!
    • 20% Savings on Oracle SQL Developer Data Modeler. Use OTNSQL at checkout. Buy Now!

    O'Reilly Oracle Technology Network Member Offer: O'Reilly is generously offering Oracle Technology Network members 35% off print books and 40% off eBooks. Browse Oracle titles at http://oreilly.com/pub/topic/oracle. Use discount code TECNT at checkout.

    Read the article

  • DB Schema for ACL involving 3 subdomains

    - by blacktie24
    Hi, I am trying to design a database schema for a web app which has 3 subdomains: a) internal employees, b) clients, c) contractors. The users will be able to communicate with each other to some degree, and there may be some resources that overlap between them. Any thoughts about this schema? Really appreciate your time and thoughts on this. Cheers!

    -- Table structure for table locations
    CREATE TABLE IF NOT EXISTS locations (
      id bigint(20) NOT NULL,
      name varchar(250) NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    -- Table structure for table privileges
    CREATE TABLE IF NOT EXISTS privileges (
      id int(11) NOT NULL AUTO_INCREMENT,
      name varchar(255) NOT NULL,
      resource_id int(11) NOT NULL,
      PRIMARY KEY (id)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10;

    -- Table structure for table resources
    CREATE TABLE IF NOT EXISTS resources (
      id int(11) NOT NULL AUTO_INCREMENT,
      name varchar(255) NOT NULL,
      user_type enum('internal','client','expert') NOT NULL,
      PRIMARY KEY (id)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

    -- Table structure for table roles
    CREATE TABLE IF NOT EXISTS roles (
      id int(11) NOT NULL AUTO_INCREMENT,
      name varchar(255) NOT NULL,
      type enum('position','department') NOT NULL,
      parent_id int(11) DEFAULT NULL,
      user_type enum('internal','client','expert') NOT NULL,
      PRIMARY KEY (id)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

    -- Table structure for table role_perms
    CREATE TABLE IF NOT EXISTS role_perms (
      id int(11) NOT NULL AUTO_INCREMENT,
      role_id int(11) NOT NULL,
      privilege_id int(11) NOT NULL,
      mode varchar(250) NOT NULL,
      PRIMARY KEY (id)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

    -- Table structure for table users
    CREATE TABLE IF NOT EXISTS users (
      id int(10) unsigned NOT NULL AUTO_INCREMENT,
      email varchar(255) NOT NULL,
      password varchar(255) NOT NULL,
      salt varchar(255) NOT NULL,
      type enum('internal','client','expert') NOT NULL,
      first_name varchar(255) NOT NULL,
      last_name varchar(255) NOT NULL,
      location_id int(11) NOT NULL,
      phone varchar(255) NOT NULL,
      status enum('active','inactive') NOT NULL DEFAULT 'active',
      PRIMARY KEY (id)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

    -- Table structure for table user_perms
    CREATE TABLE IF NOT EXISTS user_perms (
      id int(11) NOT NULL AUTO_INCREMENT,
      user_id int(11) NOT NULL,
      privilege_id int(11) NOT NULL,
      mode varchar(250) NOT NULL,
      PRIMARY KEY (id)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

    -- Table structure for table user_roles
    CREATE TABLE IF NOT EXISTS user_roles (
      id int(11) NOT NULL,
      user_id int(11) NOT NULL,
      role_id int(11) NOT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Read the article

  • Oracle Database 12c Training and Certification: What’s in it for Me?

    - by KJones
    Oracle Database 12c has officially launched! Through Oracle University, our expert instructors can introduce you to the features and functions of this new Oracle Database 12c product. Through training courses and certification exam prep seminars, you can build up your database knowledge and apply this knowledge to advance your career.

    Already an Oracle Database Expert? Why Oracle Database 12c Training is Still a Good Idea
    Oracle is the industry leader for database technology and takes the release of new products very seriously. We continue to listen to customer needs and add features and functionality to address those needs. Oracle Database 12c is no exception. The following areas have been greatly enhanced and should be considered for your additional training or certification:
    • Database for Cloud Computing
    • Compression and Information Lifecycle Management (ILM)
    • Improved Performance & Scalability
    • Extreme Availability
    • Security Defense in Depth
    • Manageability

    Oracle Certified Database Administrators Reap Career Rewards
    Becoming an expert user of database technology through Oracle University's certification program widens your skill set and demonstrates your expertise in implementing the most advanced database technology available. By doing so, you'll make yourself a more marketable employee and candidate in the job market. Reasons to become an Oracle Certified Database Administrator of Oracle Database 12c:
    • The new Oracle Database 12c certifications emphasize more advanced skills that align with industry standards, resulting in an even more valuable credential for customers and partners.
    • The Oracle Certified Associate (OCA) for Oracle Database 12c centers upon certification objectives that measure IT professionals' day-to-day skills, along with your ability to manage challenges.
    • Building upon all of the competencies incorporated into Oracle's Database 12c OCA certification, the Oracle Certified Professional (OCP) for Oracle Database 12c certification includes advanced knowledge and skills required of top-performing database administrators.
    • The Oracle Certified Master (OCM) for Oracle Database 12c - a very challenging and elite top-level certification - certifies the most highly skilled and experienced database experts.
    • Oracle offers 12c upgrade paths for existing Oracle Certified Professionals (OCP) and Oracle Certified Masters (OCM).

    Database 12c Training and Certification: Built with Your Input
    When creating Oracle Database 12c training courses and certifications, we wanted to know which tasks are most important in a DBA's day-to-day work. Instead of assuming what those tasks might be, we decided to develop a job task analysis survey for DBAs. The response rate from DBAs around the world was overwhelming! The survey focused on the following key database areas:
    • DBA Core Essentials
    • Database Storage
    • High Availability
    • Scalability
    • Networking
    • Security
    • Very Large Database Administration
    • Distributed Databases
    After conducting this survey, we took this specific feedback and used it to help mold the new Oracle Database 12c training and certification curriculum. The benefit to you? You now have access to Oracle Database 12c courses and certification exams that were created with your specific on-the-job tasks in mind.

    Explore Oracle Database 12c Training & Certification Today
    Investing in Oracle Database 12c training courses and certifications will help you develop a great deal of knowledge, experience and expertise. Explore our portfolio of offerings to determine which skills you need as a DBA to get up to speed on Oracle Database 12c technology. Questions or comments about the new Oracle Database 12c offerings? Let us know in the comments below.
    - Diana Gray, Principal Curriculum Product Manager, and Raza Siddiqui, Senior Principal Curriculum Product Manager

    Read the article

  • updating changes from one database to another database in the same server

    - by Pavan Kumar
    I have a copy of a client database, say 'DBCopy', which already contains modified data. The copy of the client database (DBCopy) is attached to the SQL Server where the central database (DBCentral) exists. I then want to apply whatever changes are present in DBCopy to DBCentral. Both DBCopy and DBCentral have the same schema. How can I do it programmatically using C#.NET, maybe with a button click? Can you give me example code showing how to do it? I am using SQL Server 2005 Standard Edition and VS 2008 SP1. In the actual scenario there are about 7 client databases, all with the same schema as the central database. I bring a copy of each client database, attach it to the central server where the central database resides, and try to apply the changes present in each copy to the central database, one by one, programmatically using C#.NET. The clients and the central server are physically separate machines in different places; they are not interconnected. I only need to update and insert new data; I am not bothered about deletion of data. Thanks and regards, Pavan
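
    A hedged sketch of one way to do the insert/update pass from C#: since both databases sit on the same server instance, plain T-SQL with three-part names can do the work, and ADO.NET just executes it per table. The table name (Orders), key (OrderID) and columns are invented placeholders; note SQL Server 2005 predates MERGE, hence the separate UPDATE and INSERT:

    using System;
    using System.Data.SqlClient;

    class SyncCopyToCentral
    {
        static void Main()
        {
            // Assumed connection string - adjust server and authentication.
            const string conn = "Data Source=.;Initial Catalog=DBCentral;Integrated Security=True";
            const string sql = @"
                -- Update rows that exist in both databases.
                UPDATE c SET c.Name = s.Name, c.Amount = s.Amount
                FROM DBCentral.dbo.Orders AS c
                JOIN DBCopy.dbo.Orders AS s ON s.OrderID = c.OrderID;

                -- Insert rows present only in the copy.
                INSERT INTO DBCentral.dbo.Orders (OrderID, Name, Amount)
                SELECT s.OrderID, s.Name, s.Amount
                FROM DBCopy.dbo.Orders AS s
                WHERE NOT EXISTS (SELECT 1 FROM DBCentral.dbo.Orders c
                                  WHERE c.OrderID = s.OrderID);";

            using (SqlConnection connection = new SqlConnection(conn))
            using (SqlCommand command = new SqlCommand(sql, connection))
            {
                connection.Open();
                int rows = command.ExecuteNonQuery();
                Console.WriteLine("Rows affected: " + rows);
            }
        }
    }

    In a button-click handler the body of Main would move into the event method; repeating the statement pair per table (or generating it from the schema) covers all 7 client copies.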

    Read the article

  • SQL SERVER – Move Database Files MDF and LDF to Another Location

    - by pinaldave
    When a novice DBA or developer creates a database, they use SQL Server Management Studio. Additionally, the T-SQL script to create a database is very easy as well: you can just write CREATE DATABASE DatabaseName and it will create a new database for you. The point to remember here is that it will create the database at the default location specified for the SQL Server instance (this default location can be changed, and we will see that in future blog posts). Now, once the database goes into production it will start to grow. It is not common to keep the database at the same location where the OS is installed. Usually database files are on a SAN, a separate disk array, or on SSDs. This is done for performance reasons and from a manageability perspective. The challenge comes up when a database which was installed at a non-preferred default location needs to move to a different location. Here is a quick tutorial on how you can do it. Let us assume we have two folders, loc1 and loc2. We want to move the database files from loc1 to loc2.

    USE MASTER;
    GO
    -- Take database in single user mode
    -- if you are facing errors
    -- This may terminate your active transactions for the database
    ALTER DATABASE TestDB
    SET SINGLE_USER
    WITH ROLLBACK IMMEDIATE;
    GO
    -- Detach DB
    EXEC MASTER.dbo.sp_detach_db @dbname = N'TestDB'
    GO

    Now move the files from loc1 to loc2. You can then reattach the files at their new locations.

    -- Move MDF File from Loc1 to Loc 2
    -- Re-Attach DB
    CREATE DATABASE [TestDB] ON
    ( FILENAME = N'F:\loc2\TestDB.mdf' ),
    ( FILENAME = N'F:\loc2\TestDB_log.ldf' )
    FOR ATTACH
    GO

    Well, we are done. There is a little warning here for you: if you do ROLLBACK IMMEDIATE you may terminate your active transactions, so do not use it randomly. Do it only if you are confident the transactions are not needed, or if for some reason there is a connection to the database which you are not able to kill manually after review. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Database Snapshot in SQL Server 2005

    A database snapshot is a read-only, static view of a database (called the source database). Each database snapshot is transactionally consistent with the source database at the moment of the snapshot's creation. When you create a database snapshot, the source database will typically have open transactions. Before the snapshot becomes available, the open transactions are rolled back to make the database snapshot transactionally consistent.
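
    For reference, creating and querying a snapshot is a single CREATE DATABASE statement. A hedged T-SQL sketch: the database, logical file, path and table names below are placeholders, and database snapshots require Enterprise Edition in SQL Server 2005:

    -- Create a read-only, static snapshot of the source database SalesDB.
    CREATE DATABASE SalesDB_snapshot_20100101
    ON ( NAME = SalesDB_Data,                         -- logical name of the source data file
         FILENAME = 'C:\snapshots\SalesDB_Data.ss' )  -- sparse file backing the snapshot
    AS SNAPSHOT OF SalesDB;

    -- Query it like any database; it returns data as of snapshot creation.
    SELECT COUNT(*) FROM SalesDB_snapshot_20100101.dbo.Orders;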

    Read the article

  • Firebird 2.1: gfix -online returns "database shutdown"

    - by darvids0n
    Hey all. Googling this one hasn't made a bit of difference, unfortunately, as most results specify the syntax for onlining a database after using gfix -shut -force 30 (or any other number of seconds) as gfix -online dbname, and I have run gfix -online dbname with and without login credentials for the DB in question. The message that I get is: database dbname shutdown Which is fine, except that I want to bring it online now. It's out of the question to close fbserver.exe (running on a Windows box, afaik it's Classic Server 2.1.1 but it may be Super) since we have other databases running off of that which need almost 24/7 uptime. The message from doing another gfix -shut -force or -attach or -tran is invalid shutdown mode for dbname which appears to match with the documentation of what happens if the database is already fully shut down. Ideas and input greatly appreciated, especially since at the moment time is a factor for me. Thanks! EDIT: The whole reason I shut down the DB is to clear out "active" transactions which were linked to a specific IP address, and that computer is my dev terminal (actually a virtual machine where I develop frontends for the database software) but I had no processes connecting to the database at the time. They looked like orphaned transactions to me, and they weren't in limbo afaik. Running a manual sweep didn't clear them out, deleting the rows from MON$STATEMENTS didn't work even though Firebird 2.1 supposedly supports cancelling queries that way. My last resort was to "restart" the database, hence the above issue.

    Read the article

  • Collaborate10 – THEconference

    - by jean-pierre.dijcks
    After spending a few days in Mandalay Bay's THEHotel, I guess I now call everything THE... Seriously, they even tag their toilet paper with THEtp... I guess the brand builders in Vegas thought that once you are on to something you keep on doing it, and granted, it is a nice hotel with nice rooms.

    THEanalytics
    Most of my collab10 experience was in a room called Reef C, where the BIWA bootcamp was held: two solid days of BI, warehousing and analytics organized by the BIWA SIG at IOUG. I didn't get to see all the sessions, but what struck me was the high interest in analytics. Marty Gubar's OLAP session was full and he did some very nice things with the OLAP option. The cool bit was that he actually gets all the advanced calculations in OLAP to show up in OBI EE without any effort. It was nice to see that the idea from OWB, where you generate an RPD, is now also in AWM. I think it makes life so much simpler to generate these RPDs from your data model. Even if the end RPD needs some tweaking, it is all a lot less effort to get something going. You can see this stuff for yourself in this demo (click here). OBI EE uses just SQL to get to the calculations, and so, if you prefer APEX, you can build your application there and get the same nice calculations in an APEX application. Marty also showed the Simba MDX driver used with Excel. I guess we should call that THEcoolone... and it is very slick and wonderfully useful for all of you who actually know Excel. The nice thing is that you leverage pure Excel for all operations (no plug-ins). That means no new tools to learn, no new controls: all just pure Excel.

    THEdatabasemachine
    I got some very good questions in my "what makes Exadata fast" session, and overall the interest in Exadata is overwhelming. One of the things I did try to do in my session is to get people to think in new patterns rather than in patterns based on Oracle 9i running on some random hardware configuration. We talked a little bit about the common over-indexing and how everyone has to unlearn all of that on Exadata. The main thing, however, is that everyone needs to get used to the sheer size of some of the components in a Database Machine V2: 5TB of flash cache is a lot of very fast data storage, and half a TB of memory gets quite interesting as well. So what I did there was really focus on some of the content in these earlier posts on Upward ILM and In-Memory processing. In short, I do believe these newer media point out a trend: in-memory and other fast media will get cheaper and will see more use. Some of that we do automatically by adding new functionality, but in some cases I think the end user of the system needs to start thinking about how to leverage all this new hardware. I think most people got very excited about these new capabilities and opportunities.

    THEcoolkids
    One of the cool things about the BIWA track was the hands-on track. Very cool to see big crowds for both the OLAP and OWB hands-on sessions. Also quite nice to see that the folks at RittmanMead spent so much time preparing for that session. While all of them put down cool stuff, none was more cool than seeing Data Mining on an Apple iPad... it all just looks great on an iPad! Very disappointing to see that Mark Rittman still wasn't showing OWB on his iPad ;-)

    THEend
    All in all this was a great set of sessions in the BIWA track. Lots of value to our guests (we hope) and we hope they all come again next year!

    Read the article

  • Is Oracle Database Appliance (ODA) A Best Kept Secret?

    - by Ravi.Sharma
    There is something about Oracle Database Appliance that underscores the tremendous value customers see in the product: repeat purchases. When you buy "one" of something and come back to buy another, it confirms that the product met your expectations, that you found good value in it, and that perhaps you will continue to use it. But when you buy "one" and come back to buy many more on your very next purchase, it tells something else: that you truly believe you have found the best value out there. That you are convinced! That you are sold on the great idea and have discovered a product that far exceeds your expectations and delivers tremendous value! Many Oracle Database Appliance customers are such large-volume repeat buyers. It is no surprise that the product has deeper penetration in many accounts where a customer made an initial purchase. The value proposition of Oracle Database Appliance is undeniably strong and extremely compelling. This is especially true for customers who are simply upgrading or "refreshing" their hardware (and reusing software licenses). For them, the ability to acquire world-class, highly available database hardware along with leading-edge management software and all of the automation is absolutely a steal. One customer DBA recently said, "Oracle Database Appliance is the best investment our company has ever made". Such extreme statements do not come out of thin air; you have to experience it to believe it. Oracle Database Appliance is a low-cost product, so not many sales managers may be knocking on your door to sell it. But the great value it delivers to small and mid-size businesses and database implementations should not be underestimated.

    Read the article

  • Should one use a separate database for application data and user data?

    - by trycatch
    I’ve been working on a project for a little while and I’m unsure which is the better architecture. I’m interested in the consensus. The answer to me seems fairly obvious but something about it is digging at me and I can't pick out what. The TL;DR is: how do you handle a program with application data and user data in the same DB which needs to be able to receive updates to the application data periodically? One database for user data and one for application, or both in one? The detailed version is.. if an application has a database which needs to maintain application data AND user data, and the user data all references application data, it feels more natural to me to store them in the same database. But if there exists a need to be able to update the application data within this database periodically, should this be stripped into two databases so that one can simply download the updated application data database file as an update and replace the old one? Or should they remain as one database, and the application data be updated via a script which inserts the new data into the existing database? The second sounds clearly preferable to me... but for some reason just doesn’t feel right, and I can't pick out quite why.

    Read the article

  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of blog posts I am writing about NuoDB. NuoDB is a very innovative and easy-to-use product. I can clearly see how one can scale out NuoDB with so much ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link). Note: You can download NuoDB from here. In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial. Go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database; if you have not, click on Create Database and create it successfully. Now go to the NuoDB Explorer by clicking on the main tab, and it will ask you for your domain username and password. Enter "domain" as the username and "bird" as the password; alternatively, you can enter "quickstart" for both the username and the password. Once you enter the password you will be able to see the databases. In our example we have installed the sample database, hence you will see the Test database in the Database Hierarchy screen. When you click on a database it will ask for the database login. Note that the database login is different from the domain login, and you will have to enter your database credentials here. In our case the database username is "dba" and the password is "goalie". Once you enter a valid username and password it will display your database. Further expand your database and you will notice various objects in it. Once you have explored the various objects, select any table and click on Open. When you click on Execute, it will display the SQL script to select the data from the table; the autogenerated script returns the entire result set from the table. The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements, it lists all the available SQL statements right away in the Query Editor; you can see the popup window in the following image. Here is the cool thing for geeks: you can even click on Query Plan and it will display a text-based query plan as well. In the case of a simple SELECT the query plan will be much simpler; however, when we write complex queries it gets very interesting. We can use the Query Plan tab for performance tuning of the database. Here is another feature: when we click on List Tables in NuoDB Explorer, it lists all the available tables in the query editor. This is very helpful when we are writing a long, complex query. Here is a relatively complex example I have built using INNER JOIN syntax; right below it I have displayed the query plan, which shows all the little details related to the query. Well, we just wrote a multi-table query and executed it against the NuoDB database. You can use the NuoDB Admin section to do various analyses of the query and its performance. NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees. It allows you to add Transaction Engine processes to a running system to improve the performance of your system. You can also add a second Storage Engine to your running system for redundancy purposes. Conversely, you can shut down processes when you don't need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments. If you have read my blog posts and have not tried out NuoDB, I strongly suggest that you download it today and catch up with the learnings with me. Trust me: though the product is very powerful, it is extremely easy to learn and use. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Circular references in TFS Database Edition

    - by Jaco Pretorius
    I'm using TFS Database Edition to script a number of databases. Many of the databases have references between them - for example, a view in database A might do: select ... from B..TableX This works fine as long as database B is also a project in the solution. The problem comes in when I have objects in database A referencing database B and objects in database B referencing database A. It seems like Visual Studio needs to build the projects in order, which is obviously not possible in this case. How do you deal with circular references between database projects in TFS Database Edition?

    Read the article

  • Am I right about the differences between Floyd-Warshall, Dijkstra's and Bellman-Ford algorithms?

    - by Programming Noob
    I've been studying the three and I'm stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you.

    1. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but fails in cases such as graphs with negative edge weights.
    2. Floyd-Warshall's algorithm is used when any of all the nodes can be a source, so you want the shortest distance to reach any destination node from any source node. This only fails when there are negative cycles. (This is the most important one. I mean, this is the one I'm least sure about :)
    3. Bellman-Ford is used like Dijkstra's, when there is only one source. This can handle negative weights, and its working is the same as Floyd-Warshall's except for one source, right?

    If you need to have a look, the corresponding algorithms are (courtesy Wikipedia):

    Bellman-Ford:

    procedure BellmanFord(list vertices, list edges, vertex source)
       // This implementation takes in a graph, represented as lists of vertices
       // and edges, and modifies the vertices so that their distance and
       // predecessor attributes store the shortest paths.

       // Step 1: initialize graph
       for each vertex v in vertices:
           if v is source then v.distance := 0
           else v.distance := infinity
           v.predecessor := null

       // Step 2: relax edges repeatedly
       for i from 1 to size(vertices)-1:
           for each edge uv in edges: // uv is the edge from u to v
               u := uv.source
               v := uv.destination
               if u.distance + uv.weight < v.distance:
                   v.distance := u.distance + uv.weight
                   v.predecessor := u

       // Step 3: check for negative-weight cycles
       for each edge uv in edges:
           u := uv.source
           v := uv.destination
           if u.distance + uv.weight < v.distance:
               error "Graph contains a negative-weight cycle"

    Dijkstra:

    function Dijkstra(Graph, source):
        for each vertex v in Graph:           // Initializations
            dist[v] := infinity ;             // Unknown distance function from source to v
            previous[v] := undefined ;        // Previous node in optimal path from source

        dist[source] := 0 ;                   // Distance from source to source
        Q := the set of all nodes in Graph ;  // All nodes in the graph are unoptimized - thus are in Q

        while Q is not empty:                 // The main loop
            u := vertex in Q with smallest distance in dist[] ; // Start node in first case
            if dist[u] = infinity:
                break ;                       // all remaining vertices are inaccessible from source

            remove u from Q ;
            for each neighbor v of u:         // where v has not yet been removed from Q.
                alt := dist[u] + dist_between(u, v) ;
                if alt < dist[v]:             // Relax (u,v,a)
                    dist[v] := alt ;
                    previous[v] := u ;
                    decrease-key v in Q;      // Reorder v in the Queue
        return dist;

    Floyd-Warshall:

    /* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j
       (infinity if there is none).
       Also assume that n is the number of vertices and edgeCost(i,i) = 0
    */

    int path[][];
    /* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path
       from i to j using intermediate vertices (1..k-1). Each path[i][j] is initialized to
       edgeCost(i,j).
    */

    procedure FloydWarshall ()
        for k := 1 to n
            for i := 1 to n
                for j := 1 to n
                    path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );

    Read the article

  • How to change hardcoded database paths in a Lotus Approach Database

    - by asoltys
    One of our Lotus Approach files (file extension .apr) has hardcoded paths to additional underlying database files (file extension .dbf). We've moved the Approach database and all the underlying files to a new network drive, so these paths need to be updated. You can see the old paths by opening the File menu and selecting "Approach File Properties", under the "Databases" heading. How can you change these paths?

    Read the article

  • What are graphs in layman's terms

    - by Justin984
    What are graphs, in computer science, and what are they used for? In layman's terms preferably. I have read the definition on Wikipedia: In computer science, a graph is an abstract data type that is meant to implement the graph and hypergraph concepts from mathematics. A graph data structure consists of a finite (and possibly mutable) set of ordered pairs, called edges or arcs, of certain entities called nodes or vertices. As in mathematics, an edge (x,y) is said to point or go from x to y. The nodes may be part of the graph structure, or may be external entities represented by integer indices or references. but I'm looking for a less formal, easier-to-understand definition.
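
    To make the quoted definition concrete, here is about the smallest possible graph in code - three nodes and three edges stored as an adjacency list (just one illustrative sketch; any container would do):

    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // Nodes are the indices 0..2; names are just labels attached to them.
        std::vector<std::string> names = {"Alice", "Bob", "Carol"};

        // adj[x] lists the nodes that x points to - each entry is one edge (x, y).
        std::vector<std::vector<int>> adj = {
            {1, 2},  // Alice -> Bob, Alice -> Carol
            {2},     // Bob   -> Carol
            {}       // Carol -> (no one)
        };

        for (std::size_t x = 0; x < adj.size(); ++x)
            for (int y : adj[x])
                std::cout << names[x] << " -> " << names[y] << "\n";
    }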

    Read the article

  • Designing binary operations (AND, OR, NOT) in graph DBs like neo4j

    - by Nicholas
    I'm trying to create a recipe website using a graph database, specifically neo4j using spring-data-neo4j, to try and see what can be done in graph databases. My model so far is:

    (Chef)-[HAS_INGREDIENT]->(Ingredient)
    (Chef)-[HAS_VALUE]->(Value)
    (Ingredient)-[HAS_INGREDIENT_VALUE]->(Value)
    (Recipe)-[REQUIRES_INGREDIENT]->(Ingredient)
    (Recipe)-[REQUIRES_VALUE]->(Value)

    I have this set up so I can do things like have the "chef" enter ingredients they have on hand and suggest recipes, as well as suggest recipes that are close matches but missing one ingredient. Some recipes can get complex, utilizing AND, OR, and NOT type logic, something like (Milk AND (Butter OR spread OR (vegetable oil OR olive oil))), and I'm wondering if it would be sane to model this in the graph using a tree-type representation. An example of what I was thinking is to create three node types - AND, OR, and NOT - and have each of them connect to the nodes underneath. How else might this be represented in a graph database, or is my example above a decent representation?
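
    One hedged way to picture that tree idea in Cypher (assuming Neo4j 2.x-style labels, which postdate the original question; all names here are invented): the boolean operators become intermediate nodes, and a recipe's requirement becomes a small tree hanging off it:

    // (Milk AND (Butter OR Vegetable oil)) as an explicit requirement tree
    CREATE (r:Recipe {name: 'Pancakes'}),
           (opAnd:Op {kind: 'AND'}),
           (opOr:Op  {kind: 'OR'}),
           (milk:Ingredient {name: 'Milk'}),
           (butter:Ingredient {name: 'Butter'}),
           (oil:Ingredient {name: 'Vegetable oil'}),
           (r)-[:REQUIRES]->(opAnd),
           (opAnd)-[:CHILD]->(milk),
           (opAnd)-[:CHILD]->(opOr),
           (opOr)-[:CHILD]->(butter),
           (opOr)-[:CHILD]->(oil);

    Evaluating the tree (AND = all children satisfied, OR = at least one, NOT = child absent) would then be a traversal or application-side logic, which keeps the recipe definitions themselves purely in the graph.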

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the respective service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, either for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So, we need to populate the database using the post deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal refer to the DBaaS Cookbook. In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways: 1) Data Pump, 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even have your own method plugged in as part of the post deployment script option.

    Using Data Pump to populate databases
    These are the steps to be followed to implement extending DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

    -- Full export
    expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

    Figure-1: Full export of database using Data Pump

    2. Create a post deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned:

    -- Full import
    declare
        h1 NUMBER;
    begin
        -- Creating the directory object where the source database dump is backed up.
        execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
        -- Running import
        h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
        dbms_datapump.set_parallel(handle => h1, degree => 1);
        dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
        dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
        dbms_datapump.start_job(handle => h1);
        dbms_datapump.detach(handle => h1);
    end;
    /

    Figure-2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the customization flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

    Figure-3: Using the Custom Script option for calling the import SQL

    5. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.

    Using Transportable tablespaces to populate databases
    A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big endian to little endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out if any conversion is required and describes the steps required to convert datafiles and back up the tablespaces.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post deployment step.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    6. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.

    More Information:
    Database-as-a-Service on Exadata Cloud
    Podcast on Database as a Service using Oracle Enterprise Manager 12c
    Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    DBaaS Cookbook
    Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article
