Search Results

Search found 30606 results on 1225 pages for 'database relations'.

Page 5 of 1225

  • How can an SQL relational database be used to model a thesaurus? [closed]

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. This thesaurus data model can be defined as: a controlled vocabulary arranged in a known order in which equivalence, hierarchical, and associative relationships among terms are clearly displayed and identified by standardized relationship indicators. My idea so far is to have one database in which every word is a table, and every table contains all words related to that word. e.g. Thesaurus(database) - happy(table) - excited(row)|cheerful(row)|lively(row) Is there a more efficient way to store words and their relationships to other words in a relational SQL database?
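    A common alternative is a single table of words plus a relation table that records each link and its type. The sketch below is illustrative only (table and column names are mine, not from the post):

        -- minimal sketch, standard SQL
        CREATE TABLE word (
            word_id INT PRIMARY KEY,
            term    VARCHAR(100) NOT NULL UNIQUE
        );

        CREATE TABLE word_relation (
            from_word_id  INT NOT NULL,
            to_word_id    INT NOT NULL,
            relation_type VARCHAR(20) NOT NULL,  -- 'equivalence', 'hierarchical', 'associative'
            PRIMARY KEY (from_word_id, to_word_id, relation_type),
            FOREIGN KEY (from_word_id) REFERENCES word (word_id),
            FOREIGN KEY (to_word_id)   REFERENCES word (word_id)
        );

        -- all words related to 'happy'
        SELECT w2.term, r.relation_type
        FROM word w1
        JOIN word_relation r ON r.from_word_id = w1.word_id
        JOIN word w2         ON w2.word_id = r.to_word_id
        WHERE w1.term = 'happy';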

    Read the article

  • To join or not to join - database structure

    - by Industrial
    Hi! We're drawing up the database structure with the help of MySQL Workbench for a new app, and the number of joins required to make a listing of the data is increasing drastically as the many-to-many relationships increase. The questions: Is it really that bad to merge tables where needed and thereby reduce joins? Is there a better way than pivot tables to take care of many-to-many relationships? We discussed instead storing all data in serialized text columns and having the application do the sorting instead of the database, but this seems like a very bad idea, even though the database will be heavily cached. What do you think? Thanks!
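    For reference, the usual shape of a pivot (junction) table and the join it implies, sketched with illustrative names rather than the actual schema:

        -- illustrative many-to-many via a junction table (MySQL syntax)
        CREATE TABLE product (
            product_id INT PRIMARY KEY,
            name       VARCHAR(100) NOT NULL
        );

        CREATE TABLE category (
            category_id INT PRIMARY KEY,
            name        VARCHAR(100) NOT NULL
        );

        CREATE TABLE product_category (
            product_id  INT NOT NULL,
            category_id INT NOT NULL,
            PRIMARY KEY (product_id, category_id),
            FOREIGN KEY (product_id)  REFERENCES product (product_id),
            FOREIGN KEY (category_id) REFERENCES category (category_id)
        );

        -- one indexed join per relationship; the composite primary key keeps it cheap
        SELECT p.name AS product, c.name AS category
        FROM product p
        JOIN product_category pc ON pc.product_id = p.product_id
        JOIN category c          ON c.category_id = pc.category_id;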

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities on their weekend – like traveling, reading or just nothing. Every weekend I try to do something creative and different in the database world. The goal is to learn something new, and if I enjoy my learning experience I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them.
    I should highlight that today's applications use multiple databases, from SQL for transactions and analytics and NoSQL for documents, to in-memory stores for caching and indexing for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases. Not to mention the different quality of service based on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud:
    - Database provisioning & orchestration
    - Slow speed due to hardware issues
    - Poor monitoring tools
    - High network latency
    Now, if you have great software and expert network engineers, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of having top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems which are related to the software/application side as well. Here are the top three things which can be problems if you do not have an application expert:
    - Replication and clustering
    - Simple provisioning of hard drive space
    - Automatic sharding
    Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases, from MySQL, MongoDB and ElasticSearch to Redis, as a service. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions. In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases.
    Here are the few steps needed to get started with Morpheus. Follow along with me. First go to http://www.gomorpheus.com and register for a new and free account.
    Step 1: Signup
    It is very simple to sign up for Morpheus.
    Step 2: Select your database
    I use MySQL for my daily routine, so I selected MySQL. Upon clicking the big red button to add an instance, it prompted a dialog for creating a new instance.
    Step 3: Create User
    Now we just have to create a user in our portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Over here you will notice once again a big red button to create a new user. I created a user with my first name.
    Step 4: Configure your MySQL client
    I used MySQL Workbench and connected to the MySQL instance which I had created, with its IP address and user.
    That's it! You are connected to the MySQL instance. Now you can create your objects just like you would on your local box.
    You will have all the features of Morpheus when you are working with your database.
    Dashboard
    While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. Also, with Morpheus you use the same process for provisioning and connecting to the other databases: MongoDB, ElasticSearch and Redis. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
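    As a quick smoke test once the client is connected, something like the following works; the schema and table names here are made up for illustration and are not part of the Morpheus setup:

        -- connect from MySQL Workbench or the mysql CLI using the instance IP
        -- and the user created in Step 3, then:
        CREATE DATABASE IF NOT EXISTS demo;
        USE demo;
        CREATE TABLE hello (
            id  INT AUTO_INCREMENT PRIMARY KEY,
            msg VARCHAR(100) NOT NULL
        );
        INSERT INTO hello (msg) VALUES ('connected to the cloud instance');
        SELECT * FROM hello;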

    Read the article

  • Should database-models (conceptual or physical) be reviewed by DBAs?

    - by user61852
    Where I work, new applications being developed that will use their own relational database must have their database models (conceptual, then physical) reviewed and approved by DBAs. Things looked after are normalization, antipatterns, table and column naming standards, etc. Is this really a DBA's responsibility, or should it be, to a greater extent, the responsibility of app designers and architects?

    Read the article

  • What would a database look like if it were normalized to be completely abstracted? Let's call it Max(n) normal form

    - by Doug Chamberlain
    edit: By simplest form I was not implying that it would be easy to understand. For instance, developing in low-level assembly language is the simplest way you can develop code, but it is far from the easiest. Essentially, what I am asking is: in math you can simplify a fraction to a point where it can no longer be simplified. Can the same be true for a database, and what would a database look like in its simplest form?

    Read the article

  • PostgreSQL 9.1 Database Replication Between Two Production Environments with Load Balancer

    - by littleK
    I'm investigating different solutions for database replication between two PostgreSQL 9.1 databases. The setup will include two production servers on the cloud (Amazon EC2 X-Large Instances), with an elastic load balancer. What is the typical database implementation for this type of setup? A master-master replication (with Bucardo or rubyrep)? Or perhaps use only one shared database between the two environments, with a shared disk failover? I've been getting some ideas from http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html. Since I don't have a lot of experience in database replication, I figured I would ask the experts. What would you recommend for the described setup?
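    Whichever replication route you take, PostgreSQL 9.1's built-in views make it easy to check that a standby is keeping up. A small sketch, assuming the built-in streaming replication that the linked docs describe as one of the options:

        -- on the primary: one row per connected standby (pg_stat_replication is new in 9.1)
        SELECT client_addr, state, sent_location, replay_location
        FROM pg_stat_replication;

        -- on a standby: confirm it is in recovery and when the last replayed transaction committed
        SELECT pg_is_in_recovery(), pg_last_xact_replay_timestamp();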

    Read the article

  • How should I copy the "mysql" database to my new server using PHPMyAdmin

    - by undefined
    My new webhosting company has set up a MySQL database for me and it has the tables MySQL and Information_schema already there. I want to copy my existing database from another server (a) to the new one (b). I assume I need to overwrite the 'mysql' database on server (b) with the one from my existing server (a), or at least copy over the permissions.
    1) What information does the mysql database hold? Users and permissions I can see, but does it have the login info for phpMyAdmin? I don't want to overwrite that, obviously.
    2) Should I drop the table on server (b) and import my original?
    3) Should I just copy the users table?
    4) Do I need to worry about the information_schema table? Should I copy this over too?
    Thanks
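    Rather than overwriting the whole mysql database on server (b), one common approach is to copy only the grants for the accounts you care about. A sketch with a placeholder account name:

        -- on server (a): see which accounts exist, then capture their grants
        SELECT user, host FROM mysql.user;
        SHOW GRANTS FOR 'myappuser'@'%';   -- 'myappuser' is a placeholder

        -- on server (b): re-run the CREATE USER / GRANT statements that
        -- SHOW GRANTS printed, for example:
        -- GRANT ALL PRIVILEGES ON myappdb.* TO 'myappuser'@'%' IDENTIFIED BY '...';
        FLUSH PRIVILEGES;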

    Read the article

  • Help with understanding generic relations in Django (and usage in Admin)

    - by saturdayplace
    I'm building a CMS for my company's website (I've looked at the existing Django solutions and want something that's much slimmer/simpler, and that handles our situation specifically. Plus, I'd like to learn this stuff better). I'm having trouble wrapping my head around generic relations. I have a Page model, a SoftwareModule model, and some other models that define content on our website, each with their get_absolute_url() defined. I'd like for my users to be able to assign any Page instance a list of objects, of any type, including other page instances. This list will become that Page instance's sub-menu. I've tried the following: class Page(models.Model): body = models.TextField() links = generic.GenericRelation("LinkedItem") @models.permalink def get_absolute_url(self): # returns the right URL class LinkedItem(models.Model): content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') title = models.CharField(max_length=100) def __unicode__(self): return self.title class SoftwareModule(models.Model): name = models.CharField(max_length=100) description = models.TextField() def __unicode__(self): return self.name @models.permalink def get_absolute_url(self): # returns the right URL This gets me a generic relation with an API to do page_instance.links.all(). We're on our way. What I'm not sure how to pull off is how, on the page instance's change form, to create the relationship between that page and any other extant object in the database. My desired end result: to render the following in a template: <ul> {% for link in page.links.all %} <li><a href='{{ link.content_object.get_absolute_url() }}'>{{ link.title }}</a></li> {% endfor%} </ul> Obviously, there's something I'm unaware of or misunderstanding, but I feel like I'm treading into that area where I don't know what I don't know. What am I missing?

    Read the article

  • Oracle Database 12c Related Information Roundup (in Japanese)

    - by OTN-J Master
    This post (in Japanese) rounds up Oracle Database 12c resources for Japanese readers: the Oracle Database 12c pages on OTN Japan, an Oracle Database 12c overview download (PDF), partner seminar information (including sessions with NEC), hands-on seminar registration, technical articles on Oracle Database 12c at @IT / Database Expert and EnterpriseZine / DB Online, Oracle University training for Oracle Database 12c, the Japanese OTN Community for questions and discussion, and the @oracletechnetjp Twitter account for ongoing 12c news and updates.

    Read the article

  • February Download Ranking (Database Edition)

    - by rika.tokumichi
    This post (in Japanese) gives the monthly OTN Japan download ranking for the Database category, covering February 1 to February 28: 1. Oracle SQL Developer 2.1 (2.1.0.63.73); 2. Oracle Database 11g Release 1; 3. Oracle Database 10g Express Edition; 4. Oracle Database 10g Release 2; 5. Oracle Database 11g Release 2. The post comments on the movement at the top of the ranking and then highlights Oracle Database 11g Release 2's Grid Infrastructure, which bundles Oracle Clusterware and Oracle Automatic Storage Management (ASM), with links to related OTN Japan material: a guide to running Oracle Database 11g R2 on Oracle VM (from the Oracle VM Forum 2009 event held on October 30, following the September 2009 release of Oracle Database 11g Release 2), the second part of an article series on database infrastructure consolidation, and the Oracle Database 11g Release 2 product pages.

    Read the article

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue about client-server application design. We have a browser-based management application with many users on the system, so obviously within that application we have a user management module. I have always thought that having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer; if not, the design is poor. What he meant was that if a user wants to use the application, then a user should be created in the user table AND in the database server as a user account as well. So if I have 50 users using my application, then I should have 50 database server logins. I personally think having just one user account in the database server for this database is enough: just grant this user the privileges needed for all the operations the application performs. The users that interact with the application should have their accounts created and managed within the database table, as they are more related to the application layer. I don't see or agree that there is a need to create a database server user account for every user created for the application in the user table. A single database server user should be enough to handle all the queries sent by the application. I really hope to hear some suggestions / opinions on whether I'm missing something, such as performance or security issues. Thank you very much.
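    For what it's worth, the single-account pattern usually looks something like this; all names below are illustrative only:

        -- application-level users live in an ordinary table...
        CREATE TABLE app_user (
            user_id       INT PRIMARY KEY,
            username      VARCHAR(50)  NOT NULL UNIQUE,
            password_hash VARCHAR(255) NOT NULL,
            role          VARCHAR(20)  NOT NULL
        );

        -- ...while the application itself connects with one database server
        -- account that holds only the privileges it actually needs (MySQL syntax):
        CREATE USER 'webapp'@'appserver' IDENTIFIED BY 'change_me';
        GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'webapp'@'appserver';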

    Read the article

  • Oracle Database Smart Flash Cache: Only on Oracle Linux and Oracle Solaris

    - by sergio.leunissen
    Oracle Database Smart Flash Cache is a feature that was first introduced with Oracle Database 11g Release 2. Only available on Oracle Linux and Oracle Solaris, this feature increases the size of the database buffer cache without having to add RAM to the system. In effect, it acts as a second level cache on flash memory and will especially benefit read-intensive database applications. The Oracle Database Smart Flash Cache white paper concludes: Available at no additional cost, Database Smart Flash Cache on Oracle Solaris and Oracle Linux has the potential to offer considerable benefit to users of Oracle Database 11g Release 2 with disk-bound read-mostly or read-only workloads, through the simple addition of flash storage such as the Sun Storage F5100 Flash Array or the Sun Flash Accelerator F20 PCIe Card. Read the white paper.
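    For context, enabling the feature comes down to two initialization parameters; the sketch below uses an example device path and size, not values from the white paper:

        -- illustrative only: point the cache at a flash device and size it,
        -- then restart the instance for the settings to take effect
        ALTER SYSTEM SET db_flash_cache_file = '/dev/flash_device_1' SCOPE = SPFILE;
        ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE = SPFILE;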

    Read the article

  • Database Consolidation onto Private Clouds - updated for Oracle Database 12c

    - by B R Clouse
    One of our team's most popular white papers has been expanded and updated to discuss Oracle Database 12c. Now available on our OTN page, the new version of Database Consolidation onto Private Clouds covers best practices for consolidation with the pluggable databases that the new multitenant architecture provides, and expanded information on the database and schema consolidation options. These are the consolidation models the paper evaluates:
    - server
    - database
    - schema
    - pluggable databases
    Key considerations for consolidating workloads which the paper explores:
    - Choosing a consolidation model
    - How PDBs solve the IT complexity problem
    - Isolation in consolidated environments
    - Cloud pool design
    - Complementary workloads
    - Enterprise Manager 12c for consolidation planning and operations
    Many more white papers have been updated or are new for Oracle Database 12c. We'll continue to highlight those which tie directly to your journey to enterprise cloud.
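    As a concrete illustration of the pluggable-database model, creating and opening a PDB in a 12c container database looks roughly like this (the PDB name, admin user and file paths are examples only):

        CREATE PLUGGABLE DATABASE sales_pdb
          ADMIN USER pdb_admin IDENTIFIED BY example_password
          FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/pdbseed/',
                               '/u01/app/oracle/oradata/cdb1/sales_pdb/');

        ALTER PLUGGABLE DATABASE sales_pdb OPEN;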

    Read the article

  • JPA Inheritance and Relations - Clarification question

    - by Michael
    Here is the scenario: I have a unidirectional 1:N relation from the Person entity to the Address entity, and a bidirectional 1:N relation from the User entity to the Vehicle entity. Here is the Address class: @Entity public class Address implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long int ... The Vehicle class: @Entity public class Vehicle implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @ManyToOne private User owner; ... @PreRemove protected void preRemove() { //this.owner.removeVehicle(this); } public Vehicle(User owner) { this.owner = owner; ... The Person class: @Entity @Inheritance(strategy = InheritanceType.JOINED) @DiscriminatorColumn(name="PERSON_TYP") public class Person implements Serializable { @Id protected String username; @OneToMany(cascade = CascadeType.ALL, orphanRemoval=true) @JoinTable(name = "USER_ADDRESS", joinColumns = @JoinColumn(name = "USERNAME"), inverseJoinColumns = @JoinColumn(name = "ADDRESS_ID")) protected List<Address> addresses; ... @PreRemove protected void prePersonRemove(){ this.addresses = null; } ... The User class, which inherits from the Person class: @Entity @Table(name = "Users") @DiscriminatorValue("USER") public class User extends Person { @OneToMany(mappedBy = "owner", cascade = {CascadeType.PERSIST, CascadeType.REMOVE}) private List<Vehicle> vehicles; ... When I try to delete a User who has an address, I have to use orphanRemoval=true on the corresponding relation (see above) and the preRemove function where the address list is set to null. Otherwise (no orphanRemoval and the address list not set to null) a foreign key constraint fails. When I try to delete a user who has a vehicle, a concurrent access exception is thrown when I do not uncomment the "this.owner.removeVehicle(this);" in the preRemove function of the vehicle. The thing I do not understand is that before I used this inheritance there was only a User class which had all relations: @Entity @Table(name = "Users") public class User implements Serializable { @Id protected String username; @OneToMany(mappedBy = "owner", cascade = {CascadeType.PERSIST, CascadeType.REMOVE}) private List<Vehicle> vehicles; @OneToMany(cascade = CascadeType.ALL) @JoinTable(name = "USER_ADDRESS", joinColumns = @JoinColumn(name = "USERNAME") inverseJoinColumns = @JoinColumn(name = "ADDRESS_ID")) private List<Address> addresses; ... No orphanRemoval, and the Vehicle class used the uncommented statement above in its preRemove function. And I could delete a user who has an address, and I could delete a user who has a vehicle. So why doesn't everything work without changes when I use inheritance? I use JPA 2.0, EclipseLink 2.0.2, MySQL 5.1.x and Netbeans 6.8

    Read the article

  • Database Server Hardware components (order of importance), CPU speed VS CPU cache vs RAM vs DISK

    - by nulltorpedo
    I am new to the database world and would like to know which hardware specs are crucial when it comes to database performance. I have searched the internet and found this so far (in order of decreasing importance): 1) Hard disk: get an SSD, basically (many more IOPS than spinners). 2) Memory: get as much as you can afford. 3) CPU: for the same $ spent, prefer larger cache size over speed. Are these findings sensible? EDIT: I would like to focus on CPU speed vs CPU cache size. EDIT2: The database is used to store some combination of ints and int arrays with a few text fields. There are a lot of SELECT queries looking for existing entries; if an entry is not found, it is inserted. I would say most of the processing is trying to find a match across a table with 200 columns and 20k rows. The insert statements are very few. EDIT3: Also, we have a lot of views (basically select queries).

    Read the article

  • SYS2 Scripts Updated – Scripts to monitor database backup, database space usage and memory grants now available

    - by Davide Mauri
    I’ve just released three new scripts of my “sys2” script collection that can be found on CodePlex: Project Page: http://sys2dmvs.codeplex.com/ Source Code Download: http://sys2dmvs.codeplex.com/SourceControl/changeset/view/57732 The three new scripts are the following sys2.database_backup_info.sql sys2.query_memory_grants.sql sys2.stp_get_databases_space_used_info.sql Here’s some more details: database_backup_info This script has been made to quickly check if and when backup was done. It will report the last full, differential and log backup date and time for each database. Along with these information you’ll also get some additional metadata that shows if a database is a read-only database and its recovery model: By default it will check only the last seven days, but you can change this value just specifying how many days back you want to check. To analyze the last seven days, and list only the database with FULL recovery model without a log backup select * from sys2.databases_backup_info(default) where recovery_model = 3 and log_backup = 0 To analyze the last fifteen days, and list only the database with FULL recovery model with a differential backup select * from sys2.databases_backup_info(15) where recovery_model = 3 and diff_backup = 1 I just love this script, I use it every time I need to check that backups are not too old and that t-log backup are correctly scheduled. query_memory_grants This is just a wrapper around sys.dm_exec_query_memory_grants that enriches the default result set with the text of the query for which memory has been granted or is waiting for a memory grant and, optionally, its execution plan stp_get_databases_space_used_info This is a stored procedure that list all the available databases and for each one the overall size, the used space within that size, the maximum size it may reach and the auto grow options. This is another script I use every day in order to be able to monitor, track and forecast database space usage. As usual feedbacks and suggestions are more than welcome!

    Read the article

  • Learn to use PHP and Python with Oracle Database

    - by christopher.jones
    The Oracle Learning Library has posted up the latest "Oracle By Example" labs giving an introduction to PHP & Python with the Oracle Database:
    - Using PHP with Oracle Database 11g - a basic introduction
    - Developing a PHP Web Application with Oracle Database 11g - a Zend Framework application using the NetBeans IDE
    - Using Python With Oracle Database 11g - a basic introduction
    - Using the Django Framework with Python and Oracle Database 11g - a basic web application

    Read the article

  • Oracle Database Insider Now on LinkedIn

    - by Troy Kitch
    Our close friends over at the Oracle Database Insider blog have recently started a LinkedIn discussion group. Go behind the scenes of the latest Oracle Database announcements and discussions that include Oracle Database 11g and its options, such as Database Security, and the newest product, Oracle Exadata. Come on over to post a discussion topic, an event, ask questions and stay up-to-date on the latest Oracle Database information. We'll be there to join the discussions and answer questions. Join us on LinkedIn's latest group!

    Read the article

  • SQL SERVER – Spatial Database Queries – What About BLOB – T-SQL Tuesday #006

    - by pinaldave
    Michael Coles is one of the most interesting book authors I have ever met. He has a flair for writing complex stuff in simple language. There are very few people like that. I really enjoyed reading his recent book, Expert SQL Server 2008 Encryption. I strongly suggest taking a look at it. This blog is written in response to T-SQL Tuesday #006: "What About BLOB?" by Michael Coles. Spatial Database is my favorite subject. Since I did my TechEd India 2010 presentation, I have enjoyed this subject a lot. Before I continue this blog post, there are a few other blog posts I suggest you read. To help you build the environment and run the queries, I am going to present them in this single blog post.
    SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing: This blog post explains the basics of Spatial Database and also provides a good introduction to the indexing concept.
    SQL SERVER – World Shapefile Download and Upload to Database – Spatial Database: This blog post shows how to load the shape file into the database.
    SQL SERVER – Spatial Database Definition and Research Documents: This blog post links to the white paper about Spatial Database written by Microsoft experts.
    SQL SERVER – Introduction to Spatial Coordinate Systems: Flat Maps for a Round Planet: This blog post links to the white paper explaining coordinate systems, written by Microsoft experts.
    After reading the above listed blog posts, I am very confident that you are ready to run the following script. Once you create a database using the World Shapefile, as mentioned in the second link above, you can display the image of India just like the following. Please note that this is not an accurate political map. The boundary of this map has many errors and it is just a representation. You can run the following query to generate the map of India from the Spatial database which you have created after following the instructions here. USE Spatial GO -- India Map SELECT [CountryName] ,[BorderAsGeometry] ,[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Now, let us find the longitude and latitude of the two major IT cities of India, Hyderabad and Bangalore. I found their values to be the following: the longitude-latitude for Bangalore is 77.5833300000 13.0000000000; for Hyderabad, the longitude-latitude is 78.4675900000 17.4531200000. Now, let us try to put these values on the India map and see their location. -- Bangalore DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326).STBuffer(20000); -- Hyderabad DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326).STBuffer(20000); -- Bangalore and Hyderabad on Map of India SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation) <= 0 UNION ALL SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation1) <= 0 UNION ALL SELECT '',[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Now let us quickly draw a straight line between them.
DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326).STBuffer(10000); DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326).STBuffer(10000); DECLARE @GeoLocation2 GEOGRAPHY SET @GeoLocation2 = GEOGRAPHY::STGeomFromText('LINESTRING(78.4675900000 17.4531200000, 77.5833300000 13.0000000000)',4326) SELECT name, [GeoLocation] FROM [IndiaGeoNames] I WHERE I.[GeoLocation].STDistance(@GeoLocation) <= 0 UNION ALL SELECT name, [GeoLocation] FROM [IndiaGeoNames] I1 WHERE I1.[GeoLocation].STDistance(@GeoLocation1) <= 0 UNION ALL SELECT '' name, @GeoLocation2 UNION ALL SELECT '',[Border] FROM [Spatial].[dbo].[Countries] WHERE Countryname = 'India' GO Let us use the distance function of the spatial database and find the straight-line distance between these two cities. -- Distance Between Hyderabad and Bangalore DECLARE @GeoLocation GEOGRAPHY SET @GeoLocation = GEOGRAPHY::STPointFromText('POINT(78.4675900000 17.4531200000)',4326) DECLARE @GeoLocation1 GEOGRAPHY SET @GeoLocation1 = GEOGRAPHY::STPointFromText('POINT(77.5833300000 13.0000000000)',4326) SELECT @GeoLocation.STDistance(@GeoLocation1)/1000 'KM'; GO The result of the above query is displayed in the following image. As per SQL Server, the distance between these two cities is 501 KM, but according to what I know, the distance between those two cities is around 562 KM by road. However, please note that roads are not straight and they have lots of turns, whereas this is a straight-line distance. What would be more accurate is the distance between these two cities by air travel. When we look at the air travel distance between Bangalore and Hyderabad, the total distance covered is 495 KM, which is very close to what SQL Server has estimated, which is 501 KM. Bravo! SQL Server has accurately provided the distance between the two cities. SQL Server Spatial Database can be very useful simply because it is very easy to use, as demonstrated above. I appreciate your comments, so let me know what your thoughts and opinions about this are. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database

    Read the article

  • Webcast - Oracle Database In-Memory Option

    - by Thanos Terentes Printzios
    Following the recent announcement by Larry Ellison on the future of the database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can learn more about the brand new Oracle Database 12c In-Memory option. Oracle Database In-Memory is Oracle's new memory-optimized technology that transparently accelerates analytic, data warehousing, and reporting workloads, while also accelerating transaction processing (OLTP) workloads. Participants will learn about Oracle Database In-Memory benefits, features, and leading-edge architecture. The Database In-Memory architecture provides the ability to easily process data orders of magnitude faster by simply enabling the feature and identifying tables to bring in-memory, without application changes. Details on Oracle Database In-Memory's ease of use and management, scalability, and availability will also be covered. Please join us to learn more about Oracle Database In-Memory and get first-hand knowledge of this important new feature.
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. These Oracle webcasts are FREE for Customers, System Integrators, ISVs, VARs and Platform Partners.
    Presenter: Richard Jacobs, Oracle Solution Architect
    Europe Webcast 1: August 29, 2014, 10:00 am to 11:00 am Central European Summer Time (CEST). Register Here!
    Europe Webcast 2: September 29, 2014, 10:00 am to 11:00 am Central European Summer Time (CEST). Register Here!
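    For a sense of what "simply enabling the feature and identifying tables" means in practice, the basic steps look roughly like this; the size and table name are examples, not values from the webcast:

        -- size the in-memory column store (requires an instance restart)
        ALTER SYSTEM SET inmemory_size = 16G SCOPE = SPFILE;

        -- after the restart, mark a table for population in the column store
        ALTER TABLE sales INMEMORY PRIORITY HIGH;

        -- check what has been populated
        SELECT segment_name, populate_status FROM v$im_segments;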

    Read the article

  • Should I comment Tables or Columns in my database?

    - by jako
    I like to comment my code with various information, and I think most people nowadays do so while writing some code. But when it comes to database tables or columns, I have never seen anyone setting comments, and, to be honest, I don't even think of looking for comments there. So I am wondering if some people are commenting their DB structure here, and if I should bother commenting, for instance when I add a new column to an existing table?
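    Most engines do support it; the syntax just differs. A couple of illustrative examples (the table and column names are made up):

        -- PostgreSQL / Oracle style
        COMMENT ON TABLE customer IS 'One row per billing account, not per person.';
        COMMENT ON COLUMN customer.status_cd IS 'A = active, S = suspended, C = closed';

        -- MySQL style
        ALTER TABLE customer COMMENT = 'One row per billing account, not per person.';
        ALTER TABLE customer MODIFY status_cd CHAR(1) COMMENT 'A = active, S = suspended, C = closed';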

    Read the article

  • ASP.NET Website Administration Tool: Unable to connect to SQL Server database

    - by MedicineMan
    I am trying to get authentication and authorization working with my ASP MVC project. I've run the aspnet_regsql.exe tool without any problem and see the aspnetdb database on my server (using the Management Studio tool). my connection string in my web.config is: <connectionStrings> <add name="ApplicationServices" connectionString="data source=MYSERVERNAME;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient" /> The error I get is: There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store. The following message may help in diagnosing the problem: Unable to connect to SQL Server database. In the past, I have had trouble connecting to my database because I've needed to add users. Do I have to do something similar here?

    Read the article

  • Firebird database corruption causes

    - by Rytis
    I am running several different Firebird versions (2.0, 2.1) on multiple entry-level Windows-based servers with wildly varying hardware. The only matching thing between them is that they are running the same home-built application with the same database structure. Lately I've been seeing massive slowdowns on multiple servers. It turns out that the database gets corrupted, so each time it breaks, I get to mend, back up and restore the database, and it is all fine for some time (1-2 weeks), and then it repeats once again. Thankfully, I haven't seen any data loss or damage... yet. The thing is that every such downtime results in lost productivity, and often quite some driving for me as some of the databases are in remote locations. I've been trying to find out what's causing the corruption, but I haven't been able to. The fact that it's running on different hardware hints that it should not be a hardware-based problem. If we rule out hardware issues, I have a bad feeling that it's a bug in Firebird, as I'm not doing anything fancy via SQL. Do you have any idea how to find out exactly what's causing the corruption and hopefully fix the problem?

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background
    The "all-PK-must-be-surrogates" approach is not present in Codd's relational model or any SQL standard (ANSI, ISO or other). Canonical books seem to elude this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same as the canonical books do: use surrogate keys as primary keys when:
    - there are no natural or business keys;
    - natural or business keys are bad (change often);
    - the value of the natural or business key is not known at the time of inserting the record;
    - multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose.
    Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to be changed only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changing, so we will have to use the RDBMS cascade update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and having to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for a PK, but indexes based on short, fixed-length varchars are almost as fast as integers.
    The questions
    - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach?
    - Has Codd's relational model been superseded by a newer relational model?

    Read the article

  • What is the best way to do database testing (MySQL specific)

    - by justjoe
    Right now I'm testing something in a database. It's a WordPress database. I have to write, delete and do other operations on it. As you know, it has an indexing mechanism that will always make every new post inherit the next highest possible ID. Please consider that this database is a copy of a database in use; it has been written to before. So I will need to make sure that when I finish my testing, it will be the same. Right now, my only solution is making backups. So when I end a section of planned testing, I back it up and start the next test on another copy of it. Fortunately, the size of the database is small, so deleting, copying and backing it up is easy. But I know this way of database testing is only a partial solution: it forces me to create too many backup copies. I don't know what I would do if the database were bigger; it would be a very long testing nightmare. So I wonder, is there any solution that works just like rollback? It would just lock the database and put new entries in some kind of cache, which I could then erase or write into the database. I use MySQL and phpMyAdmin, and I use them to develop a custom solution. EDIT: How do I effectively test a database when developing a PHP solution?
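    Transactions give roughly the rollback behaviour described, provided the tables use a transactional engine such as InnoDB (MyISAM, still common in older WordPress installs, ignores them). A minimal sketch:

        START TRANSACTION;
        -- ...run the INSERTs, UPDATEs and DELETEs of one test section here...
        ROLLBACK;  -- undoes the changes; note that AUTO_INCREMENT values
                   -- consumed inside the transaction are not given back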

    Read the article
