Search Results

Search found 57000 results on 2280 pages for 'alter database open'.

Page 90/2280 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Do these tables respect 3NF database normalization?

    - by penas
    AUTHOR table: Author_ID (PK), First_Name, Last_Name
    TITLES table: TITLE_ID (PK), NAME, Author_ID (FK)
    DOMAIN table: DOMAIN_ID (PK), NAME, TITLE_ID (FK)
    READERS table: READER_ID (PK), First_Name, Last_Name, ADDRESS, CITY_ID (FK), PHONE
    CITY table: CITY_ID (PK), NAME
    BORROWING table: BORROWING_ID (PK), READER_ID (FK), TITLE_ID (FK), DATE
    HISTORY table: READER_ID, TITLE_ID, DATE_OF_BORROWING, DATE_OF_RETURNING
    Do these tables respect 3NF database normalization? What if two authors work together on the same title? Should the ADDRESS column have its own table? When a reader borrows a book, I make an entry in the BORROWING table. After the book is returned, I delete that entry and make a new entry in the HISTORY table. Is this a good idea? Do I break any rules? Should I instead have a single BORROWING table with a DATE_OF_RETURNING column?
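
    A minimal sketch of how a many-to-many link between authors and titles could look, assuming MySQL-style DDL; the TITLE_AUTHOR junction table is hypothetical and not part of the original design:

        -- Junction table so a title can have several authors; the
        -- Author_ID column on TITLES would then no longer be needed.
        CREATE TABLE TITLE_AUTHOR (
            TITLE_ID  INT NOT NULL,
            AUTHOR_ID INT NOT NULL,
            PRIMARY KEY (TITLE_ID, AUTHOR_ID),
            FOREIGN KEY (TITLE_ID)  REFERENCES TITLES (TITLE_ID),
            FOREIGN KEY (AUTHOR_ID) REFERENCES AUTHOR (Author_ID)
        );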

    Read the article

  • Tutorials for C# database app

    - by ChrisC
    My question and comments here http://stackoverflow.com/questions/2471198/what-is-ado-net show my level of knowledge about C# database apps. Can someone point me to tutorials and/or primers that can help me proceed from where I am (also stated in the other question)? I've looked, and all I've found are tutorials that cover general database basics (i.e., they don't help with VS/C#) or talk about connecting to existing SQL databases. I need help setting one up and configuring it, as well as help creating and using queries to test the database schema.

    Read the article

  • Best XML Based Database

    - by monmonja
    I have been assigned to develop a system where we receive XML from multiple sources (millions of XML documents) and store them in some kind of database. Judging from the XML I would receive, there won't be any concrete structure, even across documents from the same source. For this reason I don't think I can suggest an RDBMS, and I am currently looking at NoSQL databases. We need a system that can do CRUD and is fast on reads. I have been looking at MarkLogic and eXist, which are both XML-based NoSQL databases. Has anyone had experience with them? Any other suggestions? Thanks

    Read the article

  • Ways to save enums in database

    - by corgrath
    Hey guys. I am wondering what the best way to save enums into a database is. I know there are the name() and valueOf() methods to turn an enum into a String and back. But are there any other (flexible) options for storing these values? Is there a smart way to map them to unique numbers (ordinal() is not safe to use)? Any comments and suggestions would be helpful :) Update: Thanks for all the awesome and fast answers! It was as I suspected. However, a note to 'toolkit': that is one way. The problem is that I would have to add the same methods to each enum type I create. That's a lot of duplicated code, and at the moment Java does not offer a solution to this (you cannot let an enum extend another class). However, thanks for all the answers!
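
    A minimal sketch of one common storage pattern, independent of the Java side: a lookup table keyed by a stable code, so the database never depends on ordinal(). Table and column names here are hypothetical:

        -- One row per enum constant, keyed by a code that stays stable
        -- even if the enum is later reordered or renamed in code.
        CREATE TABLE report_status (
            code INT         NOT NULL PRIMARY KEY,
            name VARCHAR(32) NOT NULL UNIQUE
        );

        INSERT INTO report_status (code, name) VALUES
            (1, 'PENDING'),
            (2, 'APPROVED'),
            (3, 'REJECTED');

        -- Rows that use the enum store the stable code, not the ordinal.
        CREATE TABLE report (
            id     INT NOT NULL PRIMARY KEY,
            status INT NOT NULL,
            FOREIGN KEY (status) REFERENCES report_status (code)
        );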

    Read the article

  • Database on the fly with scripting languages

    - by afilatun
    I have a set of .csv files that I want to process. It would be far easier to process them with SQL queries. I wonder if there is some way to load a .csv file and query it with SQL from a scripting language like Python or Ruby. Loading it with something similar to ActiveRecord would be awesome. The problem is that I don't want to have to run a database server prior to running my script. There shouldn't be any additional installations needed outside of the scripting language and some modules. My question is: which language and which modules should I use for this task? I looked around and can't find anything that suits my need. Is it even possible?

    Read the article

  • Export a SQL database into a CSV file and use it with WEKA

    - by Simon
    How can I export a query result from a SQL database into a .csv file? I tried SELECT * FROM players INTO OUTFILE 'players.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY ';'; and my .csv file is something like: p1,1,2,3 p2,1,4,5 But they are not in separate columns; everything ends up in one column. I tried to create a .csv file by hand just to try WEKA, something like: p1 1 2 3 p2 1 4 5 But WEKA recognizes p1 1 2 3 as a single attribute. So: how can I correctly export a table from a SQL database to a CSV file? And how can I use it with WEKA?
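
    A minimal sketch of one way the export could look, assuming MySQL and hypothetical column names; the UNION adds a header row so WEKA's CSV loader can pick up attribute names, and rows are terminated with '\n' so each record lands on its own line:

        SELECT 'name', 'stat1', 'stat2', 'stat3'
        UNION ALL
        SELECT name, stat1, stat2, stat3
        FROM players
        INTO OUTFILE '/tmp/players.csv'
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n';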

    Read the article

  • Import Paradox table in Access database: Incorrect collating sequence

    - by waanders
    Hello, I have a Paradox 5.0 database and want to migrate it to Access 2007. But when I try to import a Paradox table, Access gives an error message: "Incorrect collating sequence". The Help says: "You tried to link a Paradox table that was created with an international sort order that is not the same as the one you are using". What am I doing wrong? How can I change that sort order? Anybody? Thanks in advance.

    Read the article

  • Best way to store enum values in database - String or Int

    - by inutan
    Hello there, I have a number of enums in my application which are used as property types in some classes. What is the best way to store these values in the database, as String or Int? FYI, I will also be mapping these attribute types using Fluent NHibernate. Sample code:

        public enum ReportOutputFormat { DOCX, PDF, HTML }
        public enum ReportOutputMethod { Save, Email, SaveAndEmail }

        public class ReportRequest
        {
            // backing fields implied by the original snippet
            private Int32 templateId;
            private ReportOutputFormat outputFormat;
            private ReportOutputMethod outputMethod;

            public Int32 TemplateId
            {
                get { return templateId; }
                set { templateId = value; }
            }
            public ReportOutputFormat OutputFormat
            {
                get { return outputFormat; }
                set { outputFormat = value; }
            }
            public ReportOutputMethod OutputMethod
            {
                get { return outputMethod; }
                set { outputMethod = value; }
            }
        }

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring firewall ports appropriately. ADS lives on \server, and it's listening on port 1234. When \client tries to connect to \server\tables, I get Error 6420 (Discovery process failed). When \client tries to connect to \server:1234\tables, I get error 6097, bad IP address specified in the connection path. \server is pingable from \client, and I can telnet to \server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron.

    Read the article

  • Using Spotlight as the "database" of an application

    - by vicvicvic
    I'm developing an OS X application to organize "things" (as iTunes is to music and iPhoto to photos). Instead of having my own database and index, I'm considering using Spotlight to essentially serve this purpose. Has anyone tried this? Is it wise? The main benefit, as I see it, would be simplicity and avoiding redundancy. It seems a bit wasteful to implement my own index machinery when OS X comes with one built in. I have little experience working with Spotlight, however. From a user's perspective, I do know that it has been slow and imprecise in older versions of OS X. I also have a gut feeling that since it's aimed at searching the whole filesystem, using it for "local" purposes becomes hackish. Obviously, my application's index needs to be constantly up to date. Can mdimport be used for this?

    Read the article

  • VisualBasic.NET Database Boilerplate

    - by Shiftbit
    Are there any built-in .NET classes to assist in reducing boilerplate code? I have numerous database operations going on, and I find that I am reproducing the connection, command, transaction and occasionally data set code. I am aware of the related Java question; however, those solutions pertain to Java. I was wondering if anyone was aware of a .NET solution? http://stackoverflow.com/questions/1072925/remove-boilerplate-from-db-code

        Public Sub ReadData(ByVal connectionString As String)
            Dim queryString As String = "SELECT EmpNo, EName FROM Emp"
            Using connection As New OracleConnection(connectionString)
                Dim command As New OracleCommand(queryString, connection)
                connection.Open()
                Using reader As OracleDataReader = command.ExecuteReader()
                    ' Always call Read before accessing data.
                    While reader.Read()
                        Console.WriteLine(reader.GetInt32(0).ToString() + ", " _
                            + reader.GetString(1))
                    End While
                End Using
            End Using
        End Sub

    MSDN

    Read the article

  • Access database connection in C# using the local directory

    - by Bomboe Cristian
    I want my connection to the database to work all the time, so that if I move the project folder to another computer, the connection is made automatically. So, how can I change this connection: this.oleDbConnection1.ConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\"C:\\Documents and Settings\\Cristi\\Do" + "cuments\\Visual Studio 2008\\Projects\\WindowsApplication3\\bd1.mdb\""; ??? It should read the project directory or something. I don't know. Any ideas? Thank you!

    Read the article

  • Imitate database in C

    - by Mohit Deshpande
    I am fairly new to C. (I have good knowledge of C# [Visual Studio] and Java [Eclipse].) I want to make a program that stores information. My first instinct was to use a database like SQL Server, but I don't think that is compatible with C. So now I have two options:
    1. Create a struct (also typedef) containing the data types.
    2. Find a way to integrate SQLite through a C header file.
    Which option do you think is best? Or do you have another option? I am kind of leaning toward making a struct with a typedef, but I could be persuaded to change my mind.

    Read the article

  • Best way to access a database from Android

    - by Brandon Delany
    I am working on an Android app and I have a dilemma. I have a list of Objects, and I have to update each of these objects against a database. I have two methods:
    Method 1: Loop through the Objects; for each object, connect to the server, update it, and then move on to the next Object, and so forth.
    Method 2: Store the Objects in a list, send the whole list to the server, update it on the server side, then return a list of updated objects.
    My questions are: Which method is faster? Which method is easier on the phone's battery? By the way, Method 1 is easier for me to code :). Thank you.

    Read the article

  • Graphical database monitoring tool for debugging

    - by salle55
    I would love a tool that shows, in real time and in a graphical way, changes in a set of predefined tables: for example, different colors on fields that have changed value, added records, deleted records, etc. I don't want a list of all transactions (like SQL Server Profiler), but rather a clever, more visual approach where you can get a good overview when you are only monitoring a few tables. I realize the visualization would be hard if there are a lot of transactions against the database, but with monitoring limited to a few tables and a single session during debugging it would be possible. Does something like this exist? I think it would be great for debugging! Preferably for SQL Server and/or MySQL.

    Read the article

  • Database query optimization

    - by hdx
    Ok my giant friends, once again I seek a little space on your shoulders :P Here is the issue: I have a Python script that is fixing some database issues, but it is taking way too long. The main update statement is this: cursor.execute("UPDATE jiveuser SET username = '%s' WHERE userid = %d" % (newName,userId)) That is getting called about 9500 times with different newName and userId pairs... Any suggestions on how to speed up the process? Maybe a way to do all the updates with just one query? Any help will be much appreciated! PS: Postgres is the db being used.
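
    A minimal sketch of the single-statement approach, assuming PostgreSQL; the literal pairs below are placeholders for the ~9500 real (userid, new name) pairs:

        UPDATE jiveuser
           SET username = data.new_name
          FROM (VALUES
                   (101, 'alice_new'),
                   (102, 'bob_new')
                   -- ... one row per (userid, new name) pair
               ) AS data (userid, new_name)
         WHERE jiveuser.userid = data.userid;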

    Read the article

  • Three level database - foreign keys

    - by poke
    I have a three-level database with the following structure (simplified to only show the primary keys):
    Table A: a_id
    Table B: a_id, b_id
    Table C: a_id, b_id, c_id
    So possible values for table C would be something like this:
        a_id  b_id  c_id
           1     1     1
           1     1     2
           1     1     3
           1     2     1
           1     2     2
           2     1     1
           2     2     1
           2     2     2
         ...
    I am now unsure how the foreign keys should be set, or if they should be set on the primary key columns at all. My idea was to have a foreign key on table B, B.a_id -> A.a_id, and two foreign keys on C: C.a_id -> A.a_id and ( C.a_id, C.b_id ) -> ( B.a_id, B.b_id ). Is that the way I should set up the foreign keys? Is the foreign key from C->A necessary? Or do I even need foreign keys at all, given that all those columns are part of the primary keys? Thanks.
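
    A minimal DDL sketch of that layout (generic SQL, hypothetical INT columns), with the composite key of C referencing B, which already guarantees a valid a_id:

        CREATE TABLE A (
            a_id INT NOT NULL,
            PRIMARY KEY (a_id)
        );

        CREATE TABLE B (
            a_id INT NOT NULL,
            b_id INT NOT NULL,
            PRIMARY KEY (a_id, b_id),
            FOREIGN KEY (a_id) REFERENCES A (a_id)
        );

        CREATE TABLE C (
            a_id INT NOT NULL,
            b_id INT NOT NULL,
            c_id INT NOT NULL,
            PRIMARY KEY (a_id, b_id, c_id),
            -- the composite reference to B already implies a valid a_id,
            -- so a separate foreign key from C to A would be redundant
            FOREIGN KEY (a_id, b_id) REFERENCES B (a_id, b_id)
        );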

    Read the article

  • Php PEAR, Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction and factory methods. For background, my site is a special-interest social networking community currently in beta. Currently, I've started moving my old code for object retrieval to factory methods. However, I do feel like I'm limiting myself by keeping a lot of SQL table names and structure separated in each function/method. Questions:
    1.) Is there a reason to use PEAR (or similar) if I don't anticipate switching databases?
    2.) Can PEAR interface with the MySQLi prepared statements I currently use?
    3.) Will it help me separate table names from each method? (If not, what other design patterns might I want to research?)
    4.) Will it slow down my site once I have a significantly large member base?

    Read the article

  • How can I alter a temp table?

    - by William
    I need to create a temp table, then add a new int NOT NULL AUTO_INCREMENT field to it so I can use the new field as a row number. What's wrong with my query? SELECT post, newid FROM ((SELECT post`test_posts`) temp ALTER TABLE temp ADD COLUMN newid int NOT NULL AUTO_INCREMENT)
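
    A minimal sketch of how the same result could be reached in MySQL, assuming that is the dialect in use; ALTER TABLE cannot be nested inside a SELECT, so the temporary table is built and altered in separate statements:

        CREATE TEMPORARY TABLE temp AS
            SELECT post FROM `test_posts`;

        -- AUTO_INCREMENT columns must be keyed, so the column and the
        -- key are added together; existing rows get numbered 1..N.
        ALTER TABLE temp
            ADD COLUMN newid INT NOT NULL AUTO_INCREMENT,
            ADD PRIMARY KEY (newid);

        SELECT post, newid FROM temp;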

    Read the article

  • Oracle Stored Procedure with Alter command

    - by Will
    Hello, I am trying to build an Oracle stored procedure which will accept a table name as a parameter. The procedure will then rebuild all indexes on the table. My problem is that I get an error when using the ALTER command from a stored procedure, as if PL/SQL does not allow that command.
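
    A minimal PL/SQL sketch of one way around this: static DDL is not allowed inside PL/SQL, but the ALTER can be issued through dynamic SQL. The procedure and variable names here are hypothetical:

        CREATE OR REPLACE PROCEDURE rebuild_indexes (p_table_name IN VARCHAR2) AS
        BEGIN
            -- loop over the table's indexes and rebuild each one dynamically
            FOR idx IN (SELECT index_name
                          FROM user_indexes
                         WHERE table_name = UPPER(p_table_name)) LOOP
                EXECUTE IMMEDIATE 'ALTER INDEX ' || idx.index_name || ' REBUILD';
            END LOOP;
        END;
        /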

    Read the article

  • Swap unique indexed column values in database.

    - by Ramesh Soni
    I have a database table, and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values in this column for two rows. How could this be done? Two hacks I know of are:
    1. Delete both rows and re-insert them.
    2. Update the rows with some other value, swap them, and then update to the actual values.
    But I don't want to go for these, as they do not seem to be the appropriate solution to the problem. Could anyone help me out?
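
    A minimal sketch of another route, assuming PostgreSQL and hypothetical table/column names: if the unique constraint is made DEFERRABLE, both rows can be swapped in a single statement and uniqueness is only checked at commit (the existing non-deferrable index would need to be replaced by this constraint):

        ALTER TABLE items
            ADD CONSTRAINT items_code_unique UNIQUE (code)
            DEFERRABLE INITIALLY DEFERRED;

        BEGIN;
        -- self-join on the two rows: each one takes the other's value
        UPDATE items AS a
           SET code = b.code
          FROM items AS b
         WHERE a.id IN (1, 2)
           AND b.id IN (1, 2)
           AND a.id <> b.id;
        COMMIT;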

    Read the article

  • C# - pull records from database without timeout

    - by BhejaFry
    Hi folks, I have a SQL query with multiple joins that pulls data from a database for processing. It is supposed to run on a scheduled basis. So day 1 it might pull 500 records, day 2 say 400. Now, if the service is stopped for some reason and the data is not processed, then on day 3 there could be as many as 1000 records to process. This is causing a timeout on the SQL query. What is the best way to handle this situation without causing a timeout, while gradually working through the backlog? TIA
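
    A minimal sketch of one way to work through the backlog, assuming MySQL-style syntax and a hypothetical processed flag on the source table: pull fixed-size batches instead of the whole backlog in one long-running query:

        -- fetch one batch of unprocessed rows, joined as needed
        SELECT r.*
          FROM records AS r
          JOIN related AS o ON o.record_id = r.id
         WHERE r.processed = 0
         ORDER BY r.id
         LIMIT 500;

        -- after the service has handled the batch, mark it done
        UPDATE records
           SET processed = 1
         WHERE id IN (/* ids from the batch just processed */);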

    Read the article

  • MySQL 5.1 / phpMyAdmin - logging CREATE/ALTER statements

    - by pako
    Is it possible to log CREATE / ALTER statements issued on a MySQL server through phpMyAdmin? I heard that it could be done with a trigger, but I can't seem to find suitable code anywhere. I would like to log these statements to a table, preferably with the timestamp of when they were issued. Can someone provide me with a sample trigger that would enable me to accomplish this? I would like to log these statements so I can easily synchronize the changes with another MySQL server.

    Read the article

  • Major performance difference between two Oracle database instances

    - by jrdioko
    I am working with two instances of an Oracle database, call them one and two. two is running on better hardware (hard disk, memory, CPU) than one, and two is one minor version behind one in terms of Oracle version (both are 11g). Both have the exact same table table_name with exactly the same indexes defined. I load 500,000 identical rows into table_name on both instances. I then run, on both instances: delete from table_name; This command takes 30 seconds to complete on one and 40 minutes to complete on two. Doing INSERTs and UPDATEs on the two tables has similar performance differences. Does anyone have any suggestions on what could have such a drastic impact on performance between the two databases?

    Read the article
