Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • C# reading integer fields from database returns an empty string for an integer-type field

    - by arnoldino
    What is wrong with this code? It is called with field = "id", table = "MInvMonth", condition = "machine_id=37".

        public static String getConditionedField(String field, String table, String condition)
        {
            try
            {
                if (cmd == null) getConnection();
                cmd.CommandText = "Select " + field + " from " + table + " where " + condition;
                SQLiteDataReader reader = cmd.ExecuteReader();
                if (reader.HasRows)
                {
                    reader.Read();
                    string s = reader[0].ToString(); // return first element
                    reader.Close();
                    return s;
                }
                reader.Close();
                return null;
            }
            catch (Exception e)
            {
                MessageBox.Show("Caught exception: " + e.Message + "|" + cmd.CommandText);
                return null;
            }
        }

    I checked the SQL statement and it returns the right value, so why can't I read it here? The return value is "".

    Read the article

  • What relational database innovations have there been in the last 10 years

    - by Simon Munro
    The SQL implementation of relational databases has been around in its current form for something like 25 years (since System R and Ingres). Even the main (loosely adhered to) standard, ANSI-92 (although there were later updates), is a good 15 years old. What innovations can you think of in SQL-based databases in the last ten years or so? I am specifically excluding OLAP, columnar and other non-relational (or at least non-SQL) innovations. I also want to exclude 'application server' type features and bundling (like reporting tools). Although the basic approach has remained fairly static, I can think of: availability; the ability to handle larger sets of data; ease of maintenance and configuration; support for more advanced data types (blob, XML, Unicode, etc.). Any others that you can think of?

    Read the article

  • browse data in Android SQLite Database

    - by Justin C
    Is there a way for an Android user to browse the SQLite databases on his/her phone and view the data in the databases? I use the SoftTrace beta program a lot. It's great but has no way that I can find to download the data it tracks to a PC. Thanks

    Read the article

  • Source Controlled Database Data import strategies.

    - by H. Abraham Chavez
    So I've taken on a project and got the DB team sold on source control for the database (weird, right?). Anyway, the database already exists, it is massive, and the application is very dependent on the data. The developers need up to three different flavors of the data to work against when writing SPROCs and so on. Obviously I could script out data inserts, but my question is: what tools or strategies do you use to build a database from source control and populate it with multiple large sets of data?
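
    As a rough sketch of what a scripted, re-runnable data insert in source control can look like (the table and column names here are assumptions, not from the question, and the SELECT-without-FROM form needs slight adjustment on some engines):

        -- Re-runnable reference-data insert: safe to apply to any environment.
        INSERT INTO customer_type (customer_type_id, name)
        SELECT 1, 'Retail'
        WHERE NOT EXISTS (
            SELECT 1 FROM customer_type WHERE customer_type_id = 1
        );

    One common strategy is to keep a numbered folder of such scripts per data "flavor" and replay them with a migration tool after the schema has been built from source control.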

    Read the article

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as: sex (true/false); demographic classification (A, B, C, etc.); place of birth; date of birth; weight (recorded daily) - the fact that is being recorded. My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data and, generally, view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like some help with the design of these tables. 'Natural' (or obvious) dimensions are the date dimension and geographical location, which have hierarchical attributes. However, I am struggling with how to model the following fields: sex (true/false) and demographic classification (A, B, C, etc.). The reason I am struggling with these fields is that: they have no obvious hierarchical attributes which will aid aggregation (AFAIA), which suggests they should be in a fact table; they are mostly static or change very rarely, which suggests they should be in a dimension table. Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification - e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the most increase in weight this quarter? Etc. Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema: CREATE TABLE sex_type (is_male int); CREATE TABLE demographic_category (id int, name varchar(4)); may not be the correct one.
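
    A rough sketch of the fact/dimension split being discussed, with sex and demographic classification modelled as small dimension tables and the fact table holding only keys plus the measure (all names and types are illustrative assumptions):

        CREATE TABLE dim_sex (
            sex_key      INT PRIMARY KEY,
            description  VARCHAR(10)        -- 'Male' / 'Female'
        );

        CREATE TABLE dim_demographic (
            demographic_key  INT PRIMARY KEY,
            classification   VARCHAR(4)     -- 'A', 'B', 'C', ...
        );

        CREATE TABLE fact_weight (
            date_key         INT NOT NULL,  -- key into a dim_date table
            location_key     INT NOT NULL,  -- key into a dim_location table
            sex_key          INT NOT NULL,  -- key into dim_sex
            demographic_key  INT NOT NULL,  -- key into dim_demographic
            weight_kg        DECIMAL(6,2) NOT NULL
        );

        -- Example "slice": average weight by sex and demographic classification.
        SELECT s.description, d.classification, AVG(f.weight_kg) AS avg_weight
        FROM fact_weight f
        JOIN dim_sex s ON s.sex_key = f.sex_key
        JOIN dim_demographic d ON d.demographic_key = f.demographic_key
        GROUP BY s.description, d.classification;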

    Read the article

  • Auto Reconnect of Database Connection

    - by wesley
    I have a DBCP connection pool in Tomcat. The problem is that when the connection is lost briefly, the application stays broken, because DBCP won't try to reconnect later when a connection is available again. Can I get DBCP to reconnect automatically?

    Read the article

  • Database best practices

    - by user270797
    I have a table which stores comments. A comment can come either from another user or from another profile, which are separate entities in this app. My original thinking was that the table would have both user_id and profile_id fields, so if a user submits a comment, it fills in the user_id and leaves the profile_id blank. Is this right or wrong, and is there a better way?
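
    A sketch of the two-nullable-foreign-key layout described above, with a CHECK constraint to guarantee exactly one author is set (table and column names are assumptions; the foreign keys are noted as comments, and older MySQL versions ignore CHECK constraints):

        CREATE TABLE comment (
            comment_id  INT PRIMARY KEY,
            user_id     INT NULL,      -- FK to the user table when a user comments
            profile_id  INT NULL,      -- FK to the profile table when a profile comments
            body        TEXT NOT NULL,
            -- exactly one of the two author columns must be filled in
            CHECK (
                (user_id IS NULL AND profile_id IS NOT NULL) OR
                (user_id IS NOT NULL AND profile_id IS NULL)
            )
        );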

    Read the article

  • mysql database normalization question

    - by Chocho
    Here are my 3 tables: table 1 stores user information and has unique data; table 2 stores place categories such as Toronto, NY, London, etc., hence this is also unique; table 3 has duplicate information - it stores all the places a user has been. The 3 tables are linked or joined by these ids: table 1 has a "table1_id"; table 2 has a "table2_id" and "place_name"; table 3 has a "table3_id", "table1_id" and "place_name". I have an interface where an admin sees all users. Beside each user is an "edit" button. Clicking on that edit button allows you to edit a specific user in a form which has a multiple-select drop-down box for "places". If an admin edits a user and adds 1 "place" for the user, I insert that information using PHP. If the admin decides to deselect that 1 "place", do I delete it or mark it as on and off? How about if the admin decides to select 2 "places" for the user: change the first "place" and add an additional "place"? Will my table just keep growing and will I have redundant information? Thanks.
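
    A sketch of the link table described as "table 3", using the place's id rather than its name as the link and with a composite primary key so re-adding the same place cannot create duplicates (names are assumptions; MySQL-style types):

        CREATE TABLE user_place (
            user_id    INT NOT NULL,                 -- table1_id
            place_id   INT NOT NULL,                 -- table2_id
            is_active  TINYINT NOT NULL DEFAULT 1,   -- only needed if history matters
            PRIMARY KEY (user_id, place_id)
        );

        -- Deselecting a place: either remove the row...
        DELETE FROM user_place WHERE user_id = 5 AND place_id = 2;
        -- ...or keep it and flag it off if an audit trail is wanted.
        UPDATE user_place SET is_active = 0 WHERE user_id = 5 AND place_id = 2;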

    Read the article

  • VS2008 adding SQL Server Database (SQL Server 2008 Mgmt Studio) not working

    - by Kahn
    I'm trying to practice using ASP.NET MVC at home, but I ran into an impossible problem. I cannot open a connection to SQL Server 2008; I get this error: "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. ..." I've googled around for numerous responses, none of them either working or addressing this issue. I'm running Vista 32-bit, my SQL Server 2008 Mgmt Studio is also 32-bit, and I have SP1 installed both on VS2008 Professional and on the SQL Server. I changed the machine.config connectionStrings from ./SQLExpress to my SQL Server 2008 name. Now if I connect manually through web.config, in an asp:datasource or code-behind, everything works fine. But for some reason trying to add a DB connection directly like this always gets the error. This is pretty fatal, since I can't rightly do much unless I can use LINQ to SQL with my MVC test project, and this is the only way I know how. It worked fine at school and at work, but not at home. Installing SQL Server Express 2005, as some have suggested, is not an option. Obviously it HAS to work with SQL Server 2008. Thanks in advance.

    Read the article

  • Reverse engineer an ORM

    - by Oren Mazor
    Given a [MySQL] database with a given schema, is it possible to generate an ORM interface for it? It doesn't matter if it's PHP, Python or Perl. In other words, I have a database and I have to ask it a few questions. While I could just craft a bunch of SQL queries (okay, several dozen), this is way more interesting to think about. Is this even a valid question? I have no design background with ORMs, but I've used a few (Django's, Symfony's, etc.).
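
    Generators of this sort typically start from the database's own metadata. A sketch of the kind of MySQL information_schema query such a tool reads before emitting model classes (the schema name 'my_app' is an assumption):

        SELECT table_name, column_name, data_type, is_nullable, column_key
        FROM information_schema.columns
        WHERE table_schema = 'my_app'
        ORDER BY table_name, ordinal_position;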

    Read the article

  • Getting-started: Setup Database for Node.js

    - by Emile Petrone
    I am new to node.js but am excited to try it out. I am using Express as a web framework and Jade as a template engine. Both were easy to get set up following this tutorial from Node Camp. However, the one problem I am finding is that I can't find a simple tutorial for getting a DB set up. I am trying to build a basic chat application (storing sessions and messages). Does anyone know of a good tutorial? This other SO post talks about DBs to use, but as this is very different from the Django/MySQL world I've been in, I want to make sure I understand what is going on. Thanks!
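
    Whatever driver ends up being used from Node, a minimal relational sketch of the two things mentioned (sessions and messages) could look like this; every name and type here is an assumption:

        CREATE TABLE session (
            session_id  VARCHAR(64) PRIMARY KEY,
            user_name   VARCHAR(50) NOT NULL,
            started_at  TIMESTAMP NOT NULL
        );

        CREATE TABLE message (
            message_id  INT PRIMARY KEY,
            session_id  VARCHAR(64) NOT NULL REFERENCES session (session_id),
            body        TEXT NOT NULL,
            sent_at     TIMESTAMP NOT NULL
        );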

    Read the article

  • String Field Sizes for unicode database fields using different data access components

    - by Serg
    mjustin, in his question 1 and question 2, says that the TWideStringField.Size property for UTF8 fields in Delphi 2009 dbExpress is 4 times larger than the logical field size (the maximum number of characters in the field). I am inclined to consider this a dbExpress bug. This is what the Delphi 2009 Help says: "The interpretation of Size depends on the data type. The meaning of Size for data types that use it is given in the following table. For all other data types, Size is not used and its value is always 0. ftString - Size is the maximum number of characters in the string." I am using FIBPlus 6.9.9 and it follows the above documentation - the string field size is the maximum number of characters, not bytes. So the question also implies a further question: are dbExpress drivers in Delphi 2009 unusable for Unicode databases?

    Read the article

  • Database Design Composite Keys

    - by guazz
    I am going to use a contrived example: one headquarter has one or many contacts, and a contact can only belong to one headquarter.

        TableName = Headquarter
          Column 0 = Id : Guid [PK]
          Column 1 = Name : nvarchar(100)
          Column 2 = IsAnotherAttribute : bool

        TableName = ContactInformation
          Column 0 = Id : Guid [PK]
          Column 1 = HeadquarterId : Guid [FK]
          Column 2 = AddressLine1
          Column 3 = AddressLine2
          Column 4 = AddressLine3

    I would like some help setting the table primary keys and foreign keys here. How does the above look? Should I use a composite key for ContactInformation on [Column 0 and Column 1]? Is it OK to use a surrogate key all of the time?
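
    One sketch of that layout in DDL, keeping the surrogate Id as the sole primary key of ContactInformation and HeadquarterId as an ordinary foreign key (SQL Server-ish types, offered as an assumption rather than the definitive answer):

        CREATE TABLE Headquarter (
            Id                 UNIQUEIDENTIFIER PRIMARY KEY,
            Name               NVARCHAR(100) NOT NULL,
            IsAnotherAttribute BIT NOT NULL
        );

        CREATE TABLE ContactInformation (
            Id             UNIQUEIDENTIFIER PRIMARY KEY,
            HeadquarterId  UNIQUEIDENTIFIER NOT NULL
                REFERENCES Headquarter (Id),
            AddressLine1   NVARCHAR(100),
            AddressLine2   NVARCHAR(100),
            AddressLine3   NVARCHAR(100)
        );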

    Read the article

  • Bus Timetable database design

    - by paddydub
    Hi, I'm trying to design a DB to store the timetables for 300 different bus routes. Each route has a different number of stops and different times for Monday-Friday, Saturday and Sunday. I've represented the bus departure times for each route as follows. I'm not sure if I should have null values in the table - does this look OK?

        route, Num, Day,     t1,    t2,    t3,    t4,    t5,    t6,    t7,    t8,    t9,    t10
        117,   1,   Monday,  9:00,  9:30,  10:50, 12:00, 14:00, 18:00, 19:00, null,  null,  null
        117,   2,   Monday,  9:03,  9:33,  10:53, 12:03, 14:03, 18:03, 19:03, null,  null,  null
        117,   3,   Monday,  9:06,  9:36,  10:56, 12:06, 14:06, 18:06, 19:06, null,  null,  null
        117,   4,   Monday,  9:09,  9:39,  10:59, 12:09, 14:09, 18:09, 19:09, null,  null,  null
        . . .
        117,   20,  Monday,  9:39,  10:09, 11:39, 12:39, 14:39, 18:39, 19:39, null,  null,  null
        119,   1,   Monday,  9:00,  9:30,  10:50, 12:00, 14:00, 18:00, 19:00, 20:00, 21:00, 22:00
        119,   2,   Monday,  9:03,  9:33,  10:53, 12:03, 14:03, 18:03, 19:03, 20:03, 21:03, 22:03
        119,   3,   Monday,  9:06,  9:36,  10:56, 12:06, 14:06, 18:06, 19:06, 20:06, 21:06, 22:06
        119,   4,   Monday,  9:09,  9:39,  10:59, 12:09, 14:09, 18:09, 19:09, 20:09, 21:09, 22:09
        . . .
        119,   37,  Monday,  9:49,  9:59,  11:59, 12:59, 14:59, 18:59, 19:59, 20:59, 21:59, 22:59
        139,   1,   Sunday,  9:00,  9:30,  20:00, 21:00, 22:00, null,  null,  null,  null,  null
        139,   2,   Sunday,  9:03,  9:33,  20:03, 21:03, 22:03, null,  null,  null,  null,  null
        139,   3,   Sunday,  9:06,  9:36,  20:06, 21:06, 22:06, null,  null,  null,  null,  null
        139,   4,   Sunday,  9:09,  9:39,  20:09, 21:09, 22:09, null,  null,  null,  null,  null
        . . .
        139,   20,  Sunday,  9:49,  9:59,  20:59, 21:59, 22:59, null,  null,  null,  null,  null
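
    A sketch of the normalized alternative to the fixed t1..t10 columns, with one row per departure, so any route can have any number of stops and times without null padding (names and types are assumptions):

        CREATE TABLE route_departure (
            route_no        INT NOT NULL,          -- e.g. 117
            stop_sequence   INT NOT NULL,          -- the "Num" column
            day_type        VARCHAR(10) NOT NULL,  -- e.g. 'Monday', 'Saturday', 'Sunday'
            departure_time  TIME NOT NULL,
            PRIMARY KEY (route_no, stop_sequence, day_type, departure_time)
        );

        -- One wide row such as "117, 1, Monday, 9:00, 9:30, ..." becomes
        -- several rows here, one per departure time.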

    Read the article

  • close fails on database connections (managed connection cleanup fails) in WebSphere 7 but not in WebSphere 6.1

    - by mete
    I have a simple method (used in a web application through servlets) that gets a connection from a JNDI name and issues a select statement (get connection, issue select, return result, close the connection, etc., in finally). Due to other methods in the application, the connection is set to autocommit=false. This method works normally in WebSphere 6.1 as well as in GlassFish and WebLogic. However, in WebSphere 7 it receives a "cleanup failed" error when I close the connection because, it says, the connection is still in a transaction. Because I was not updating anything, I did not commit or roll back the connection in this method (which may be wrong). If I add a commit before closing the connection, it works. My question is: why does it work in WebSphere 6.1 (and other containers) and not in WebSphere 7? What can be the cause of this difference?

    Read the article

  • Does a version control database storage engine exist?

    - by Zak
    I was just wondering if a storage engine type exists that allows you to do version control on row-level contents. For instance, if I have a simple table with ID, name, value, and ID is the PK, I could see that row 354 started as (354, "zak", "test")v1, then was updated to be (354, "zak", "this is version 2 of the value")v2, and could see a change history on the row with something like select history (value) where ID = 354. It's kind of an esoteric thing, but it would beat having to keep writing these separate history tables and functions every time a change is made...
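
    For comparison, a sketch of the hand-rolled history table the poster would rather not keep writing, plus the query that plays the role of "select history(value) where ID = 354" (names and types are assumptions):

        CREATE TABLE item_history (
            id          INT NOT NULL,           -- same key as the live row
            name        VARCHAR(50),
            value       VARCHAR(255),
            version_no  INT NOT NULL,
            changed_at  TIMESTAMP NOT NULL,
            PRIMARY KEY (id, version_no)
        );

        SELECT version_no, value
        FROM item_history
        WHERE id = 354
        ORDER BY version_no;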

    Read the article

  • Why would you want a case sensitive database?

    - by Khorkrak
    What are some reasons for choosing a case-sensitive collation over a case-insensitive one? I can see perhaps a modest performance gain for the DB engine in doing string comparisons. Is that it? If your data is all lower case or all upper case then case-sensitive could be reasonable, but it's a disaster if you store mixed-case data and then try to query it. You then have to apply a lower() function on the column so that it'll match the corresponding lower-case string literal, which prevents index usage in every DBMS. So I am wondering why anyone would use such an option.

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring firewall ports appropriately. ADS lives on \server, and it's listening on port 1234. When \client tries to connect to \server\tables, I get Error 6420 (Discovery process failed). When \client tries to connect to \server:1234\tables, I get error 6097, bad IP address specified in the connection path. \server is pingable from \client, and I can telnet to \server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron.

    Read the article

  • mysql partitioning

    - by Yang
    I just want to verify that database partitioning is implemented only at the database level: when we query a partitioned table, we still write our normal query, nothing special, and the optimization is performed automatically when the query is parsed. Is that correct? E.g. we have a table called 'address' with columns 'country_code' and 'city', so if I want to get all the addresses in New York, US, normally I would do something like this: select * from address where country_code = 'US' and city = 'New York'. If the table is now partitioned by 'country_code', I know that the query will only be executed on the partition which contains country_code = 'US'. My question is: do I need to explicitly specify the partition to query in my SQL statement, or do I still use the previous statement and the DB server will optimize it automatically? Thanks in advance!
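
    A sketch of what the partitioning side might look like in MySQL; the query itself stays exactly as written and the server is expected to prune to the matching partition (the partition names and the LIST COLUMNS choice are assumptions):

        ALTER TABLE address
        PARTITION BY LIST COLUMNS (country_code) (
            PARTITION p_us   VALUES IN ('US'),
            PARTITION p_ca   VALUES IN ('CA'),
            PARTITION p_rest VALUES IN ('GB', 'FR', 'DE')
        );
        -- Note: MySQL requires the partitioning column to be part of every
        -- unique key (including the primary key) on the table.

        -- Unchanged query; EXPLAIN PARTITIONS can be used to confirm
        -- that only the p_us partition is scanned.
        SELECT * FROM address WHERE country_code = 'US' AND city = 'New York';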

    Read the article

  • Tag, comment, rating, etc. database design

    - by Efe Kaptan
    I want to implement modules such as comments, ratings, tags, etc. for my entities. What I thought was:

        comments_table   - comment_id, comment_text
        entity1          - entity1_id, entity1_text
        entity2          - entity2_id, entity2_text
        entity1_comments - entity1_id, comment_id
        entity2_comments - entity2_id, comment_id
        ....

    Is this approach correct?
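
    A DDL sketch of the per-entity link tables listed above, which avoids a polymorphic "entity_type" column at the cost of one extra table per entity; entity2 would follow the same pattern as entity1 (column types are assumptions):

        CREATE TABLE comments_table (
            comment_id    INT PRIMARY KEY,
            comment_text  TEXT NOT NULL
        );

        CREATE TABLE entity1 (
            entity1_id    INT PRIMARY KEY,
            entity1_text  TEXT
        );

        CREATE TABLE entity1_comments (
            entity1_id  INT NOT NULL REFERENCES entity1 (entity1_id),
            comment_id  INT NOT NULL REFERENCES comments_table (comment_id),
            PRIMARY KEY (entity1_id, comment_id)
        );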

    Read the article

  • Facebook database design?

    - by Marin
    I have always wondered how Facebook designed the friend <- user relation. I figure the user table is something like this: user_email [PK], user_id [PK], password. I figure the table with the user's data (sex, age, etc.) is connected via the user email, I would assume. How does it connect all the friends to this user? Something like this: user_id, friend_id_1, friend_id_2, friend_id_3, ..., friend_id_N? Probably not, because the number of users is unknown and will expand.
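
    Rather than a row that grows with friend_id_1..friend_id_N columns, the usual relational shape is a separate friendship table with one row per friend link; a sketch with assumed names (the FK targets are noted as comments):

        CREATE TABLE friendship (
            user_id     INT NOT NULL,        -- FK to the user table
            friend_id   INT NOT NULL,        -- FK to the user table
            created_at  TIMESTAMP NOT NULL,
            PRIMARY KEY (user_id, friend_id)
        );

        -- All friends of user 42:
        SELECT friend_id FROM friendship WHERE user_id = 42;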

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of sales-persons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and sales-persons are free to choose from those combinations to estimate the sales price. The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of SQL queries I shall need. This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, ParentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table. Is there any way to break up this particular table to achieve granularity?
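
    A sketch of the DailySales grain described above - one row per product, sales-person, outlet, price type and day - with reports built by grouping on the date column; the Quantity column and all types are assumptions, and the foreign keys refer to the tables in the proposed design:

        CREATE TABLE DailySales (
            ID             BIGINT PRIMARY KEY,
            SalesDate      DATE NOT NULL,        -- the "Date" column from the design
            ProductID      INT NOT NULL,         -- FK to Product
            SalesPersonID  INT NOT NULL,         -- FK to SalesPerson
            OutletID       INT NOT NULL,         -- FK to Outlet
            PriceID        INT NOT NULL,         -- FK to ProductXYZPriceHistory
            Quantity       INT NOT NULL,         -- assumed measure
            SalesPrice     DECIMAL(12,2) NOT NULL,
            Discount       DECIMAL(12,2) NOT NULL DEFAULT 0
        );

        -- Monthly report example: sales value per product for one month.
        SELECT ProductID, SUM(Quantity * SalesPrice - Discount) AS total_sales
        FROM DailySales
        WHERE SalesDate >= '2010-01-01' AND SalesDate < '2010-02-01'
        GROUP BY ProductID;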

    Read the article
