Search Results

Search found 30270 results on 1211 pages for 'database diagramming'.


  • Access Database connect C# local directory

    - by Bomboe Cristian
    I want my connection to the database to be available all the time, so that if I move the project folder to another computer the connection is made automatically. How can I change this connection string so that it reads the project directory (or something similar) instead of a hard-coded path?

        this.oleDbConnection1.ConnectionString =
            "Provider=Microsoft.Jet.OLEDB.4.0;" +
            "Data Source=\"C:\\Documents and Settings\\Cristi\\Documents\\Visual Studio 2008\\Projects\\WindowsApplication3\\bd1.mdb\"";

    Any ideas? Thank you!

    Read the article

  • Imitate database in C

    - by Mohit Deshpande
    I am fairly new to C (I have good knowledge of C# [Visual Studio] and Java [Eclipse]). I want to make a program that stores information. My first instinct was to use a database like SQL Server, but I don't think that it is compatible with C. So now I have two options:

    1. Create a struct (plus a typedef) containing the data types.
    2. Find a way to integrate SQLite through its C header file.

    Which option do you think is best? Or do you have another option? I am leaning toward the struct with a typedef, but I could be persuaded to change my mind.

    Read the article

  • Best way to access database from android

    - by Brandon Delany
    I am working on an Android app and I have a dilemma. I have a list of Objects, and I have to update each of these Objects against a database. I see two methods:

    Method 1: Loop through the Objects; for each one, connect to the server, update it, then move on to the next Object, and so forth.

    Method 2: Send the whole list of Objects to the server, update them on the server side, then return a list of updated Objects.

    My questions are: Which method is faster? Which method is easier on the phone's battery? By the way, Method 1 is easier for me to code :). Thank you.
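
    If Method 2 is chosen, the server-side update can usually be collapsed into a single bulk statement rather than one query per Object, which is most of the speed and battery argument; a MySQL-style sketch, with a hypothetical items table and columns:

        INSERT INTO items (item_id, name, quantity)
        VALUES  (1, 'foo', 3),
                (2, 'bar', 7)
        ON DUPLICATE KEY UPDATE
            name     = VALUES(name),
            quantity = VALUES(quantity);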

    Read the article

  • Graphical database monitoring tool for debugging

    - by salle55
    I would love a tool that shows, in real time, changes in a set of predefined tables in a graphical way: for example, different colors on fields that have changed value, added records, deleted records, etc. I don't want a list of all transactions (like SQL Server Profiler); instead, a clever, more visual approach where you can get a good overview when you are only monitoring a few tables. I realize the visualization would be hard if there are a lot of transactions against the database, but with monitoring limited to a few tables and a single session during debugging it should be possible. Does something like this exist? I think it would be great for debugging! Preferably for SQL Server and/or MySQL.

    Read the article

  • Database query optimization

    - by hdx
    OK, my giant friends, once again I seek a little space on your shoulders :P Here is the issue: I have a Python script that is fixing some database issues, but it is taking way too long. The main update statement is:

        cursor.execute("UPDATE jiveuser SET username = '%s' WHERE userid = %d" % (newName, userId))

    That gets called about 9500 times with different newName and userId pairs. Any suggestions on how to speed up the process? Maybe a way to do all the updates with just one query? Any help will be much appreciated! PS: Postgres is the db being used.
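
    One common PostgreSQL way to fold those ~9500 statements into a single round trip is UPDATE ... FROM a VALUES list; a minimal sketch (the id/name pairs below are placeholders for the real data):

        UPDATE jiveuser AS u
        SET    username = v.new_name
        FROM   (VALUES
                    (101, 'new_name_one'),
                    (102, 'new_name_two')
               ) AS v(user_id, new_name)
        WHERE  u.userid = v.user_id;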

    Read the article

  • PHP PEAR, Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction and factory methods. For background, my site is a special-interest social networking community currently in beta mode. I've started moving my old code for object retrieval to factory methods, but I feel I'm limiting myself by keeping a lot of SQL table names and structure repeated in each function/method. Questions:

    1. Is there a reason to use PEAR (or similar) if I don't anticipate switching databases?
    2. Can PEAR interface with the MySQLi prepared statements I currently use?
    3. Will it help me separate table names from each method? (If not, what other design patterns might I want to research?)
    4. Will it slow down my site once I have a significantly large member base?

    Read the article

  • Three level database - foreign keys

    - by poke
    I have a three-level database with the following structure (simplified to show only the primary keys):

    Table A: a_id
    Table B: a_id, b_id
    Table C: a_id, b_id, c_id

    So possible values for table C would be something like this:

        a_id  b_id  c_id
        1     1     1
        1     1     2
        1     1     3
        1     2     1
        1     2     2
        2     1     1
        2     2     1
        2     2     2
        ...

    I am now unsure how the foreign keys should be set, or whether they should be set for the primary keys at all. My idea was to have one foreign key on table B, B.a_id -> A.a_id, and two foreign keys on C: C.a_id -> A.a_id and (C.a_id, C.b_id) -> (B.a_id, B.b_id). Is that the way I should set up the foreign keys? Is the foreign key from C to A necessary? Or do I even need foreign keys at all, given that all those columns are part of the primary keys? Thanks.
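
    For reference, one way the schema described above could be declared (generic SQL); note that with the composite foreign key from C to B, a separate C.a_id -> A.a_id constraint is redundant, since B already guarantees that its a_id exists in A:

        CREATE TABLE a (
            a_id INT PRIMARY KEY
        );

        CREATE TABLE b (
            a_id INT NOT NULL REFERENCES a (a_id),
            b_id INT NOT NULL,
            PRIMARY KEY (a_id, b_id)
        );

        CREATE TABLE c (
            a_id INT NOT NULL,
            b_id INT NOT NULL,
            c_id INT NOT NULL,
            PRIMARY KEY (a_id, b_id, c_id),
            FOREIGN KEY (a_id, b_id) REFERENCES b (a_id, b_id)
        );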

    Read the article

  • Swap unique indexed column values in a database

    - by Ramesh Soni
    I have a database table, and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values under this column for two rows. How could this be done? Two hacks I know of are:

    1. Delete both rows and re-insert them.
    2. Update one row with some other value, swap, and then update back to the actual value.

    But I don't want to go for these, as they do not seem to be the appropriate solution to the problem. Could anyone help me out?
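
    For what it's worth, on engines that support deferrable unique constraints (PostgreSQL, for example) the swap can be done cleanly in one transaction; a sketch with hypothetical table, column and constraint names:

        -- requires the unique constraint to be declared DEFERRABLE
        BEGIN;
        SET CONSTRAINTS my_table_col_key DEFERRED;
        UPDATE my_table
        SET    col = CASE id WHEN 1 THEN 'value_b' WHEN 2 THEN 'value_a' END
        WHERE  id IN (1, 2);
        COMMIT;

    On engines without deferrable constraints, the temporary-value approach inside a single transaction remains the usual workaround.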

    Read the article

  • C# - pull records from database without timeout

    - by BhejaFry
    Hi folks, I have a SQL query with multiple joins that pulls data from a database for processing. This is supposed to run on some scheduled basis, so on day 1 it might pull 500 rows, on day 2 say 400. Now, if the service is stopped for some reason and the data is not processed, then on day 3 there could be as many as 1000 records to process, and this is causing a timeout on the SQL query. How best to handle this situation without causing a timeout, while gradually working off the backlog? TIA
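
    One way to keep any single run bounded, regardless of how much backlog has built up, is to process in fixed-size batches and mark rows as done; a T-SQL-flavoured sketch with hypothetical table and column names:

        -- pull one batch of unprocessed rows
        SELECT TOP (500) id, payload
        FROM   source_table
        WHERE  processed = 0
        ORDER BY created_at;

        -- after the batch has been handled successfully
        UPDATE source_table
        SET    processed = 1
        WHERE  id IN (/* ids from the batch above */);

        -- repeat until no unprocessed rows remain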

    Read the article

  • Major performance difference between two Oracle database instances

    - by jrdioko
    I am working with two instances of an Oracle database, call them one and two. two is running on better hardware (hard disk, memory, CPU) than one, and two is one minor version behind one in terms of Oracle version (both are 11g). Both have the exact same table table_name with exactly the same indexes defined. I load 500,000 identical rows into table_name on both instances. I then run, on both instances: delete from table_name; This command takes 30 seconds to complete on one and 40 minutes to complete on two. Doing INSERTs and UPDATEs on the two tables has similar performance differences. Does anyone have any suggestions on what could have such a drastic impact on performance between the two databases?
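
    A starting point for comparing the two instances is to look at the execution plan for the delete on each, and to check the foreign keys referencing table_name: an unindexed foreign key on a child table is a classic cause of very slow deletes in Oracle. A sketch:

        EXPLAIN PLAN FOR
            DELETE FROM table_name;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

        -- foreign keys that reference table_name (check that the referencing columns are indexed)
        SELECT constraint_name, table_name
        FROM   user_constraints
        WHERE  constraint_type = 'R'
          AND  r_constraint_name IN (SELECT constraint_name
                                     FROM   user_constraints
                                     WHERE  table_name = 'TABLE_NAME');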

    Read the article

  • SQL database to save different contact details for a message-sending site

    - by jagan
    I am working on a project to create a database for saving different persons' contact details in SQL. For example, person X saves 10 contacts, person Y saves 15 contacts, person Z saves 20 contacts, and so on. I can't create separate tables to save the contacts of X, Y, Z and so on, but I just want to know the alternative method to do that. Is there an easy way to save different contacts, and an easy way to retrieve them? I'm just a student; I don't know much about SQL and don't have much experience in this, so I need your help to learn more about this.
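
    The usual alternative to one table per person is a single contacts table where each row records which user owns it; a generic-SQL sketch with hypothetical names:

        CREATE TABLE users (
            user_id INT PRIMARY KEY,
            name    VARCHAR(100) NOT NULL
        );

        CREATE TABLE contacts (
            contact_id   INT PRIMARY KEY,
            owner_id     INT NOT NULL REFERENCES users (user_id),
            contact_name VARCHAR(100) NOT NULL,
            phone        VARCHAR(30)
        );

        -- all contacts saved by one user
        SELECT contact_name, phone
        FROM   contacts
        WHERE  owner_id = 42;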

    Read the article

  • How would you implement database updates via email?

    - by jules
    I'm building a public website that has its own domain name with POP/SMTP mail services. I'm considering giving users the option to update their data via email, something similar to the functionality found in Flickr or Blogger where you email posts to a special email address. The email data is then processed and stored in the underlying database for the website. I'm using ASP.NET and SQL Server on a shared hosting service. Any ideas on how one would implement this, or whether it's even possible using shared hosting? Thanks

    Read the article

  • Visual Studio Website: Can't create an SQL Database!

    - by Andreas
    Hi, I'm using Visual Studio 2008 SP1 with SQL Server 2008. I'm trying to add a SQL Server file (MDF) to my website project, but I get the following error: "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. Please verify..." I've been searching Google without any results, and I'm in deep need of help. I've tried the following things to fix it, without success:

    - Changing the instance names so they should fit
    - Attaching the database in Management Studio
    - Uninstalling/installing Visual Studio
    - Uninstalling/installing SQL Server 2005 AND 2008

    All in all, this is a REALLY annoying error and it just should work.

    Read the article

  • get value from MySQL database with PHP

    - by Hristo
        $from = $_POST['from'];
        $to = $_POST['to'];
        $message = $_POST['message'];

        $query = "SELECT * FROM Users WHERE `user_name` = '$from' LIMIT 1";
        $result = mysql_query($query);
        while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
            $fromID = $row['user_id'];
        }

    I'm trying to have $fromID be the user_id for a user in my database. Each row in the Users table looks like:

        user_id | user_name | user_type
        1       | Hristo    | Agent

    So I want $fromID = 1, but the above code isn't working. Any ideas why?

    Read the article

  • Wordpress Database SQL query help needed

    - by i-CONICA
    Hi, I've written a PHP script to access the latest item from the WordPress database, which it does. But I need to use it twice: once for the latest item from one specific category, and again for a different category. Right now I cannot figure out how to put the query together. The post has a post_parent which, in another table called wp_term_relationships, is referred to as object_id and has a term_taxonomy_id, which then relates to a different table called wp_terms, where term_taxonomy_id is now term_id and the category slug name is available to select. I really cannot understand how this query should work, though. I've made a really rough mock-up of it, to try to explain "visually" what I'm trying to do:

        SELECT * FROM wp_posts
        WHERE post_status = 'publish'
        AND (SELECT term_taxonomy_id FROM wp_term_relationships WHERE object_id = post_parent)
        AND (SELECT slug FROM wp_terms WHERE term_id = term_taxonomy_id)
        ORDER BY ID DESC LIMIT 1

    Would really appreciate some help. Thanks.
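
    For comparison, the conventional query against the standard WordPress schema joins on the post ID (not post_parent) and walks through wp_term_relationships and wp_term_taxonomy before reaching the slug in wp_terms; 'some-category' below is a placeholder slug:

        SELECT p.*
        FROM   wp_posts p
        JOIN   wp_term_relationships tr ON tr.object_id = p.ID
        JOIN   wp_term_taxonomy tt      ON tt.term_taxonomy_id = tr.term_taxonomy_id
        JOIN   wp_terms t               ON t.term_id = tt.term_id
        WHERE  p.post_status = 'publish'
          AND  tt.taxonomy = 'category'
          AND  t.slug = 'some-category'
        ORDER BY p.ID DESC
        LIMIT 1;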

    Read the article

  • database table in Magento does not exist: sales_flat_shipment_grid

    - by dene
    We're using Magento 1.4.0.1 and want to use an extension from a third-party developer. The extension does not work because of a join to the table "sales_flat_shipment_grid":

        $collection = $model->getCollection()->join(
            'sales/shipment_grid',
            'increment_id=shipment',
            array('order_increment_id' => 'order_increment_id', 'shipping_name' => 'shipping_name'),
            null,
            'left'
        );

    Unfortunately this table does not exist in our database, so the error "Can't retrieve entity config: sales/shipment_grid" appears. If I comment this part out, the extension works, but I guess it does not work properly. Does anybody know something about this table? There is a backend option for the catalog to use "flat tables", but this only applies to the catalog, and those tables already exist no matter which option is checked. Thank you a lot! :-)

    Read the article

  • database vs flat file, which is a faster structure for regex matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching, using regex, against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Would it be possible to duplicate the file/db and write a logic switch to use the backup if the main file is in use? Which language is best for this type of structure: Perl for the flat file and PHP for the db? TIA

    Read the article

  • database vs flat file, which is a faster structure for "regex" matching with many simultaneous requests

    - by Jamex
    Hi, which structure returns results faster and/or is less taxing on the host server: a flat file or a database (MySQL)? Assume many users (100 users) are simultaneously querying the file/db. Searches involve pattern matching against a static file/db. The file has 50,000 unique lines (same data type). There could be many matches. There is no writing to the file/db, just reads. Would it be possible to duplicate the file/db and write a logic switch to use the backup if the main file is in use? Which language is best for this type of structure: Perl for the flat file and PHP for the db? Additional info: if I want to find all the cities that have the pattern "cis" in their names, which is better/faster, using regex or string functions? Please recommend a strategy. TIA
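
    As a concrete point of reference for the "cis" example, in MySQL a plain substring match can be expressed with LIKE, while REGEXP handles real patterns; a leading wildcard still forces a scan of every row, but LIKE is cheaper to evaluate than a regex. Hypothetical cities table:

        -- substring match
        SELECT name FROM cities WHERE name LIKE '%cis%';

        -- regular-expression match
        SELECT name FROM cities WHERE name REGEXP 'cis';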

    Read the article

  • Sqlite3 Database versus populating Arrays

    - by Kenoy
    Hi, I am working on a program that requires me to input values for 12 objects, each with 4 arrays, each with 100 values (4,800 values in total). The 4 arrays represent possible outcomes based on 2 boolean values (i.e. YY, YN, NN, NY), and the 100 values in each array are what I want to extract based on another inputted variable. I previously had all possible outcomes in a CSV file and imported them into SQLite, where I can query for the value using SQL. However, it has been suggested to me that an SQLite database is not the way to go, and that I should instead populate hard-coded arrays. Which would be better for run time and for memory management?
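
    If the SQLite route is kept, the layout implied above fits in a single table keyed by the object, the two booleans and the slot index, so every lookup is one indexed query; a sketch with hypothetical names:

        CREATE TABLE outcomes (
            object_id INTEGER NOT NULL,   -- 1..12
            flag_a    INTEGER NOT NULL,   -- 0 or 1
            flag_b    INTEGER NOT NULL,   -- 0 or 1
            slot      INTEGER NOT NULL,   -- 1..100
            value     REAL    NOT NULL,
            PRIMARY KEY (object_id, flag_a, flag_b, slot)
        );

        SELECT value
        FROM   outcomes
        WHERE  object_id = 3 AND flag_a = 1 AND flag_b = 0 AND slot = 57;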

    Read the article

  • PHP Connect to 4D Database

    - by Matt Reid
    I'm trying to connect to a 4D database. phpinfo() says PDO is installed, etc. I'm testing on a localhost MAMP system. However, when I run my code I get:

        Fatal error: Uncaught exception 'PDOException' with message 'could not find driver' in /Applications/MAMP/htdocs/4d/index.php:12
        Stack trace: #0 /Applications/MAMP/htdocs/4d/index.php(12): PDO->__construct('4D:host=127.0.0...', 'test', 'test') #1 {main}
        thrown in /Applications/MAMP/htdocs/4d/index.php on line 12

    My code is:

        $dsn = '4D:host=127.0.0.1;charset=UTF-8';
        $user = 'test';
        $pass = 'test';

        // Connection to the 4D SQL server
        $db = new PDO($dsn, $user, $pass);
        try {
            echo "OK";
        } catch (PDOException $e) {
            die("Error 4D : " . $e->getMessage());
        }

    I can't put my finger on the error; I'm using the settings under the PHP tab. Thank you.

    Read the article

  • Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction and factory methods. For background, my site is a common-interest social networking community currently in beta mode. I've started moving my old code for object retrieval to factory methods, but I feel I'm limiting myself by keeping a lot of SQL table names and structure repeated in each function/method. Questions:

    1. Is there a reason to use PEAR (or similar) if I don't anticipate switching databases?
    2. Can PEAR interface with the MySQLi prepared statements I currently use?
    3. Will it help me separate table names from each method? (If not, what other design patterns might I want to research?)
    4. Will it slow down my site once I have a significantly large member base?

    Read the article

  • Database Design Question

    - by deniz
    Hi, I am designing a database for a project. I have a table that has 10 columns, most of which are used whenever the table is accessed, and I need to add 3 more columns:

    - View count
    - Thumbs up (count)
    - Thumbs down (count)

    These will be used in about 90% of the queries against the table. So my question is whether it is better to split the table and create a new table that holds these 3 columns plus a foreign key, or just make it 13 columns and use no joins. Since these columns will be used frequently, I guess adding the 3 columns is better, but if I later need 10 more columns that will also be used 90% of the time, should I add them as well, or create a new table and use joins? I am not sure when to split the table if the columns are used very frequently. Do you have any suggestions? Thanks in advance,
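
    If the counters ever are split out, the usual shape is a one-to-one side table keyed by the parent's primary key, at the cost of one extra join in the common queries; a generic-SQL sketch with hypothetical names:

        CREATE TABLE post_stats (
            post_id     INT PRIMARY KEY REFERENCES posts (post_id),
            view_count  INT NOT NULL DEFAULT 0,
            thumbs_up   INT NOT NULL DEFAULT 0,
            thumbs_down INT NOT NULL DEFAULT 0
        );

        -- the 90% case then needs one extra join
        SELECT p.*, s.view_count, s.thumbs_up, s.thumbs_down
        FROM   posts p
        JOIN   post_stats s ON s.post_id = p.post_id;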

    Read the article

  • Order database results by bayesian rating

    - by One Trick Pony
    I'm not sure this is even possible, but I need confirmation before doing it the "ugly" way :) So, the "results" are posts inside a database which are stored like this:

    - the posts table, which contains all the important stuff, like the ID, the title, the content
    - the post meta table, which contains additional post data, like the rating (this_rating) and the number of votes (this_num_votes). This data is stored in pairs; the table has 3 columns: post ID / key / value. It's basically the WordPress table structure.

    What I want is to pull out the highest-rated posts, sorted by this formula:

        br = ( (avg_num_votes * avg_rating) + (this_num_votes * this_rating) ) / (avg_num_votes + this_num_votes)

    which I stole from here. avg_num_votes and avg_rating are known variables (they get updated on each vote), so they don't need to be calculated. Can this be done with a MySQL query? Or do I need to get all the posts and do the sorting with PHP?
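
    It can be done in one MySQL query by joining the two meta rows and putting the formula straight into ORDER BY; the table and key names below are assumptions based on the description, @avg_num_votes and @avg_rating stand for the already-known averages, and the meta values may need a CAST since meta tables usually store text:

        SELECT p.*,
               ( (@avg_num_votes * @avg_rating) + (votes.meta_value * rating.meta_value) )
                 / (@avg_num_votes + votes.meta_value) AS bayesian_rating
        FROM   posts p
        JOIN   postmeta votes  ON votes.post_id  = p.ID AND votes.meta_key  = 'this_num_votes'
        JOIN   postmeta rating ON rating.post_id = p.ID AND rating.meta_key = 'this_rating'
        ORDER BY bayesian_rating DESC
        LIMIT 10;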

    Read the article

  • Maintaining sort order of database table rows

    - by Lox
    Say I have a database table containing information about a news article in each row. The table has an integer "sort" column to dictate the order in which the articles are presented on a web site. How do I best implement and maintain this sort order? The problem I want to avoid is having the articles numbered 1, 2, 3, 4, ..., 100, and when article number 50 suddenly becomes interesting its sort number is set to 1, so all the articles between them must have their sort number increased by one. Sure, setting initial sort numbers to 100, 200, 300, 400, etc. leaves some space for moving around, but at some point it will break. Is there a correct way to do this, maybe a completely different approach?

    Added-1: All article titles are shown in a list linking to the contents, so yes, all sorted items are shown at once.

    Added-2: An item is not necessarily moved to the top of the list; any item can be placed anywhere in the ordered list.
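
    The straightforward way to keep a dense 1..N ordering is to shift only the affected range inside a transaction whenever an item moves; a generic-SQL sketch (hypothetical names, moving the article with id 50 from position 50 to position 1):

        BEGIN;
        -- shift everything between the target and the old position by one
        UPDATE articles
        SET    sort = sort + 1
        WHERE  sort >= 1 AND sort < 50;
        -- drop the moved article into the freed slot
        UPDATE articles SET sort = 1 WHERE article_id = 50;
        COMMIT;

    Only the rows between the two positions are touched, so no global renumbering is needed.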

    Read the article

  • Java library to partially export a database while respecting referential integrity constraints

    - by Mwanji Ezana
    My production database is several GBs uncompressed, and it's getting to be a pain to download and run locally when trying to reproduce a bug or test a feature with real data. I would like to be able to select the specific records that interest me, have the library figure out what other records are necessary to produce a dataset that respects the database's integrity constraints, and finally print it out as a list of INSERT statements or a dump that I can restore. For example: given Author, Blog and Comment tables, when I select comments posted after a certain date I should get inserts for the Blog records the comments have foreign keys to, and the Author records those Blogs have foreign keys to.
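
    Whatever library ends up doing the work, the set it has to compute for the example above looks like this in plain SQL (hypothetical column names): the selected comments, the blogs they reference, and the authors those blogs reference:

        -- the comments of interest
        SELECT * FROM Comment WHERE posted_at >= DATE '2010-01-01';

        -- the Blog rows they have foreign keys to
        SELECT * FROM Blog
        WHERE  blog_id IN (SELECT blog_id FROM Comment WHERE posted_at >= DATE '2010-01-01');

        -- the Author rows those Blogs have foreign keys to
        SELECT * FROM Author
        WHERE  author_id IN (SELECT b.author_id
                             FROM   Blog b
                             JOIN   Comment c ON c.blog_id = b.blog_id
                             WHERE  c.posted_at >= DATE '2010-01-01');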

    Read the article
