Search Results

Search found 35019 results on 1401 pages for 'sql documentation'.


  • Running an application from a USB device...

    - by Workshop Alex
    I'm working on a proof-of-concept application: a WCF service with a console host, plus a client application that connects to it, both on a single USB device. The service uses the Entity Framework to connect to the database, which in this POC will just return a list of names. If it works, it will be used for a larger project. Creating the client and service was easy, and they run well from USB, but getting the service to connect to the database doesn't. I've found this site, suggesting that I should modify machine.config, but that breaks the XCopy deployment. This project cannot change any settings on the PC, so that suggestion is out, and I cannot create a deployment setup either. The whole thing just needs to run from the USB disk. So, how do I get it to run? (The service just selects a list of names from the database, which it returns to the client. If this POC works, it will do far more complex things!)

    Read the article

  • How do I integrate the aspnet_Users table (ASP.NET membership) into my existing database?

    - by ooo
    I have a database that already has a users table with these columns: userID (int), loginName (string), First (string), Last (string). I just installed the ASP.NET membership tables. Right now all of my tables are joined to my users table, foreign-keyed to the userID field. How do I integrate the aspnet_Users table into my schema? Here are the ideas I've thought of: (1) Add a membership_id field to my users table and include it on new inserts. This seems like the cleanest way, as I don't need to break any existing relationships. (2) Break all existing relationships and move all of the fields in my users table into the aspnet_Users table. This seems like a pain, but ultimately leads to the simplest, most normalized solution. Any thoughts?
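
    A minimal sketch of option 1 (assuming SQL Server and the default membership schema, where aspnet_Users keys on a uniqueidentifier UserId; the other names come from the question):

        ALTER TABLE users ADD membership_id UNIQUEIDENTIFIER NULL;

        ALTER TABLE users
            ADD CONSTRAINT FK_users_aspnet_Users
            FOREIGN KEY (membership_id) REFERENCES aspnet_Users (UserId);

        -- On each new signup, insert the membership row first, then store
        -- the generated UserId in users.membership_id.

    Existing rows can keep a NULL membership_id until they are migrated, so no current relationship has to change.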

    Read the article

  • Do I need to drop index on temp table?

    - by Phil
    Hi, Fairly simple question, but I don't see it anywhere else on SO: Do indexes (indices?) on a temporary table get automatically deleted with the temporary table? I'd imagine they do but I don't really know how to check to make sure. Thanks, Phil
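
    A quick way to see for yourself (a sketch, assuming SQL Server, where local temp tables live in tempdb):

        CREATE TABLE #t (id INT);
        CREATE INDEX ix_t_id ON #t (id);

        -- the index exists in tempdb alongside the table:
        SELECT name
        FROM tempdb.sys.indexes
        WHERE object_id = OBJECT_ID('tempdb..#t');

        DROP TABLE #t;
        -- rerunning the SELECT now finds nothing: the index is dropped
        -- with its table (the same happens when the session ends).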

    Read the article

  • Get the smallest date for each element in an Access query

    - by skerit
    So I have a table containing different elements and dates. It basically looks like this:

        actieElement  beginDatum
        1             1/01/2010
        1             1/01/2010
        1             10/01/2010
        2             1/02/2010
        2             3/02/2010

    What I now need is the smallest date for every actieElement. I've found a solution using a simple GROUP BY statement, but that way the query loses its scope and you can't change anything any more. Without the GROUP BY statement I get multiple dates for every actieElement, because certain dates are the same. I thought of something like this, but it also does not work, as it gives the subquery more than one record:

        SELECT s1.actieElement, s1.begindatum
        FROM tblActieElementLink AS s1
        WHERE s1.actieElement = (SELECT TOP 1 s2.actieElement
                                 FROM tblActieElementLink s2
                                 WHERE s1.actieElement = s2.actieElement
                                 ORDER BY s2.begindatum ASC);
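
    One declarative shape that keeps the query updatable is to correlate on the date itself rather than on actieElement (a sketch; rows that tie on the minimum date will still appear twice):

        SELECT s1.actieElement, s1.beginDatum
        FROM tblActieElementLink AS s1
        WHERE s1.beginDatum = (SELECT MIN(s2.beginDatum)
                               FROM tblActieElementLink AS s2
                               WHERE s2.actieElement = s1.actieElement);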

    Read the article

  • Copy new records from datatable and identify changes in old records

    - by Betite
    Assume there are two tables, Remote_table and My_table. Remote_table has six columns:

        PROJECT  JOB_TYPE  MONTH  YEAR  HOURS  IS_DELETED
        134393   70        1      2013  30     0
        134393   70        2      2013  50     0
        134393   70        3      2013  80     0
        134393   70        10     2012  10     0
        134393   70        11     2012  0      0
        134393   70        12     2012  15     0

    My_table is a copy of Remote_table. I tried to copy only the new records from Remote_table with this query:

        SELECT * FROM [remote_DB].[LudanProjectManager].[dbo].Remote_table
        EXCEPT
        SELECT * FROM My_table

    It works OK, but I get a duplicate primary key exception when changes have been made to the HOURS column on Remote_table. Can anyone think of a way to copy only the new records from Remote_table and, if changes have been made to old records, to identify them and update My_table to correspond?
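
    On SQL Server 2008+ this insert-or-update pattern is what MERGE does; a sketch, assuming the primary key is the composite (PROJECT, JOB_TYPE, MONTH, YEAR), which the question's emphasis on those columns suggests:

        MERGE My_table AS t
        USING [remote_DB].[LudanProjectManager].[dbo].Remote_table AS s
           ON  t.PROJECT = s.PROJECT AND t.JOB_TYPE = s.JOB_TYPE
           AND t.[MONTH] = s.[MONTH] AND t.[YEAR] = s.[YEAR]
        WHEN MATCHED AND (t.HOURS <> s.HOURS OR t.IS_DELETED <> s.IS_DELETED)
            THEN UPDATE SET t.HOURS = s.HOURS, t.IS_DELETED = s.IS_DELETED
        WHEN NOT MATCHED BY TARGET
            THEN INSERT (PROJECT, JOB_TYPE, [MONTH], [YEAR], HOURS, IS_DELETED)
                 VALUES (s.PROJECT, s.JOB_TYPE, s.[MONTH], s.[YEAR], s.HOURS, s.IS_DELETED);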

    Read the article

  • PHP while loop throwing an error over a $var < 10 statement

    - by William
    Okay, so for some reason this is giving me an error, as seen here: http://prime.programming-designs.com/test_forum/viewboard.php?board=0 However, if I take away the '&& $cur_row < 10' it works fine. Why is the '&& $cur_row < 10' causing me a problem?

        $sql_result = mysql_query("SELECT post, name, trip, Thread
            FROM (SELECT MIN(ID) AS min_id, MAX(ID) AS max_id, MAX(Date) AS max_date
                  FROM test_posts GROUP BY Thread) t_min_max
            INNER JOIN test_posts ON test_posts.ID = t_min_max.min_id
            WHERE Board=".$board."
            ORDER BY max_date DESC", $db);
        $num_rows = mysql_num_rows($sql_result);
        $cur_row = 0;
        while($row = mysql_fetch_row($sql_result) && $cur_row < 10) {
            $sql_max_post_query = mysql_query("SELECT ID FROM test_posts WHERE Thread=".$row[3]."");
            $post_num = mysql_num_rows($sql_max_post_query);
            $post_num--;
            $cur_row++;
            echo ''.$cur_row.'<br/>';
            echo '<div class="postbox"><h4>'.$row[1].'['.$row[2].']</h4><hr />'.$row[0].'<br /><hr />[<a href="http://prime.programming-designs.com/test_forum/viewthread.php?thread='.$row[3].'">Reply</a>] '.$post_num.' posts omitted.</div>';
        }
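
    For what it's worth, '&&' binds tighter than '=' in PHP, so that loop condition assigns the boolean result of the whole expression to $row instead of the fetched row; wrapping the assignment in parentheses fixes it. The ten-row cap can also move into the query itself, which sidesteps the issue entirely (a sketch of the same query, assuming MySQL):

        SELECT post, name, trip, Thread
        FROM (SELECT MIN(ID) AS min_id, MAX(Date) AS max_date
              FROM test_posts
              GROUP BY Thread) AS t_min_max
        INNER JOIN test_posts ON test_posts.ID = t_min_max.min_id
        WHERE Board = 0        -- the board id interpolated in the PHP above
        ORDER BY max_date DESC
        LIMIT 10;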

    Read the article

  • History tables: pros, cons and gotchas - using triggers, sprocs or the application level

    - by Nathan W
    I am currently playing around with the idea of having history tables for some of the tables in my database. Basically I have the main table and a copy of that table with a modified date and an action column storing what action was performed, e.g. Update, Delete or Insert. So far I can think of three different places where the history-table work can be done:

    1. Triggers on the main table for update, insert and delete (database).
    2. Stored procedures (database).
    3. The application layer (application).

    My main question is: what are the pros, cons and gotchas of doing the work in each of these layers? One advantage I can think of for the trigger approach is that integrity is always maintained no matter what program is implemented on top of the database.
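
    For reference, the trigger flavour usually looks something like this (a sketch in SQL Server syntax; the table and column names are hypothetical):

        CREATE TRIGGER trg_Customer_History
        ON Customer
        AFTER UPDATE, DELETE
        AS
        BEGIN
            INSERT INTO Customer_History (CustomerID, Name, ModifiedDate, Action)
            SELECT d.CustomerID, d.Name, GETDATE(),
                   CASE WHEN EXISTS (SELECT 1 FROM inserted)
                        THEN 'Update' ELSE 'Delete' END
            FROM deleted AS d;
        END;

    An AFTER INSERT trigger reading from the inserted pseudo-table covers the Insert action the same way.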

    Read the article

  • Getting the last element of a Postgres array, declaratively

    - by Wojciech Kaczmarek
    How do I obtain the last element of an array in Postgres? I need to do it declaratively, as I want to use it as an ORDER BY criterion, and I'd rather not create a special PL/pgSQL function for it; the fewer changes to the database, the better in this case. In fact, what I want to do is sort by the last word of a specific column containing multiple words. Changing the model is not an option here. In other words, I want to push Ruby's sort_by {|x| x.split[-1]} down to the database level. I can split a value into an array of words with Postgres's string_to_array or regexp_split_to_array functions; then how do I get its last element?
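
    One declarative route is to subscript the array with its own upper bound (a sketch; 't' and 'words_col' are hypothetical names):

        SELECT *
        FROM t
        ORDER BY (string_to_array(words_col, ' '))
                 [array_upper(string_to_array(words_col, ' '), 1)];

    array_upper(arr, 1) returns the last index of a one-dimensional array, so no custom function is needed.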

    Read the article

  • A SELECT statement for MySQL

    - by Hossein
    I have this table: id, bookmarkID, tagID. I want to fetch the top N bookmarkIDs for a given list of tags. Does anyone know a very fast solution for this? The table is quite large (12 million records) and I am using MySQL.
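
    A common shape for this kind of tag intersection (a sketch; the table name 'bookmark_tags' is hypothetical, and the HAVING count must match the number of tags in the list):

        SELECT bookmarkID
        FROM bookmark_tags
        WHERE tagID IN (1, 2, 3)
        GROUP BY bookmarkID
        HAVING COUNT(DISTINCT tagID) = 3
        LIMIT 10;

    A composite index on (tagID, bookmarkID) lets this run from the index alone, which matters at 12 million rows.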

    Read the article

  • SimpleDB as Denormalized DB

    - by Max
    In an environment where a relational database handles all business transactions, is it a good idea to utilise SimpleDB for all data queries, to get faster and more lightweight searches? The master data store would be a relational DB that is "replicated"/"transformed" into SimpleDB, providing very fast read-only queries since no JOINs or complicated subselects are needed.

    Read the article

  • How do I select a random record efficiently in MySQL?

    - by user198729
    mysql> EXPLAIN SELECT * FROM urls ORDER BY RAND() LIMIT 1;

        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                           |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+
        |  1 | SIMPLE      | urls  | ALL  | NULL          | NULL | NULL    | NULL | 62228 | Using temporary; Using filesort |
        +----+-------------+-------+------+---------------+------+---------+------+-------+---------------------------------+

    The above doesn't qualify as efficient; how should I do it properly?
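
    One widely used alternative avoids the full-table filesort by picking a random id first (a sketch; it assumes id is an AUTO_INCREMENT primary key, and rows just after gaps in the id sequence get picked slightly more often):

        SELECT u.*
        FROM urls AS u
        JOIN (SELECT FLOOR(1 + RAND() * (SELECT MAX(id) FROM urls)) AS rid) AS r
          ON u.id >= r.rid
        ORDER BY u.id
        LIMIT 1;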

    Read the article

  • C# - Password Database

    - by user335932
    So I want to make a program that allows you to store and search user names/passwords for the online sites you're signed up to. I know C# has some database options, but I don't know much about them. I also heard that it can read/write Excel files. What do you think is best for storing the data? Also, do databases need to be stored online on a server, or can they reside in the program files?
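
    Databases don't have to live on a server: file-based engines such as SQLite or SQL Server Compact keep everything in a single local file that can sit next to the program. A hedged sketch of a table for this kind of tool (the schema is hypothetical, and passwords should be encrypted, never stored as plain text):

        CREATE TABLE credentials (
            id INTEGER PRIMARY KEY,
            site_name TEXT NOT NULL,
            username TEXT NOT NULL,
            password_encrypted BLOB NOT NULL  -- encrypt before storing
        );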

    Read the article

  • Alternatives to sign() in SQLite for a custom ORDER BY

    - by Pentium10
    I have a string column which contains some numeric values, but many are 0, empty string or null. The rest are positive numbers of varying magnitude. I am trying to create a custom ordering over two fields: first separating the rows whose number is 0, then sorting by name. So something like this would work:

        select * from table order by sign(referenceid) desc, name asc;

    But SQLite lacks the sign() -1/0/1 function, and I am on Android, so I can't create user-defined functions. What other options do I have to get this sort done?
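
    Since all the values are positive, sign() here only needs to distinguish the 'zero-ish' rows from the rest, which a CASE expression can do (a sketch; 't' stands in for the real table name):

        SELECT *
        FROM t
        ORDER BY CASE
                     WHEN referenceid IS NULL
                          OR referenceid = ''
                          OR CAST(referenceid AS INTEGER) = 0 THEN 0
                     ELSE 1
                 END DESC,
                 name ASC;

    In SQLite, CAST of a non-numeric string yields 0, so the CASE also treats stray text as 'no number', which matches the column's mixed contents.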

    Read the article

  • How to automatically check out a database file in a source-controlled web application?

    - by TheRHCP
    Hello, I am working on an ASP.NET web application. We are a small team (4 students) and do not have access to a dedicated server to host the database instance, so for this web application we decided just to put the database file in the App_Data folder. The problem is that our project is source-controlled in TFS, so every time we open the solution and try to launch the web application, we get an exception saying that the database is read-only. That is logical, because the database file is not automatically checked out. Is there a workaround to avoid manually checking out the database file every time we open the solution? Thanks.

    Read the article

  • Selecting More Than 1 Table in a Single Query

    - by Kamran
    I have 5 tables in MS Access, each with the columns [Title, ID, Date, Author]: Logs, Tape, Maps, VCDs and Book. I tried my level best with this code:

        SELECT Logs.[Author], Tape.[Author], Maps.[Author], VCDs.[Author], Book.[Author]
        FROM Logs, Tape, Maps, VCDs, Book
        WHERE ((([Author] & " " & [Author] & " " & [Author] & " " & [Author] & " " & [Author]) Like "*" & [Type the Title or Any Part of the Title and Press Ok] & "*"));

    I want to select from all of these tables in a single query. Suppose Adam appears as an author of works in all the tables: when I put Adam in the search box, it should return results from all of them. I know this could be done with a single table or by renaming fields, but that's not what's required. Please help.
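
    A UNION query is the usual Access answer here; a sketch, assuming the search is meant to match Title as the prompt text suggests (swap in Author if that was the intent), with the same bracketed parameter in each branch so Access asks only once:

        SELECT Title, ID, [Date], Author, 'Logs' AS Source FROM Logs
        WHERE Title Like '*' & [Type the Title or Any Part of the Title and Press Ok] & '*'
        UNION ALL
        SELECT Title, ID, [Date], Author, 'Tape' FROM Tape
        WHERE Title Like '*' & [Type the Title or Any Part of the Title and Press Ok] & '*'
        UNION ALL
        SELECT Title, ID, [Date], Author, 'Maps' FROM Maps
        WHERE Title Like '*' & [Type the Title or Any Part of the Title and Press Ok] & '*'
        UNION ALL
        SELECT Title, ID, [Date], Author, 'VCDs' FROM VCDs
        WHERE Title Like '*' & [Type the Title or Any Part of the Title and Press Ok] & '*'
        UNION ALL
        SELECT Title, ID, [Date], Author, 'Book' FROM Book
        WHERE Title Like '*' & [Type the Title or Any Part of the Title and Press Ok] & '*';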

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except that the colour scheme and site name are different, plus it's in PHP - so copyright issues are avoided here; the design could arguably be seen as parody even though the content is different).

    The database is called tvguide. The programme tables (for House M.D., Medium, Police Stop! and American Dad!) are named housemdonair, mediumonair, policestopair and americandad1, and the corresponding episode-guide tables are housemdepidata, mediumepidata, policestopepidata and americandad1epidata. All of them have the following rows: id (not an auto-increment, since I wish to dynamically generate a page from this), episodename, seriesnumber, episodenumber and episodesynopsis (the last four do exactly as stated).

    I have a pagination script that works; it displays 20 records per page, as I want it to. It is called pmcPagination.php, but I won't post it in full since it would take up too much space. However, I'm trying to fill in variables to build pages like these (OK, the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate it!): http://library.digiguide.com/lib/episode/741168 and http://library.digiguide.com/lib/episode/714829, with the episode detail and data.

    My site works, but it's fairly basic, and it won't go online until my bugs are fixed. mod_rewrite is enabled, so my site reads as http://mytvguide.com/episode/123456, http://mytvguide.com/programme/123456 or http://mytvguide.com/WorldInAction/123456/Documentary/. I've tried looking on Google but am not sure how to get this TV guide script to work at its best; I think .tpl and PHP/MySQL are the way to go. Any advice on making this project into a fully workable, ready-to-use site would be much appreciated - I've spent months refining it!

    P.S. Apologies for the length of this; I hope it describes my project well.

    Read the article

  • Manual Linq to SQL entity framework mapping

    - by kprobst
    I've been playing with the O/R designer in VS and I was wondering if someone could shed some light on this. I'm used to O/R mappers that are largely manual (homegrown and e.g. NHibernate). I don't mind coding the entity classes myself, since they don't change all that often to begin with, and I have this irrational fear of designers and auto-generated code as it is. I have noticed that the generated entity classes contain a lot of boilerplate extensibility methods, e.g. On[Property]Changed() and so on, where [Property] is a mapped member of the class. These are placed in the setters of the property accessors. I assume it's OK if I don't include these when I hand-code the classes, correct? They would be nice if I needed some sort of interception pattern, but that's certainly not the case. I guess I just need to know whether any of those methods are required by the framework to track changes to the mapped types, so that database updates keep working. Thanks!

    Read the article

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer: (1) an ORM (if so, which one?), (2) Linq2Sql, (3) stored procedures, or (4) parameterized queries? I really need a solution that will be dynamic enough (both fast and easy) that I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORMs (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...  -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example finding the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files AS f, properties AS pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow and how to make it faster?
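
    One thing worth checking: the seq scan on files alone accounts for roughly 62 of the 74 seconds for only 805k rows, which is far slower than a healthy sequential scan usually is and often points at table bloat; the planner also estimates 1,140,941 rows where only 805,270 exist, suggesting stale statistics. A hedged first step before restructuring anything:

        VACUUM ANALYZE files;
        -- or, to rewrite the table and reclaim dead space entirely
        -- (takes an exclusive lock while it runs):
        VACUUM FULL files;

    If the estimate tightens up toward the actual row count after this, the runtime usually drops accordingly.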

    Read the article

  • Oracle 10g multiple DELETE statements

    - by bmw0128
    I'm building a DML file that first deletes records that may be in the table, then inserts records. Example:

        DELETE from foo where field1='bar';
        DELETE from foo where field1='bazz';

        INSERT ALL
          INTO foo(field1, field2) values ('bar', 'x')
          INTO foo(field1, field2) values ('bazz', 'y')
        SELECT * from DUAL;

    When I run the insert statement by itself, it runs fine. When I run the deletes, only the last delete runs. Also, it seems to be necessary to end the multiple insert with the SELECT; is that so? If so, why is it necessary? In the past, when using MySQL, I could just list multiple delete and insert statements, each ending with a semicolon, and it would run fine.
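
    On the second point: INSERT ALL is Oracle's multi-table insert, which is defined as taking its rows from a subquery, so the trailing SELECT * FROM DUAL is the standard idiom for supplying exactly one source row. As for the deletes, most Oracle client APIs accept only one statement per call unless the file is run as a script (e.g. in SQL*Plus); one hedged workaround is to wrap the statements in an anonymous PL/SQL block, which is a single call:

        BEGIN
          DELETE FROM foo WHERE field1 = 'bar';
          DELETE FROM foo WHERE field1 = 'bazz';
          INSERT INTO foo (field1, field2) VALUES ('bar', 'x');
          INSERT INTO foo (field1, field2) VALUES ('bazz', 'y');
        END;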

    Read the article
