Search Results

Search found 40870 results on 1635 pages for 'database design'.

Page 484 of 1635

  • LINQ insert statement inserts nothing and does not fail either

    - by pietjepoeier
    I am trying to insert a new account into my Accounts table with LINQ. I tried using both the Entity model and LINQ to SQL. I get no insert into my database, nor an exception of any kind.

        public static Linq2SQLDataContext dataContext {
            // note: this property returns a new Linq2SQLDataContext every time it is read
            get { return new Linq2SQLDataContext(); }
        }

        try {
            // EntityModel
            Accounts acc = Accounts.CreateAccounts(0, Voornaam, Straat, Huisnummer, Stad, Land, 15, EmailReg, Password1);
            Entities.AddToAccounts(acc);
            Entities.SaveChanges();

            // Linq 2 SQL
            Account account = new Account {
                City = Stad, Country = Land, EmailAddress = EmailReg, Name = Voornaam,
                Password = Password1, Street = Straat, StreetNr = Huisnummer,
                StreetNrAdd = Toevoeging, Points = 25
            };
            dataContext.Accounts.InsertOnSubmit(account);
            var conf = dataContext.ChangeConflicts;       // no change conflicts
            ChangeSet set = dataContext.GetChangeSet();   // 0 inserts, 0 updates, 0 deletes
            try {
                dataContext.SubmitChanges();
            } catch (Exception ex) { }
        } catch (EntityException ex) { }

    Read the article

  • How hard is it to modify the Django Models?

    - by alex
    I am doing geolocation, and Django does not have a PointField, so I am forced to write raw SQL. GeoDjango, Django's geographic library, does not support the following query for MySQL databases (can someone verify that for me?):

        cursor.execute("SELECT id FROM l_tag WHERE\
            (GLength(LineStringFromWKB(LineString(asbinary(utm),asbinary(PointFromWKB(point(%s, %s)))))) < %s + accuracy + %s)\

    I don't know why the GeoDjango library cannot do this against a MySQL database. I hate writing raw SQL to calculate the distance between two points. Is there a way I can create my own library for Django that can handle this? If so, how hard is it?

    Read the article

  • Thoughts on a Shoutbox anyone?

    - by sologhost
    I want to create a shoutbox, but I'm wondering if there is another way to go about it other than using setInterval to query the database for new shouts every few seconds. Honestly, I don't like having to do it this way; it seems redundant, repetitive and just plain wrong, not to mention the blinking of the shouts as it grabs the data. So I'm wondering how the professionals do this. I mean, I've seen shoutboxes that work superbly and don't seem to be using any setInterval or setTimeout JavaScript functions to do it. Can anyone suggest any ideas or an approach that doesn't use setInterval or setTimeout? Thanks :)

    Read the article

  • Fast search in XML files in .NET (or how to index XML files)

    - by codymanix
    I have to implement a search feature which is able to quickly perform arbitrarily complex queries against XML data. If the user makes a query, all XML files must be searched to find possible matches. The users will have lots of XML files (tens of thousands or more), which are typically a few kilobytes in size. All the XML files have almost the same structure. I already benchmarked XPath; it is too slow for my needs. How can this be done most efficiently? Is it possible to create indexes for the contents of the XML files (preserving content semantics, not just plain full-text search)? Would it be useful to put the XML data into an (embedded) SQL database and do the queries with SQL? What other possibilities do I have?
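
    One way to make the "put the XML into a SQL database" idea concrete is SQL Server's xml column type with XML indexes. This is only a sketch with assumed table and element names (and SQL Server is not embedded; an embedded engine would need its own XML support):

        -- Store each document in an xml column; the primary XML index shreds the
        -- documents once, so exist()/value() queries do not reparse every file.
        CREATE TABLE Docs (
            DocId   int IDENTITY PRIMARY KEY,
            Content xml NOT NULL
        );
        CREATE PRIMARY XML INDEX PXML_Docs ON Docs (Content);
        CREATE XML INDEX IXML_Docs_Path ON Docs (Content)
            USING XML INDEX PXML_Docs FOR PATH;

        -- Example query: documents whose /order/item carries a given product code.
        SELECT DocId
        FROM   Docs
        WHERE  Content.exist('/order/item[@product = "ABC-123"]') = 1;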

    Read the article

  • Android SQLite Problem: Program Crashes When Trying a Query

    - by Skatephone
    Hi, I have a problem programming with the Android SDK 1.6. I'm doing the same things as in the "notepad example", but the program crashes when I try some queries. If I run a query directly in the DatabaseHelper create() method it works, but outside of that function it doesn't. Do you have any idea? This is the source:

        public class DbAdapter {
            public static final String KEY_NAME = "name";
            public static final String KEY_TOT_DAYS = "totdays";
            public static final String KEY_ROWID = "_id";
            private static final String TAG = "DbAdapter";

            private DatabaseHelper mDbHelper;
            private SQLiteDatabase mDb;

            private static final String DATABASE_NAME = "flowratedb";
            private static final String DATABASE_TABLE = "girl_data";
            private static final String DATABASE_TABLE_2 = "girl_cyle";
            private static final int DATABASE_VERSION = 2;

            /** Database creation sql statements */
            private static final String DATABASE_CREATE =
                "create table " + DATABASE_TABLE + " (id integer, name text not null, totdays int);";
            private static final String DATABASE_CREATE_2 =
                "create table " + DATABASE_TABLE_2 + " (ref_id integer, day long not null);";

            private final Context mCtx;

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(DATABASE_CREATE);
                    db.execSQL(DATABASE_CREATE_2);
                    db.delete(DATABASE_TABLE, null, null);
                    db.delete(DATABASE_TABLE_2, null, null);
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion
                            + ", which will destroy all old data");
                    db.execSQL("DROP TABLE IF EXISTS " + DATABASE_TABLE);
                    db.execSQL("DROP TABLE IF EXISTS " + DATABASE_TABLE_2);
                    onCreate(db);
                }
            }

            public DbAdapter(Context ctx) {
                this.mCtx = ctx;
            }

            public DbAdapter open() throws SQLException {
                mDbHelper = new DatabaseHelper(mCtx);
                mDb = mDbHelper.getWritableDatabase();
                return this;
            }

            public void close() {
                mDbHelper.close();
            }

            public long createGirl(int id, String name, int totdays) {
                ContentValues initialValues = new ContentValues();
                initialValues.put(KEY_ROWID, id);
                initialValues.put(KEY_NAME, name);
                initialValues.put(KEY_TOT_DAYS, totdays);
                return mDb.insert(DATABASE_TABLE, null, initialValues);
            }

            public long createGirl_fd_day(int refid, long fd) {
                ContentValues initialValues = new ContentValues();
                initialValues.put("ref_id", refid);
                initialValues.put("calendar", fd);
                return mDb.insert(DATABASE_TABLE, null, initialValues);
            }

            public boolean updateGirl(int rowId, String name, int totdays) {
                ContentValues args = new ContentValues();
                args.put(KEY_NAME, name);
                args.put(KEY_TOT_DAYS, totdays);
                return mDb.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0;
            }

            public boolean deleteGirlsData() {
                if (mDb.delete(DATABASE_TABLE_2, null, null) > 0)
                    if (mDb.delete(DATABASE_TABLE, null, null) > 0)
                        return true;
                return false;
            }

            public Bundle fetchAllGirls() {
                Bundle extras = new Bundle();
                Cursor cur = mDb.query(DATABASE_TABLE,
                        new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS},
                        null, null, null, null, null);
                cur.moveToFirst();
                int tot = cur.getCount();
                extras.putInt("tot", tot);
                int index;
                for (int i = 0; i < tot; i++) {
                    index = cur.getInt(cur.getColumnIndex("_id"));
                    extras.putString("name" + index, cur.getString(cur.getColumnIndex("name")));
                    extras.putInt("totdays" + index, cur.getInt(cur.getColumnIndex("totdays")));
                }
                cur.close();
                return extras;
            }

            public Cursor fetchGirl(int rowId) throws SQLException {
                Cursor mCursor = mDb.query(true, DATABASE_TABLE,
                        new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS},
                        KEY_ROWID + "=" + rowId, null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }

            public Cursor fetchGirlCD(int rowId) throws SQLException {
                Cursor mCursor = mDb.query(true, DATABASE_TABLE_2,
                        new String[] {"ref_id", "day"},
                        "ref_id=" + rowId, null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }
        }

    Thanks, Valerio from Italy :)

    Read the article

  • How do I code an MVC3 Helper?

    - by Mike Clarke
    I’ve just built my first helper in MVC; it’s very basic and just displays a string wherever I use it. It’s a .cshtml file in my App_Code folder (I think that is how it's supposed to be set up) with the following code in it:

        @helper DisplaySelect() {
            @:This text is coming from an helper class.
        }

    Now that I am a wiz with helpers, how do I make it do things? E.g. say I want it to query the database and display something; I would normally do that work in my controller. How do I do that with helpers? Do I create a helper controller and then treat the helper like a partial view? Any help would be greatly appreciated. Cheers, Mike.

    Read the article

  • Java or PHP tree structure problem

    - by agazerboy
    Hi all, I have all my data in my database. It has the following 4 columns:

        id  source_clust  target_clust  result_clust
        1   7             72            649
        2   9             572           650
        3   649           454           651
        4   32            650           435

    The data is like a tree structure: source_clust and target_clust combine to generate result_clust, and a result_clust can then appear as the source_clust or target_clust of a later row, producing a new result_clust. Is there any PHP function or class that I can use to generate a tree structure from my data? I saw a MySQL site where they do exactly what I need, but I couldn't work out how to apply that query to my data. Thanks!

    Edit: Is there any way to do it in Java, if we have the same data in an array?
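
    If the database itself can do the walking, a recursive query is one option. This is only a sketch: it assumes the table is called clusters with the four columns above, and it uses WITH RECURSIVE, which needs MySQL 8+ or PostgreSQL (SQL Server supports the same construct without the RECURSIVE keyword).

        -- All merges that directly or indirectly fed into cluster 651.
        WITH RECURSIVE tree AS (
            SELECT id, source_clust, target_clust, result_clust
            FROM   clusters
            WHERE  result_clust = 651                 -- root of the tree we want
            UNION ALL
            SELECT c.id, c.source_clust, c.target_clust, c.result_clust
            FROM   clusters c
            JOIN   tree t
              ON   c.result_clust = t.source_clust
               OR  c.result_clust = t.target_clust    -- follow each input back to its own merge
        )
        SELECT * FROM tree;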

    Read the article

  • JavaDB connection error (network protocol)

    - by oO
    I'm trying to connect to Derby using this:

        dbProperties.put("create", "true");
        dbProperties.put("dataEncryption", "true");
        dbProperties.put("encryptionAlgorithm", "DES/CBC/NoPadding");
        dbProperties.put("encryptionKey", "1234567890123456");
        dbProperties.put("securityMechanism", ClientDataSource.STRONG_PASSWORD_SUBSTITUTE_SECURITY);
        // protocol is dbProperties.getProperty("derby.url", "jdbc:derby://localhost:1527/");
        dbConnection = DriverManager.getConnection(protocol + dbName, dbProperties);

    but I get an error: "A connection could not be established because the database name (...) is larger than the maximum length allowed by the network protocol." Is there a way to increase this length?

    Read the article

  • SQL Average Data Based on Distance

    - by jsmith
    I'm pretty new to SQL. I have a database with records based on road mileposts. My goal is to get an average value every 52.8 ft along the road. My related table has data every 15 ft; this table of course has a foreign key relating it to the primary table. If I wanted to pull out the average value every 52.8 ft along a given milepost, how would I go about this? Example data:

        RecID  Begin_MP  End_MP
        100    0         0.56

        RecID  MP     Value1  Value2
        100    0      159     127.7
        100    0.003  95.3    115.3
        100    0.006  82.3    107
        100    0.009  56.5    74.5
        100    0.011  58.1    89.1
        100    0.014  95.2    78.8
        100    0.017  108.9   242.5
        100    0.02   71.8    73.3
        100    0.023  84.1    80.2
        100    0.026  65.5    66.1
        100    0.028  122     135.8
        100    0.031  99.9    230.7
        100    0.034  95.7    111.5
        100    0.037  127.3   74.3
        100    0.04   140.7   543.1

    The first table is an example of a road. The second subset of data holds the values I need to average every 52.8 ft. Thank you
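
    Since 52.8 ft is exactly 0.01 mile, one way to do this is to bucket each 15-ft reading by which 0.01-mile segment its milepoint falls in and average per bucket. This is only a sketch that assumes the detail table is named Readings with the columns shown above:

        SELECT RecID,
               FLOOR(MP / 0.01)        AS Segment,          -- 0 = 0.00-0.01 mi, 1 = 0.01-0.02 mi, ...
               FLOOR(MP / 0.01) * 0.01 AS Segment_Start_MP,
               AVG(Value1)             AS AvgValue1,
               AVG(Value2)             AS AvgValue2
        FROM   Readings
        WHERE  RecID = 100
        GROUP  BY RecID, FLOOR(MP / 0.01)
        ORDER  BY Segment;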

    Read the article

  • Input array is longer than the number of columns in this table

    - by Adam
    I've recently started to use SQLite and have begun to integrate it into a C# project I'm working on. However, my project will randomly throw the exception: "Input array is longer than the number of columns in this table". I'm having a hard time trying to trace the problem because it seems to be thrown on a random basis.

        DataTable table = new DataTable();
        // exception is thrown here
        table = Global.db.ExecuteQuery("SELECT * FROM vm_manager");

    Some of the data that gets returned from this query is as follows: http://i.imgur.com/9rlLN.png If anyone has any advice, I'd be grateful.

    EDIT: I'm unable to show the ExecuteQuery function as it resides inside a DLL from the following SQLite wrapper: http://www.codeproject.com/KB/database/cs_sqlitewrapper.aspx

    Read the article

  • Representing Sparse Data in PostgreSQL

    - by Chris S
    What's the best way to represent a sparse data matrix in PostgreSQL? The two obvious methods I see are:

    1. Store the data in a single table with a separate column for every conceivable feature (potentially millions), but with a default value of NULL for unused features. This is conceptually very simple, but I know that with most RDBMS implementations this is typically very inefficient, since the NULL values usually take up some space. However, I read an article (can't find the link, unfortunately) that claimed PG doesn't use storage for NULL values, making it better suited for storing sparse data.

    2. Create separate "row" and "column" tables, as well as an intermediate table to link them and store the value for the column at that row. I believe this is the more traditional RDBMS solution, but there's more complexity and overhead associated with it.

    I also found PostgreDynamic, which claims to better support sparse data, but I don't want to switch my entire database server to a PG fork just for this feature. Are there any other solutions? Which one should I use?
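
    For reference, a minimal sketch of option 2 (the row/column/value layout) with assumed names; only non-NULL cells ever get stored:

        CREATE TABLE matrix_rows (row_id     bigint PRIMARY KEY);
        CREATE TABLE features    (feature_id bigint PRIMARY KEY, name text NOT NULL);
        CREATE TABLE cell_values (
            row_id     bigint NOT NULL REFERENCES matrix_rows (row_id),
            feature_id bigint NOT NULL REFERENCES features (feature_id),
            value      double precision NOT NULL,
            PRIMARY KEY (row_id, feature_id)
        );

        -- Reassemble one sparse row:
        SELECT f.name, v.value
        FROM   cell_values v
        JOIN   features f USING (feature_id)
        WHERE  v.row_id = 42;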

    Read the article

  • Inserting a large volume of data into SQL Server 2005

    - by Manjoor
    We have an application (written in C#) that stores live stock market prices in a database (SQL Server 2005). It inserts about 1 million records in a single day. Now we are adding some more market segments, so the number of records will double (2 million/day). Currently the average insertion rate is about 50 records per second, the maximum is 450 and the minimum is 0. To check certain conditions I have used Service Broker (as an asynchronous trigger) on my price table. It is running fine at this time (about 35% CPU utilization). Now I am planning to create an in-memory dataset of the current stock prices, on which we would like to do some simple calculations. I would like to hear different members' views on this. Please describe how you would deal with such a situation.
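
    As one concrete piece of that, a sketch of how the "current price" snapshot could be pulled from the price table itself, assuming a hypothetical schema Prices(Symbol, QuotedAt, Price); ROW_NUMBER and CTEs are available in SQL Server 2005:

        WITH Latest AS (
            SELECT Symbol, QuotedAt, Price,
                   ROW_NUMBER() OVER (PARTITION BY Symbol ORDER BY QuotedAt DESC) AS rn
            FROM   Prices
        )
        SELECT Symbol, QuotedAt, Price
        FROM   Latest
        WHERE  rn = 1;          -- newest quote per symbol, ready to cache in memory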

    Read the article

  • Can Microsoft store three-valued fields in a single bit?

    - by fenomas
    I'm completely ignorant of SQL/databases, but I was chatting with a friend who does a lot of database work about how some databases use a "boolean" field that can take a value of NULL in addition to true and false. Regarding this, he made a comment along these lines: "To Microsoft's credit, they have never referred to that kind of field as a boolean, they just call it a bit. And it's a true bit - if you have eight or fewer bit fields in a record, it only requires one byte to store them all." Naturally that seems impossible to me - if the field can hold three values you're not going to fit eight of them into a byte. My friend agreed that it seemed odd, but begged ignorance of the low-level internals and said that so far as he knew, such fields can hold three values when viewed from the SQL side, and it does work out to require a byte of storage. I imagine one of us has a wire crossed. Can anyone explain what's really going on here?
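
    A small illustration of what the friend is describing, assuming SQL Server. The bit values are packed eight to a byte in the data row, and whether each column is NULL is tracked in the row's null bitmap, which the row generally carries anyway, so the "third state" does not cost extra space in the data area.

        CREATE TABLE FlagDemo (
            Id    int IDENTITY PRIMARY KEY,
            Flag1 bit NULL, Flag2 bit NULL, Flag3 bit NULL, Flag4 bit NULL,
            Flag5 bit NULL, Flag6 bit NULL, Flag7 bit NULL, Flag8 bit NULL
        );

        -- true, false and "unknown" for three of the eight flags:
        INSERT INTO FlagDemo (Flag1, Flag2, Flag3) VALUES (1, 0, NULL);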

    Read the article

  • Merging two duplicate contacts (ColdFusion)

    - by jil
    This has to do with data integrity. I maintain a ColdFusion database at a small shop that keeps the addresses of different contacts. These contacts sometimes have notes attached to them. When merging two duplicate contacts, one may have been created in 2002 and one in 2008. If the 2002 contact has notes from before 2008, my question is: does it matter if you merge these contacts and keep the 2008 contact's ID number? Would that affect data integrity or create any sort of issue with the notes earlier than 2008? I hope I've accurately described my scenario, as I am not familiar with the proper technical terms. I really appreciate the help!
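
    As long as every note row is re-pointed at the surviving ID before the duplicate is removed, the note dates themselves are usually unaffected by which ID is kept. A T-SQL-flavored sketch with hypothetical table and column names (contacts, notes, contact_id), not the shop's actual schema:

        DECLARE @keep_id int;
        DECLARE @dupe_id int;
        SET @keep_id = 1234;   -- the contact record being kept (hypothetical ID)
        SET @dupe_id = 5678;   -- the duplicate being merged away (hypothetical ID)

        UPDATE notes SET contact_id = @keep_id WHERE contact_id = @dupe_id;
        DELETE FROM contacts WHERE contact_id = @dupe_id;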

    Read the article

  • Oracle TNS problems?

    - by persistence
    I have an error. My PL/SQL Developer says my Oracle database cannot find the service descriptor, but when I check the listener I get this:

        LSNRCTL> start
        Starting tnslsnr: please wait...
        Service OracleOraDb10g_home1TNSListener already running.
        TNS-12560: TNS:protocol adapter error
        TNS-00530: Protocol adapter error

        LSNRCTL> status
        Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP
        TNS-12541: TNS:no listener
        TNS-12560: TNS:protocol adapter error
        TNS-00511: No listener
        32-bit Windows Error: 61: Unknown error

    I have a deadline this evening. Please help.

    Read the article

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTs), but mostly queries for one or more keywords. I have read "Tagsystems: performance tests", which is MySQL based, and 2NF appears to be a good method for implementing this; however, I was wondering if anyone had experience doing this with SQL Server 2008 and very large datasets. I am likely to initially have 1 million key fields, each of which could have up to 50 keywords. Would a structure of (keyfield, keyword1, keyword2, ..., keyword50) be the best solution, or would two tables, KeyFields(keyid, keyfield) related 1:M to Keywords(keyid, keyword), be a better idea if my queries are mostly going to be looking for results that have one or more keywords?
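
    A sketch of the two-table layout with assumed names, plus the kind of query it is built for; with an index on the keyword column, "one or more keywords" becomes an index seek instead of a scan across 50 columns:

        CREATE TABLE KeyFields (
            keyid    int IDENTITY PRIMARY KEY,
            keyfield nvarchar(200) NOT NULL
        );

        CREATE TABLE KeyFieldKeywords (
            keyid   int           NOT NULL REFERENCES KeyFields (keyid),
            keyword nvarchar(100) NOT NULL,
            PRIMARY KEY (keyid, keyword)
        );
        CREATE INDEX IX_KeyFieldKeywords_keyword ON KeyFieldKeywords (keyword);

        -- Key fields matching ANY of the requested keywords:
        SELECT DISTINCT k.keyid, k.keyfield
        FROM   KeyFields k
        JOIN   KeyFieldKeywords kw ON kw.keyid = k.keyid
        WHERE  kw.keyword IN (N'alpha', N'beta');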

    Read the article

  • Methods for ensuring security between users in multi-user applications

    - by Emilio
    I'm writing a multiuser application (.NET - C#) in which each user's data is separated from the others' and there is no data that's common between users. It's critical to ensure that no user has access to another user's data. What are some approaches for implementing security at the database level and/or in the application architecture to accomplish this? For example (and this is totally made up; I'm not suggesting it's a good or bad approach), including a userID column in all data tables might be an approach. I'm developing the app in C# (ASP.NET) and SQL Server 2008. I'm looking for options that are either native to the tools I'm using or general patterns.
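
    To make the userID-column idea concrete: SQL Server 2008 has no built-in row-level security, so one common pattern is an owner column on every table plus per-user views (or parameterized queries) that filter on it. A sketch with assumed object names, including a hypothetical Users table that maps logins to user IDs:

        CREATE TABLE dbo.Users (
            UserId    int IDENTITY PRIMARY KEY,
            LoginName sysname NOT NULL UNIQUE
        );
        CREATE TABLE dbo.Documents (
            DocumentId  int IDENTITY PRIMARY KEY,
            OwnerUserId int NOT NULL REFERENCES dbo.Users (UserId),
            Title       nvarchar(200) NOT NULL
        );
        GO
        CREATE VIEW dbo.MyDocuments
        AS
        SELECT d.DocumentId, d.Title
        FROM   dbo.Documents d
        JOIN   dbo.Users u ON u.UserId = d.OwnerUserId
        WHERE  u.LoginName = SUSER_SNAME();   -- current SQL login; app-level auth would pass a user id instead
        GO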

    Read the article

  • Server authorization with MD5 and SQL.

    - by Charles
    I currently have a SQL database of passwords stored in MD5. The server needs to generate a unique key and send it to the client. The client will use that key as a salt, hash it together with the password, and send the result back to the server. The only problem is that the SQL DB already has the passwords in MD5. Therefore, for this to work, I would have to MD5 the password client side, then MD5 that again with the salt. Am I doing this wrong? It doesn't seem like a proper solution. Any information is appreciated.
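
    For what it's worth, the double-hash scheme can at least be verified server side without storing anything new. This is only a sketch assuming SQL Server, a hypothetical Users(Username, PasswordMd5) table holding the lowercase MD5 hex string, and a client that sends md5(stored_md5_hex + salt) as lowercase hex; every name and value here is an assumption:

        DECLARE @Username   nvarchar(100);
        DECLARE @Salt       varchar(32);
        DECLARE @ClientHash varchar(32);
        SET @Username   = N'alice';                             -- hypothetical login
        SET @Salt       = 'a1b2c3d4e5f60718';                   -- the key the server issued
        SET @ClientHash = '0123456789abcdef0123456789abcdef';   -- what the client sent back

        SELECT CASE
                 WHEN LOWER(CONVERT(varchar(32), HASHBYTES('MD5', u.PasswordMd5 + @Salt), 2)) = @ClientHash
                 THEN 1 ELSE 0
               END AS IsValid
        FROM   Users u
        WHERE  u.Username = @Username;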

    Read the article

  • Free SQL script to get a list of countries, provinces/states and their cities

    - by reggie
    I am working on a registration page for a PHP website with MySQL as the backend database. I need a SQL script to insert the list of countries with their associated provinces, and the provinces with their associated cities. I need all the countries, provinces and cities in the world, related to each other. I can get individual lists of countries, provinces and cities, but there is no list that relates them together. Any help appreciated. Thanks in advance.
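
    Whatever data set ends up being used (GeoNames is one commonly cited source), the script would be populating a schema along these lines; the table and column names are assumptions, MySQL syntax:

        CREATE TABLE countries (
            country_id INT AUTO_INCREMENT PRIMARY KEY,
            name       VARCHAR(100) NOT NULL
        );
        CREATE TABLE provinces (
            province_id INT AUTO_INCREMENT PRIMARY KEY,
            country_id  INT NOT NULL,
            name        VARCHAR(100) NOT NULL,
            FOREIGN KEY (country_id) REFERENCES countries (country_id)
        );
        CREATE TABLE cities (
            city_id     INT AUTO_INCREMENT PRIMARY KEY,
            province_id INT NOT NULL,
            name        VARCHAR(100) NOT NULL,
            FOREIGN KEY (province_id) REFERENCES provinces (province_id)
        );

        -- Sample rows showing how the three levels chain together:
        INSERT INTO countries (name) VALUES ('Canada');
        INSERT INTO provinces (country_id, name) VALUES (1, 'Ontario');
        INSERT INTO cities    (province_id, name) VALUES (1, 'Toronto');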

    Read the article

  • SQL Server - Query Short-Circuiting?

    - by Sam Schutte
    Do T-SQL queries in SQL Server support short-circuiting? For instance, I have a situation where I have two databases and I'm comparing data between two tables to match and copy some info across. In one table, the "ID" field will always have leading zeros (such as "000000001234"), and in the other table the ID field may or may not have leading zeros (it might be "000000001234" or "1234"). So my query to match the two is something like:

        select * from table1 where table1.ID LIKE '%1234'

    To speed things up, I'm thinking of adding an OR before the LIKE that just says table1.ID = table2.ID, to handle the case where both IDs have the padded zeros and are equal. Will doing so speed up the query by matching items on the "=" and not evaluating the LIKE for every single row (will it short-circuit and skip the LIKE)?
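
    For reference, the rewritten predicate would look roughly like this (a sketch assuming both ID columns are character types and the tables are joined on the ID match):

        SELECT t1.*
        FROM   table1 t1
        JOIN   table2 t2
          ON   t1.ID = t2.ID                  -- cheap exact match when both sides are padded
           OR  t1.ID LIKE '%' + t2.ID;        -- fallback for the un-padded form

        -- Note: SQL Server makes no guarantee about the order in which predicates are
        -- evaluated, so this is a logical rewrite rather than a promised short-circuit.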

    Read the article

  • What stage of normalization is this? (moving repeating data into a separate table)

    - by Sergio
    Hi there, I have noticed that when designing a database I tend to shift any repeating sets of data into a separate table. For example, say I had a table of people, with each person living in a state. I would then move these repeating states into a separate table and reference them with foreign keys. However, what if I was not storing any more data about states? I would then have a table with just StateID and State in it. Is this action correct? State is dependent on the primary key of the users table, so does shifting it into its own table help with anything? Thanks,
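
    For clarity, a sketch of the before/after layouts being compared (table and column names assumed):

        -- Before: the state name is repeated on every person row.
        CREATE TABLE People (
            PersonId int IDENTITY PRIMARY KEY,
            Name     nvarchar(100) NOT NULL,
            State    nvarchar(50)  NOT NULL          -- 'Ohio' stored again for every Ohioan
        );

        -- After: the repeating value moves to a lookup table and is referenced by key.
        CREATE TABLE States (
            StateId int IDENTITY PRIMARY KEY,
            State   nvarchar(50) NOT NULL UNIQUE
        );
        CREATE TABLE PeopleNormalized (
            PersonId int IDENTITY PRIMARY KEY,
            Name     nvarchar(100) NOT NULL,
            StateId  int NOT NULL REFERENCES States (StateId)
        );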

    Read the article

  • Does DB2 have an "insert or update" statement?

    - by Mikael Eriksson
    From my code (Java) I want to ensure that a row exists in the database (DB2) after my code is executed. My code currently does a select, and if no result is returned it does an insert. I really don't like this code since it exposes me to concurrency issues when running in a multi-threaded environment. What I would like to do is put this logic in DB2 instead of in my Java code. Does DB2 have an "insert or update" statement, or anything like it that I can use? For example:

        insertupdate into mytable values ('myid')

    Another way of doing it would probably be to always do the insert and catch "SQL code -803, primary key already exists", but I would like to avoid that if possible.
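
    DB2 does have a MERGE statement (DB2 9 and later) that covers the "make sure the row exists" case in a single statement. A sketch against the table from the question, assuming its key column is called id:

        MERGE INTO mytable AS t
        USING (VALUES ('myid')) AS src (id)
            ON t.id = src.id
        WHEN NOT MATCHED THEN
            INSERT (id) VALUES (src.id);
        -- Add a WHEN MATCHED THEN UPDATE ... clause if existing rows should also be changed.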

    Read the article

  • Analysis Services with a non-normalized table

    - by Uwe
    I have a table with several million rows. Each row represents a user session. There is a column called user which is not unique; there can be multiple sessions per user. I want to use Analysis Services to get additional properties per user. Example: how many (unique!) users had a session longer than x minutes? How is that possible without changing the database? Note: there is no lookup table and I cannot create one. What I am able to do at the moment is ask how many sessions were longer than x minutes.
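
    In plain SQL terms the question is a distinct count over a filtered set; in Analysis Services the same thing is usually modeled as a distinct-count measure on the user column. A sketch with assumed table and column names (Sessions, session_start, session_end) and x = 30 minutes:

        SELECT COUNT(DISTINCT [user]) AS UsersWithLongSessions
        FROM   Sessions
        WHERE  DATEDIFF(minute, session_start, session_end) > 30;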

    Read the article

  • How to handle per-request DB transactions in ASP.NET MVC?

    - by Dario Solera
    I'm using SubSonic 3.0 (SimpleRepository) to handle database access in my ASP.NET MVC 1.0 application. It would be nice to handle a transaction for every web request, committing if everything went smoothly and rolling back in case of an exception. Is this possible? If so, how? I know this topic has been discussed many times, but I just couldn't find a satisfactory answer. I have built my own solution (create a TransactionScope in the controller, then commit/roll back in OnActionExecuted), but it turns out to be very unreliable.

    Read the article

  • MS SQL full-text search vs. LIKE expression

    - by Marks
    Hi. I'm currently looking for a way to search a big database (500 MB - 10 GB or more, across 10 tables) with a lot of different fields (nvarchars and bigints). Many of the fields that should be searched are not in the same table. An example: a search for '5124 Peter' should return all items that

    - have an ID with 5124 in it,
    - have 'Peter' in the title or description,
    - have an item type ID with 5124 in it,
    - were created by a user named 'peter' or a user whose ID has 5124 in it,
    - were created by a user with '5124' or 'peter' in his street address.

    How should I do the search? I read that MS SQL's full-text search is a lot more performant than a query with the LIKE keyword, and I think the syntax is clearer, but I believe it can't search bigint (ID) values, and I read that it has performance problems with indexing and therefore slows down inserts to the DB. In my project there will be more inserting than reading, so this could matter. Thanks in advance, Marks
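
    One common split, sketched here with assumed table and column names: let full-text handle the text columns (which requires a full-text index) and keep the numeric IDs in ordinary predicates, then combine the results.

        SELECT i.ItemId
        FROM   Items i
        WHERE  CONTAINS((i.Title, i.Description), N'"Peter"')        -- needs a full-text index on Items
        UNION
        SELECT i.ItemId
        FROM   Items i
        WHERE  CAST(i.ItemId AS varchar(20))     LIKE '%5124%'        -- numeric columns stay outside full-text
           OR  CAST(i.ItemTypeId AS varchar(20)) LIKE '%5124%';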

    Read the article
