Search Results

Search found 27519 results on 1101 pages for 'sql learner'.

Page 593/1101 | < Previous Page | 589 590 591 592 593 594 595 596 597 598 599 600  | Next Page >

  • LINQtoSQL Custom Constructor off Partial Class?

    - by sah302
    Hi all, I read this question here: http://stackoverflow.com/questions/82409/is-there-a-way-to-override-the-empty-constructor-in-a-class-generated-by-linqtosq

    Typically my constructor would look like:

        public User(String username, String password, String email, DateTime birthday, Char gender)
        {
            this.Id = Guid.NewGuid();
            this.DateCreated = this.DateModified = DateTime.Now;
            this.Username = username;
            this.Password = password;
            this.Email = email;
            this.Birthday = birthday;
            this.Gender = gender;
        }

    However, as read in that question, you are supposed to use the partial method OnCreated() to assign values instead of overriding the default constructor. Okay, so I got this:

        partial void OnCreated()
        {
            this.Id = Guid.NewGuid();
            this.DateCreated = this.DateModified = DateTime.Now;
            this.Username = username;
            this.Password = password;
            this.Email = email;
            this.Birthday = birthday;
            this.Gender = gender;
        }

    However, this gives me two errors: "Partial methods must be declared private" and "Partial methods must have empty method bodies." Alright, I change it to Private Sub OnCreated() to remove both of those errors. However, I am still stuck: how can I pass it values as I would with a normal custom constructor? Also, I am doing this in VB (I converted it since I know most people know/prefer C#), so would that have an effect on this?

    Read the article

  • Why is RAISERROR misspelled? Or is it not?

    - by Jason
    Why isn't RAISERROR spelled RAISEERROR? Where is the second E? I could understand if it were some ancient keyword length constraint, but I wouldn't expect it to be a nine-character limit. Is RAIS or RROR a technical word such that "raise-error" is just a misreading? Are its (immediate) origins in a different language? I've searched Google but am not finding much on the subject.

    Read the article

  • PL/SQL - How to pull data from 3 tables based on latest created date

    - by Nancy
    Hello, I'm hoping someone can help me as I've been stuck on this problem for a few days now. Basically I'm trying to pull data from 3 tables in Oracle: 1) Orders table, 2) Vendor table and 3) Master Data table. Here's what the 3 tables look like:

    Table 1: BIZ_DOC2 (Orders table)
        OBJECTID (unique key)
        UNIQUE_DOC_NAME (document name, i.e. ORD-005)
        CREATED_AT (date the order was created)

    Table 2: UDEF_VENDOR (Vendors table)
        PARENT_OBJECT_ID (this matches up to the OBJECTID in the Orders table)
        VENDOR_OBJECT_NAME (this is the name of the vendor, i.e. Acme)

    Table 3: BIZ_UNIT (Master Data table)
        PARENT_OBJECT_ID (this matches up to the OBJECTID in the Orders table)
        BIZ_UNIT_OBJECT_NAME (this is the name of the business unit, i.e. widget A, widget B)

    Note: the Vendors table and the Master Data table do not have a link between them except through the Orders table. I can join all of the data from the tables and it looks something like this, before selecting the latest order date:

        ORD-005 | Widget A | Acme | 3/14/10
        ORD-005 | Widget B | Acme | 3/14/10
        ORD-004 | Widget C | Acme | 3/10/10

    Ideally I'd like to return the latest order for each vendor. However, each order may contain multiple business units (e.g. types of widgets), so if a vendor's latest record is ORD-005 and the order contains 2 business units, here's what the result set should look like, by the columns UNIQUE_DOC_NAME, BIZ_UNIT_OBJECT_NAME, VENDOR_OBJECT_NAME, CREATED_AT, after selecting by latest order date:

        ORD-005 | Widget A | Acme | 3/14/10
        ORD-005 | Widget B | Acme | 3/14/10

    I tried using SELECT MAX and several variations of sub-queries but I just can't seem to get it working. Any help would be hugely appreciated!
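
    One hedged sketch of a possible approach (not from the original post), assuming the table and column names described above and Oracle's analytic functions: rank each joined row by CREATED_AT within its vendor, then keep only the top-ranked rows, which preserves every business-unit row of that vendor's latest order.

        SELECT unique_doc_name, biz_unit_object_name, vendor_object_name, created_at
        FROM (
            SELECT d.unique_doc_name,
                   b.biz_unit_object_name,
                   v.vendor_object_name,
                   d.created_at,
                   -- every row sharing the vendor's latest CREATED_AT gets rank 1
                   RANK() OVER (PARTITION BY v.vendor_object_name
                                ORDER BY d.created_at DESC) AS rn
            FROM   biz_doc2    d
            JOIN   udef_vendor v ON v.parent_object_id = d.objectid
            JOIN   biz_unit    b ON b.parent_object_id = d.objectid
        )
        WHERE rn = 1;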

    Read the article

  • Saving data to server with user accounts.

    - by AKRamkumar
    Ok, so for an app I am making, I want the user to be able to save data online. On my website, I will provide a web server with tables of UserName/Password/SaveData. How can I do this without overloading the server? How can I guarantee security? Is there a design pattern for this, or a better way of doing it? This is going to be a free application, available to the public, and I would like their settings to be available no matter which computer they are using. I am using MEF for plugins, so is there a way I can save plugin data as well?

    Read the article

  • Transactional isolation level needed for safely incrementing ids

    - by Knut Arne Vedaa
    I'm writing a small piece of software that is to insert records into a database used by a commercial application. The unique primary keys (ids) in the relevant table(s) are sequential, but do not seem to be set to "auto increment". Thus, I assume, I will have to find the largest id, increment it and use that value for the record I'm inserting. In pseudo-code for brevity:

        id = select max(id) from some_table
        id++
        insert into some_table values(id, othervalues...)

    Now, if another thread started the same transaction before the first one finished its insert, you would get two identical ids and a failure when trying to insert the last one. You could check for that failure and retry, but a simpler solution might be setting an isolation level on the transaction. For this, would I need SERIALIZABLE or a lower level? Additionally, is this, generally, a sound way of solving the problem? Are there any other ways of doing it?
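
    One alternative that is often suggested (a sketch only, assuming SQL Server syntax since the actual database is not stated; some_table and other_column stand in for the real table and its remaining columns): take the MAX and perform the INSERT in a single statement, with lock hints that keep a second transaction from reading the same maximum until the first one commits.

        BEGIN TRANSACTION;

        -- UPDLOCK + HOLDLOCK serialize concurrent readers of MAX(id) until commit
        INSERT INTO some_table (id, other_column)
        SELECT COALESCE(MAX(id), 0) + 1, 'other value'
        FROM   some_table WITH (UPDLOCK, HOLDLOCK);

        COMMIT;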

    Read the article

  • Unique identifiers for users

    - by Christopher McCann
    If I have a table of a hundred users, normally I would just set up an auto-increment userID column as the primary key. But if suddenly we have a million users or 5 million users, then that becomes really difficult, because I would want to start becoming more distributed, in which case an auto-increment primary key would be useless as each node would be creating the same primary keys. Is the solution to this to use natural primary keys? I am having a real hard time thinking of a natural primary key for this bunch of users. The problem is they are all young people, so they do not have national insurance numbers or any other unique identifier I can think of. I could create a multi-column primary key, but there is still a chance, however minuscule, of duplicates occurring. Does anyone know of a solution? Thanks
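
    A commonly used alternative to both auto-increment and natural keys in a distributed setup is a UUID/GUID surrogate key, which any node can generate without coordination. A minimal sketch, assuming MySQL since the question does not name a database (SQL Server would use a UNIQUEIDENTIFIER column with NEWID() instead):

        -- each node can generate its own ids with no central counter
        CREATE TABLE users (
            user_id CHAR(36)     NOT NULL PRIMARY KEY,
            name    VARCHAR(100) NOT NULL
        );

        INSERT INTO users (user_id, name)
        VALUES (UUID(), 'example user');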

    Read the article

  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to know more detailed logic about how Informix IDS and SE's optimizers decide what's the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks 1st. What are the rest of them?

    If I'm not mistaken, Informix's ROWID is a SERIAL(INT), which allows for a max of 2GB nrows, or maybe it uses INT8 for TBs of nrows? However, I think Oracle uses HEX values for ROWID. Too bad ROWID can't often be used, since a row's ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps it could be used for implementing the query progress idea I mentioned in my "Begin viewing query results before query completes" question?

    For some reason, I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, display its progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at any time, and start displaying the qualifying rows as they are being put into the current list while it continues searching. This is just one example; perhaps we could think of other neat/useful features, since the ingredients are more or less there.

    Perhaps we could fine-tune each query with more granularity than currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence to them. So, being able to precisely control a query, not merely "hint-influence" it, is what's needed, and for that it would be necessary to know how the optimizer's logic is programmed. We could then have dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc.

    Read the article

  • TooManyRowsAffectedException with encrypted triggers

    - by Jon Masters
    I'm using NHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside of them. Is there another way to get around the TooManyRowsAffectedException that is thrown on commit?

    Update 1: So far the only way I've gotten around the issue is to step around the .Save routine with

        var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order");
        query.SetString("Notes", orderHeader.Notes);
        query.SetString("Status", orderHeader.OrderStatus);
        query.SetInt32("Order", orderHeader.OrderHeaderId);
        query.ExecuteUpdate();

    It feels dirty and is not easy to extend, but it doesn't crater.

    Read the article

  • Chain LINQ IQueryable, and end with Stored Procedure

    - by Alex
    I'm chaining search criteria in my application through IQueryable extension methods, e.g.:

        public static IQueryable<Fish> AtAge(this IQueryable<Fish> fish, Int32 age)
        {
            return fish.Where(f => f.Age == age);
        }

    However, I also have a full-text search stored procedure:

        CREATE PROCEDURE [dbo].[Fishes_FullTextSearch]
            @searchtext nvarchar(4000),
            @limitcount int
        AS
        SELECT Fishes.*
        FROM Fishes
        INNER JOIN CONTAINSTABLE(Fishes, *, @searchtext, @limitcount) AS KEY_TBL
            ON Fishes.Id = KEY_TBL.[KEY]
        ORDER BY KEY_TBL.[Rank]

    The stored procedure obviously doesn't return IQueryable; however, is it possible to somehow limit the result set for the stored procedure using IQueryables? I'm envisioning something like .AtAge(5).AboveWeight(100).Fishes_FulltextSearch("abc"). In this case, the full-text search should execute on a smaller subset of my Fishes table (narrowed by Age and Weight). Is something like this possible? Sample code?
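
    One approach that is sometimes suggested (a sketch only, not from the original question): reshape the full-text search as an inline table-valued function rather than a stored procedure, since LINQ to SQL can map table-valued functions and compose further Where clauses on top of them. ORDER BY is generally not allowed inside an inline TVF, so the rank is exposed as a column for the caller to sort on; the function name below is hypothetical.

        CREATE FUNCTION [dbo].[Fishes_FullTextSearchTvf]
        (
            @searchtext nvarchar(4000),
            @limitcount int
        )
        RETURNS TABLE
        AS
        RETURN
        (
            -- expose the full-text rank so callers can ORDER BY it themselves
            SELECT Fishes.*, KEY_TBL.[RANK] AS FullTextRank
            FROM   Fishes
                   INNER JOIN CONTAINSTABLE(Fishes, *, @searchtext, @limitcount) AS KEY_TBL
                   ON Fishes.Id = KEY_TBL.[KEY]
        );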

    Read the article

  • select rows with column that is not null?

    - by fayer
    By default, one column in my MySQL table is NULL. I want to select some rows, but only if the field value in that column is not NULL. What is the correct way of writing it?

        $query = "SELECT * FROM names WHERE id = '$id' AND name != NULL";

    Is this correct?
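
    It is not: in SQL, a comparison against NULL with = or != never evaluates to true, so the condition has to use IS NOT NULL instead. A minimal sketch with the same table and columns as above:

        SELECT *
        FROM   names
        WHERE  id = '$id'
          AND  name IS NOT NULL;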

    Read the article

  • Converting delimited string to multiple values in mysql

    - by epo
    I have a MySQL legacy table which contains a client identifier and a list of items, the latter as a comma-delimited string, e.g. "xyz001", "foo,bar,baz". This is legacy stuff and the user insists on being able to edit a comma-delimited string. They now have a requirement for a report table with the above broken into separate rows, e.g.

        "xyz001", "foo"
        "xyz001", "bar"
        "xyz001", "baz"

    Breaking the string into substrings is easily doable and I have written a procedure to do this by creating a separate table, but that requires triggers to deal with deletes, updates and inserts. This query is required rarely (say once a month) but has to be absolutely up to date when it is run, so e.g. the overhead of triggers is not warranted and scheduled tasks to create the table might not be timely enough. Is there any way to write a function to return a table or a set so that I can join the identifier with the individual items on demand?
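
    MySQL stored functions cannot return result sets, but the split can be done on demand in a single query by joining against a small numbers derived table and using SUBSTRING_INDEX. A sketch only, with hypothetical names: the legacy table is assumed to be called client_items with columns client_id and item_list, and no list is assumed to hold more than 10 items.

        SELECT c.client_id,
               SUBSTRING_INDEX(SUBSTRING_INDEX(c.item_list, ',', n.n), ',', -1) AS item
        FROM   client_items AS c
        JOIN   (SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
                UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8
                UNION ALL SELECT 9 UNION ALL SELECT 10) AS n
               -- keep n only up to the number of comma-separated items in the list
               ON n.n <= 1 + LENGTH(c.item_list) - LENGTH(REPLACE(c.item_list, ',', ''));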

    Read the article

  • Split function in where clause

    - by abhishek-khandelwal
    Hello friends, I am using the following query in LINQ. In the Product table, data of the following type is stored: abc-def, bcd=fgh, abc-xyz.

        var query = from prod in db.Product
                    join cat in db.category on prod.categoryId equals cat.categoryID
                    where prod.productName.Split('-')[0] == "abc"
                    select prod;

    But with that query I get a problem. Please give some suggestion on how to split in the where clause.

    Read the article

  • When is referential integrity not appropriate?

    - by Curtis Inderwiesche
    I understand the need for referential integrity to limit the specific values allowed on entry, or possibly to prevent rows from being removed when deletion is requested. However, I am unclear on a valid use case that would justify not always using this mechanism. I guess this falls into several sub-questions: When is referential integrity not appropriate? Is it appropriate to have fields containing multiple and/or possibly incomplete subsets of a foreign key's list? Typically, should this be a schema structure design decision or an interface design decision? (Or possibly neither, or both.) Thoughts?

    Read the article

  • How to debug issues with differing execution times in different contexts.

    - by Dave
    The following question seems to be haunting me more consistently than most other questions recently. What kinds of things would you suggest I suggest that they look for when trying to debug "performance issues" like this?

    Ok, get this: running this in Query Analyzer takes < 1 second:

        exec usp_MyAccount_Allowance_Activity '1/1/1900', null, 187128

    Debugging locally, this takes 10 seconds (same parameters):

        DataSet allowanceBalance = SqlHelper.ExecuteDataset(
            WebApplication.SQLConn(),
            CommandType.StoredProcedure,
            "usp_MyAccount_Allowance_Activity",
            Params);
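
    Two things that are often suggested for the "fast in the query window, slow from ADO.NET" pattern (a hedged sketch, not from the original post): check whether a parameter-sniffed cached plan is being reused, and check whether the application's session settings differ from the query window's, since a setting such as ARITHABORT changing means a different cached plan is used.

        -- 1. Compare against a plan compiled fresh for these exact parameter values:
        EXEC usp_MyAccount_Allowance_Activity '1/1/1900', NULL, 187128 WITH RECOMPILE;

        -- 2. Reproduce the application's connection settings in the query window;
        --    ADO.NET connections typically run with SET ARITHABORT OFF:
        SET ARITHABORT OFF;
        EXEC usp_MyAccount_Allowance_Activity '1/1/1900', NULL, 187128;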

    Read the article

  • why this left join query failed to load all the data in left table ?

    - by lzyy
    users table:

        +----+----------+
        | id | username |
        +----+----------+
        |  1 | tom      |
        |  2 | jelly    |
        |  3 | foo      |
        |  4 | bar      |
        +----+----------+

    groups table:

        +----+---------+---------+
        | id | user_id | title   |
        +----+---------+---------+
        |  2 |       1 | title 1 |
        |  4 |       1 | title 2 |
        +----+---------+---------+

    The query:

        SELECT users.username, users.id, count(groups.title) as group_count
        FROM users
        LEFT JOIN groups ON users.id = groups.user_id

    Result:

        +----------+----+-------------+
        | username | id | group_count |
        +----------+----+-------------+
        | tom      |  1 |           2 |
        +----------+----+-------------+

    Where is the rest of the users' info? The result is the same as an inner join; shouldn't a left join return all of the left table's data? PS: I'm using MySQL.
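
    COUNT() is an aggregate, and without a GROUP BY it collapses the whole joined result into a single row, which is why only one user comes back. A sketch of the grouped version, using the tables shown above:

        SELECT users.username,
               users.id,
               COUNT(groups.title) AS group_count   -- counts 0 for users with no groups
        FROM   users
        LEFT JOIN groups ON users.id = groups.user_id
        GROUP BY users.id, users.username;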

    Read the article

  • Why would I do an inner join on a non-distinct field?

    - by froadie
    I just came across a query that does an inner join on a non-distinct field. I've never seen this before and I'm a little confused about this usage. Something like:

        SELECT distinct all, my, stuff
        FROM myTable
        INNER JOIN myOtherTable ON myTable.nonDistinctField = myOtherTable.nonDistinctField
        (WHERE some filters here...)

    I'm not quite sure what my question is or how to phrase it, or why exactly this confuses me, but I was wondering if anyone could explain why someone would need to do an inner join on a non-distinct field and then select only distinct values. Is there ever a legitimate use of an inner join on a non-distinct field? What would be the purpose? And if there is a legitimate reason for such a query, can you give examples of where it would be used?
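
    One common legitimate case is joining on a column that is a non-unique key on both sides, for instance two tables that each hold several rows per shared business identifier; the join then multiplies matching rows and DISTINCT collapses the duplicates the caller does not care about. A tiny hypothetical illustration (the table and column names here are invented):

        -- people and phone_numbers both allow several rows per household_code
        SELECT DISTINCT p.household_code
        FROM   people p
        INNER JOIN phone_numbers n ON n.household_code = p.household_code
        WHERE  n.area_code = '555';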

    Read the article

  • MySQL slow query

    - by andrhamm
        SELECT items.item_id, items.category_id, items.title, items.description,
               items.quality, items.type, items.status, items.price, items.posted,
               items.modified, zip_code.state_prefix, zip_code.city, books.isbn13,
               books.isbn10, books.authors, books.publisher
        FROM ( ( items
               LEFT JOIN bookitems ON items.item_id = bookitems.item_id )
               LEFT JOIN books ON books.isbn13 = bookitems.isbn13 )
               LEFT JOIN zip_code ON zip_code.zip_code = items.item_zip
        WHERE items.rid = $rid

    I am running this query to get the list of a user's items and their location. The zip_code table has over 40k records and this might be the issue. It currently takes up to 15 seconds to return a list of about 20 items! What can I do to make this query more efficient?
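
    Running EXPLAIN on the statement will show which joined tables are being scanned in full; 15 seconds for 20 rows usually means one or more of the filter/join columns has no index. A sketch of indexes that would typically be checked for or added (assuming they do not already exist; the index names are invented here):

        CREATE INDEX idx_items_rid       ON items (rid);
        CREATE INDEX idx_bookitems_item  ON bookitems (item_id);
        CREATE INDEX idx_books_isbn13    ON books (isbn13);
        CREATE INDEX idx_zip_code_zip    ON zip_code (zip_code);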

    Read the article

  • Concatenating rows from different tables into one field

    - by Markus
    Hi! In a project using an MSSQL 2005 database, we are required to log all data-manipulating actions in a logging table. One field in that table is supposed to contain the row before it was changed. We have a lot of tables, so I was trying to write a stored procedure that would gather up all the fields in one row of a table that was given to it, concatenate them somehow, and then write a new log entry with that information. I already tried using FOR XML PATH and it worked, but the client doesn't like the XML notation; they want a CSV field. Here's what I had with FOR XML PATH:

        DECLARE @foo varchar(max);
        SET @foo = (SELECT * FROM table WHERE id = 5775 FOR XML PATH(''));

    The values for "table", "id" and the actual id (here: 5775) would later be passed in via the call to the stored procedure. Is there any way to do this without getting XML notation and without knowing in advance which fields are going to be returned by the SELECT statement?
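
    One way this is commonly tackled is to build the column list at runtime from INFORMATION_SCHEMA.COLUMNS and concatenate the row with dynamic SQL. A sketch only, not a drop-in stored procedure: @tableName, @keyColumn and @keyValue below are hypothetical parameters standing in for what the real procedure would receive.

        DECLARE @tableName sysname;
        DECLARE @keyColumn sysname;
        DECLARE @keyValue  int;
        DECLARE @cols      nvarchar(max);
        DECLARE @sql       nvarchar(max);

        SET @tableName = 'SomeTable';
        SET @keyColumn = 'id';
        SET @keyValue  = 5775;

        -- build "ISNULL(CONVERT(nvarchar(max), [col1]), '') + ',' + ..." for every column
        SELECT @cols = ISNULL(@cols + ' + '','' + ', '')
                     + 'ISNULL(CONVERT(nvarchar(max), ' + QUOTENAME(COLUMN_NAME) + '), '''')'
        FROM   INFORMATION_SCHEMA.COLUMNS
        WHERE  TABLE_NAME = @tableName;

        SET @sql = 'SELECT ' + @cols + ' AS csv_row FROM ' + QUOTENAME(@tableName)
                 + ' WHERE ' + QUOTENAME(@keyColumn) + ' = @id';

        EXEC sp_executesql @sql, N'@id int', @id = @keyValue;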

    Read the article

  • Group keywords by site

    - by Skudd
    I am finding a lot of useful help here today, and I really appreciate it. This should be the last one for the day: I have a list of the top 10 keywords per site, sorted by visits, by date. The records need to be sorted as follows (excuse the formatting):

                                2010-05      2010-04
        site1.com
          keyword1              apples       wine
          keyword1 visits       100          12
          keyword2              oranges      water
          keyword2 visits       99           10
        site2.com
          keyword1              blueberry    cornbread
          keyword1 visits       90           100
          keyword2              squares      biscuits
          keyword2 visits       80           99

    Basically what I need to accomplish involves grouping, but I can't seem to figure it out. Am I heading down the right path, or is there another way to achieve this, or is it just impossible?

    Read the article

  • What data structures and algorithms are applied within data warehouse cubes?

    - by Jeff Meatball Yang
    I understand that cubes are optimized data structures for aggregating and "slicing" large amounts of data. I just don't know how they are implemented. I can imagine a lot of this technology is proprietary, but are there any resources that I could use to start implementing my own cube technology? Set theory and lots of math are probably involved (and welcome as suggestions!), but I'm primarily interested in implementations: the data structures and query algorithms. Thanks!

    Read the article
