Search Results


  • rails: date type and GetDate

    - by cbrulak
    This is a follow-up to this question: http://stackoverflow.com/questions/2930256/unique-responses-rails-gem I'm going to create an index based on the user id, the url and a date type. I want a date type (not a datetime type) because I want the day, the 24-hour day, to be part of the index, to avoid duplicating page-view counts on the same day. In other words: a view only counts once per day per visitor. I also want the default value of that column (viewdate) to be the function GETDATE(). This is what I have in my migration:

        execute "ALTER TABLE page_views ADD COLUMN viewdate datetime DEFAULT GETDATE()"

    But the value of viewdate is always empty. What am I missing? (As an aside, any other suggestions for accomplishing this goal?)
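
    A minimal T-SQL sketch of the column being described, assuming SQL Server 2008+ (GETDATE() is a SQL Server function, and a plain DATE type needs 2008 or later); the user_id/url columns are hypothetical:

        -- DATE rather than DATETIME, so the index granularity is one calendar day.
        ALTER TABLE page_views
            ADD viewdate DATE
            CONSTRAINT DF_page_views_viewdate DEFAULT (GETDATE());

        -- A default only fires when the column is omitted from the INSERT, and
        -- adding a nullable column leaves pre-existing rows NULL unless they are
        -- backfilled: one common reason such a column appears "always empty".
        INSERT INTO page_views (user_id, url) VALUES (42, '/home');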

    Read the article

  • ADO.NET parameters from TextBox

    - by Geo Ego
    I'm trying to call a parameterized stored procedure from SQL Server 2005 in my C# Winforms app. I add the parameters from TextBoxes like so (there are 88 of them): cmd.Parameters.Add("@CustomerName", SqlDbType.VarChar, 100).Value = CustomerName.Text; I get the following exception: "System.InvalidCastException: Failed to convert parameter value from a TextBox to a String. ---> System.InvalidCastException: Object must implement IConvertible." The line throwing the error is the one that runs the query: cmd.ExecuteNonQuery(); I also tried using the .ToString() method on the TextBoxes, which seemed pointless anyway, and it threw the same error. Am I passing the parameters incorrectly?

    Read the article

  • MSSQL Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of people's names that (currently) has 35 million rows, and I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses "LIKE" queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions: Does a full-text index work well for proper names? If so, what is the best way to query proper names (CONTAINS, FREETEXT, etc.)? Is there some other system (like Lucene.net) that would be better? Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with it will be preferred. I'm currently using MS SQL 2008.
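
    For reference, a hedged T-SQL sketch of the two full-text query styles mentioned, assuming a full-text index already exists over hypothetical first_name/last_name columns:

        -- CONTAINS does word and prefix matching against the full-text index;
        -- prefix terms must be double-quoted inside the search string.
        SELECT TOP 200 first_name, last_name
        FROM people
        WHERE CONTAINS((first_name, last_name), '"smi*"');

        -- FREETEXT is looser: it applies stemming and thesaurus expansion.
        SELECT TOP 200 first_name, last_name
        FROM people
        WHERE FREETEXT((first_name, last_name), 'smith');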

    Read the article

  • Should I commit or rollback a transaction that creates a temp table, reads, then deletes it?

    - by Triynko
    To select information related to a list of hundreds of IDs, rather than build a huge select statement, I create a temp table, insert the IDs into it, join it with a table to select the rows matching the IDs, then delete the temp table. So this is essentially a read operation, with no permanent changes made to any persistent database tables. I do this in a transaction, to ensure the temp table is deleted when I'm finished. My question is: what happens when I commit such a transaction vs. letting it roll back? Performance-wise, does the DB engine have to do more work to roll back the transaction than to commit it? Is there even a difference, since the only modifications are to a temp table? Related question here, but it doesn't answer my specific case involving temp tables: http://stackoverflow.com/questions/309834/should-i-commit-or-rollback-a-read-transaction
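
    A sketch of the pattern being described, assuming SQL Server (the multi-row VALUES form needs 2008+; target_table is a hypothetical name):

        BEGIN TRANSACTION;

        -- #ids is a local temp table; it vanishes with the session anyway,
        -- but dropping it explicitly keeps tempdb tidy.
        CREATE TABLE #ids (id INT PRIMARY KEY);
        INSERT INTO #ids (id) VALUES (1), (2), (3);  -- hundreds in practice

        SELECT t.*
        FROM target_table t
        JOIN #ids i ON i.id = t.id;

        DROP TABLE #ids;

        -- Nothing persistent was modified, so COMMIT and ROLLBACK both have
        -- very little work to do here.
        COMMIT;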

    Read the article

  • How to read a txt file from the database (line by line)

    - by Ranjana
    I have stored a txt file in a SQL Server database, and I need to read the txt file line by line to get at its content. My code:

        DataTable dtDeleteFolderFile = new DataTable();
        dtDeleteFolderFile = objutility.GetData("GetTxtFileonFileName",
            new object[] { ddlSelectFile.SelectedItem.Text }).Tables[0];
        foreach (DataRow dr in dtDeleteFolderFile.Rows)
        {
            name = dr["FileName"].ToString();
            records = Convert.ToInt32(dr["NoOfRecords"].ToString());
            bytes = (Byte[])dr["Data"];
        }
        FileStream readfile = new FileStream(Server.MapPath("txtfiles/" + name), FileMode.Open);
        StreamReader streamreader = new StreamReader(readfile);
        string line = "";
        line = streamreader.ReadLine();

    But here I have used a FileStream to read from a particular path, whereas I saved the txt file into my database in byte form. How do I read the txt file content using the byte[] value, instead of using the path?

    Read the article

  • Calculate hours difference in datetime

    - by ScG
    I have a series of datetime values. I want to select records with a difference of 2 or more hours between them.

        2010-02-11 08:55:00.000
        2010-02-11 10:45:00.000
        2010-02-11 10:55:00.000
        2010-02-11 12:55:00.000
        2010-02-11 14:52:00.000
        2010-02-11 16:55:00.000
        2010-02-11 17:55:00.000
        2010-02-11 23:55:00.000
        2010-02-12 00:55:00.000
        2010-02-12 02:55:00.000

    Expected (each date is compared with the last date that qualified for the 2-hour difference):

        2010-02-11 08:55:00.000
        2010-02-11 10:55:00.000
        2010-02-11 12:55:00.000
        2010-02-11 16:55:00.000
        2010-02-11 23:55:00.000
        2010-02-12 02:55:00.000

    I am using SQL 2005 or 2008.
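
    Because every row is measured against the last row that was kept, this is a greedy, order-dependent scan rather than a plain self-join; a hedged T-SQL cursor sketch, assuming the values live in a hypothetical readings(ts DATETIME) table:

        DECLARE @last DATETIME, @ts DATETIME;
        DECLARE @keep TABLE (ts DATETIME);

        DECLARE c CURSOR LOCAL FAST_FORWARD FOR
            SELECT ts FROM readings ORDER BY ts;

        OPEN c;
        FETCH NEXT FROM c INTO @ts;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Keep the first row, then any row 2+ hours after the last kept row.
            -- DATEDIFF(MINUTE, ...) is used because DATEDIFF(HOUR, ...) counts
            -- hour-boundary crossings, not elapsed time.
            IF @last IS NULL OR DATEDIFF(MINUTE, @last, @ts) >= 120
            BEGIN
                INSERT INTO @keep (ts) VALUES (@ts);
                SET @last = @ts;
            END
            FETCH NEXT FROM c INTO @ts;
        END
        CLOSE c;
        DEALLOCATE c;

        SELECT ts FROM @keep ORDER BY ts;  -- yields the six expected rows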

    Read the article

  • How do I get the position of a result in the list after an order_by?

    - by Bob Bob
    I'm trying to find an efficient way to find the rank of an object in the database relative to its score. My naive solution looks like this:

        rank = 0
        for q in Model.objects.all().order_by('score'):
            if q.name == 'searching_for_this':
                return rank
            rank += 1

    It should be possible to get the database to do the filtering, using order_by:

        Model.objects.all().order_by('score').filter(name='searching_for_this')

    But there doesn't seem to be a way to retrieve the index from the order_by step after the filter. Is there a better way to do this? (Using Python/Django and/or raw SQL.) My next thought is to pre-compute ranks on insert, but that seems messy.
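
    In raw SQL, the rank can be computed by counting rows with a lower score, which avoids iterating the whole queryset; a hedged sketch assuming the model's table is named app_model and scores are distinct:

        -- 0-based rank of one row within the score ordering.
        SELECT COUNT(*) AS rank
        FROM app_model
        WHERE score < (SELECT score FROM app_model
                       WHERE name = 'searching_for_this');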

    Read the article

  • add Constraint on database with trigger

    - by Am1rr3zA
    Hi, I have 3 tables (Student, Course, student_course_choose, which has a grade field). I defined a view over these 3 tables that gets me the average grade of each student. I want a constraint (enforced with a trigger) on this view (or on the table that needs it) to keep the average of each student between 13 and 18. I read somewhere that I must use a FOR EACH STATEMENT trigger (instead of FOR EACH ROW), because when I decrease some grade of a particular student, his/her average may drop below 13 partway through the statement without that being an error (because later in the same statement I increase the grade of another of his/her courses). How should I write this trigger? (I want to implement aprh for testing the trigger.) Note: I can write it in SQL Server, Oracle or MySQL; it makes no difference to me.
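
    A hedged T-SQL sketch of the statement-level check (SQL Server DML triggers fire once per statement, which matches the requirement); the student_id and grade column names are guesses from the question:

        CREATE TRIGGER trg_check_average
        ON student_course_choose
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            -- Reject the whole statement if, once all of its changes are
            -- applied, any affected student's average falls outside [13, 18].
            IF EXISTS (
                SELECT scc.student_id
                FROM student_course_choose scc
                WHERE scc.student_id IN (SELECT student_id FROM inserted
                                         UNION
                                         SELECT student_id FROM deleted)
                GROUP BY scc.student_id
                HAVING AVG(CAST(scc.grade AS FLOAT)) NOT BETWEEN 13 AND 18
            )
            BEGIN
                RAISERROR('Student average must stay between 13 and 18.', 16, 1);
                ROLLBACK TRANSACTION;
            END
        END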

    Read the article

  • django left join with null

    - by SledgehammerPL
    The model:

        class Product(models.Model):
            name = models.CharField(max_length=128)

            def __unicode__(self):
                return self.name

        class Receipt(models.Model):
            name = models.CharField(max_length=128)
            components = models.ManyToManyField(Product, through='ReceiptComponent')

            class Admin:
                pass

            def __unicode__(self):
                return self.name

        class ReceiptComponent(models.Model):
            product = models.ForeignKey(Product)
            receipt = models.ForeignKey(Receipt)
            quantity = models.FloatField(max_length=9)
            unit = models.ForeignKey(Unit)

            def __unicode__(self):
                return (unicode(self.quantity != 0 and self.quantity or '') + ' '
                        + unicode(self.unit) + ' ' + self.product.genitive)

    The idea: there are components in stock. I'd like to find out which recipes I can make with the components I have. It's not easy, but it is possible: I made an SQL view which gets the solution. But I'm learning Python and Django, so I'd like to do it Django-style ;D The concept of the solution: get the set of recipes which have at least one available component:

        list_of_available_components = ReceiptComponent.objects.filter(
            product__in=list_of_available_products).distinct()
        list_of_related_receipts = Receipt.objects.filter(
            receiptcomponent__in=list_of_available_components).distinct()

    then get the recipes (from list_of_related_receipts) which are missing at least one component:

        list_of_incomplete_recipes = (
            SELECT * FROM drinkbook_receiptcomponent
            LEFT JOIN drinkstore_stock_products USING (product_id)
            WHERE drinkstore_stock_products.stock_id IS NULL
              AND receipt_id IN (SELECT receipt_id
                                 FROM drinkbook_receiptcomponent
                                 JOIN drinkstore_stock_products USING (product_id)))

    then get the recipes (from list_of_related_receipts) which are not in list_of_incomplete_recipes.

    Read the article

  • Trying to use VB to automate some queries. Running into what looks like a string problem

    - by Jeff
    Hi there. I'm using MS Access 2003 and I'm trying to execute a few queries at once using VB. When I write out the query in SQL it works fine, but when I try it from VB it asks me to "Enter Parameter Value" for DEPA, then DND (which are the first few letters of the two strings I have). Here's the code:

        Option Compare Database

        Public Sub RemoveDupelicateDepartments()
            Dim oldID As String
            Dim newID As String
            Dim sqlStatement As String

            oldID = "DND-01"
            newID = "DEPA-04"

            sqlStatement = "UPDATE [Clean student table] SET [HomeDepartment]=" & newID & _
                           " WHERE [HomeDepartment]=" & oldID & ";"
            DoCmd.RunSQL sqlStatement & ""
        End Sub

    It looks to me as though it's taking in the string only up to the '-', then nothing else. I dunno, that's why I'm asking lol. What should my code look like?
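
    For contrast, a hedged look at the SQL that Access actually needs to receive: because the string values are interpolated without quotes, Access parses DEPA-04 as the expression DEPA - 04 and prompts for DEPA and DND as parameters; quoting the literals avoids that:

        UPDATE [Clean student table]
        SET [HomeDepartment] = 'DEPA-04'
        WHERE [HomeDepartment] = 'DND-01';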

    Read the article

  • T-SQL foreign key check constraint

    - by PaN1C_Showt1Me
    When you create a foreign key constraint in a table and you generate the script in MS SQL Management Studio, it looks like this:

        ALTER TABLE T1 WITH CHECK
            ADD CONSTRAINT FK_T1 FOREIGN KEY (project_id) REFERENCES T2 (project_id)
        GO
        ALTER TABLE T1 CHECK CONSTRAINT FK_T1
        GO

    What I don't understand is what purpose the second ALTER with CHECK CONSTRAINT has. Isn't creating the FK constraint enough? Do you have to add the check constraint to ensure referential integrity? Another question: what would it look like if you wrote it directly in the column definition?

        CREATE TABLE T1 (
            my_column INT NOT NULL CONSTRAINT FK_T1 REFERENCES T2 (my_column)
        )

    Isn't this enough?
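
    A hedged sketch of the distinction the scripted form encodes: WITH CHECK on the first statement validates rows that already exist when the constraint is added, while the trailing CHECK CONSTRAINT simply marks the constraint enabled; its counterpart is NOCHECK:

        -- Disable the constraint (new rows are no longer validated):
        ALTER TABLE T1 NOCHECK CONSTRAINT FK_T1;

        -- Re-enable it; the extra WITH CHECK re-validates rows changed in the
        -- meantime, so the optimizer can trust the constraint again:
        ALTER TABLE T1 WITH CHECK CHECK CONSTRAINT FK_T1;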

    Read the article

  • Combine query results from one table with the defaults from another

    - by pulegium
    This is a dumbed-down version of the real table data, so it may look a bit silly.

        Table 1 (users):
            id             INT
            username       TEXT
            favourite_food TEXT
            food_pref_id   INT

        Table 2 (food_preferences):
            id        INT
            food_type TEXT

    The logic is as follows. Let's say I have this in my food preference table:

        1, 'VEGETARIAN'

    and this in the users table:

        1, 'John', NULL, 1
        2, 'Pete', 'Curry', 1

    In which case John defaults to being a vegetarian, but Pete should show up as a person who enjoys curry. Question: is there any way to combine the query into one select statement, so that it would get the default from the preferences table if the favourite_food column is NULL? I can obviously do this in application logic, but it would be nice to offload it to SQL, if possible. The DB is SQLite3...
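
    A hedged sketch of the usual single-statement answer: COALESCE returns its first non-NULL argument and works in SQLite3, so the per-user value can fall back to the preference row's default:

        SELECT u.id,
               u.username,
               COALESCE(u.favourite_food, p.food_type) AS food
        FROM users u
        JOIN food_preferences p ON p.id = u.food_pref_id;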

    Read the article

  • I am not able to drop a foreign key in MySQL (Error 150). Please help

    - by Shantanu Gupta
    I am trying to create a foreign key in my table, but when I execute my query it shows me error 150:

        Error Code : 1005
        Can't create table '.\vts#sql-6ec_1.frm' (errno: 150) (0 ms taken)

    My query to create the foreign key:

        ALTER TABLE `vts`.`tblguardian`
            ADD CONSTRAINT `FK_tblguardian`
            FOREIGN KEY (`GuardianPickPointId`) REFERENCES `tblpickpoint` (`PickPointId`)

    EDIT: Now I am trying to drop this constraint, but it fails again and shows me the same error as when I was trying to create the foreign key:

        ALTER TABLE `vts`.`tblguardian` DROP INDEX `FK_tblguardian`

    Primary key table:

        CREATE TABLE `tblpickpoint` (
          `PickPointId` int(4) NOT NULL auto_increment,
          `PickPointName` varchar(500) default NULL,
          `PickPointLabel` varchar(500) default NULL,
          `PickPointLatLong` varchar(100) NOT NULL,
          PRIMARY KEY (`PickPointId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC

    Foreign key table:

        CREATE TABLE `tblguardian` (
          `GuardianId` int(4) NOT NULL auto_increment,
          `GuardianName` varchar(500) default NULL,
          `GuardianAddress` varchar(500) default NULL,
          `GuardianMobilePrimary` varchar(15) NOT NULL,
          `GuardianMobileSecondary` varchar(15) default NULL,
          `GuardianPickPointId` int(4) default NULL,
          PRIMARY KEY (`GuardianId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
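
    For what it's worth, a hedged note on the syntax involved: in MySQL a foreign key is dropped with DROP FOREIGN KEY rather than DROP INDEX, and the backing index can only be removed afterwards (dropping an index that an active constraint still needs is itself a classic errno 150 cause):

        -- Drop the constraint itself (InnoDB):
        ALTER TABLE `vts`.`tblguardian` DROP FOREIGN KEY `FK_tblguardian`;

        -- Only then, if desired, drop the index that backs it:
        ALTER TABLE `vts`.`tblguardian` DROP INDEX `FK_tblguardian`;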

    Read the article

  • optional search parameters in sql query and rows with null values

    - by glenn.danthi
    Ok, here is my problem. Before I start the description, let me tell you that I have googled a lot, and I am posting this question looking for a good, optimal solution :) I am building a REST service on WCF to get user profiles... the user can filter user profiles by giving something like userProfiles?location=London. Now I have the following method:

        GetUserProfiles(string firstname, string lastname, string age, string location)

    The SQL query string I build is:

        SELECT firstname, lastname, ....
        FROM profiles
        WHERE (firstName LIKE '%{firstname}%')
          AND (lastName LIKE '%{lastName}%')

    ...and so on, with all variables being replaced by a string formatter. The problem with this is that it filters out any row that has a NULL value in firstname, lastname, age or location... doing something like

        (firstName LIKE '%{firstName}%' OR firstName IS NULL)

    would be tedious, and the statement would become unmaintainable! (In this example there are only 4 arguments, but in my actual method there are 10.) What would be the best solution for this? How is this situation usually handled? Database used: MySQL.
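
    A hedged sketch of the usual optional-parameter pattern, written with bound parameters instead of string formatting (which also sidesteps SQL injection); each filter switches itself off when its parameter is NULL, and the :name placeholder style should be adapted to the data access layer in use:

        SELECT firstname, lastname, age, location
        FROM profiles
        WHERE (:firstname IS NULL OR firstname LIKE CONCAT('%', :firstname, '%'))
          AND (:lastname  IS NULL OR lastname  LIKE CONCAT('%', :lastname,  '%'))
          AND (:age       IS NULL OR age       LIKE CONCAT('%', :age,       '%'))
          AND (:location  IS NULL OR location  LIKE CONCAT('%', :location,  '%'));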

    Read the article

  • How to quickly analyse large MDB file

    - by Craig Johnston
    I need to know how to quickly analyse a large MDB file (about 1 GB) to see which tables are causing it to be so big. Is there something that will easily show me a breakdown of which tables are responsible for how much data? I need to know whether it is just this one customer using the application differently, or whether there is genuinely a lot of data in the MDB. This MDB is currently causing the VB app to crash, and I need to know why it is so big so that I can maybe think about moving some of the data into another 'archival' MDB. Migrating to SQL Server is not an option, unless the use of linked tables from an MDB is a realistic option.

    Read the article

  • PostgreSQL string search for partial patterns, removing extraneous characters

    - by tbrandao
    Looking for a simple SQL (PostgreSQL) regular-expression or similar solution (maybe soundex) that allows a flexible search, so that dashes, spaces and the like are ignored during the search and only the raw characters are matched against the table. Currently using:

        SELECT * FROM Productions WHERE part_no ~* '%search_term%'

    If a user types UTR-1 it fails to bring up UTR1 or UTR 1 stored in the database, and matches also fail when a part_no has a dash and the user omits that character (or vice versa). EXAMPLE: a search for part UTR-1 should find all of the matches below.

        UTR1
        UTR --1
        UTR 1

    Any suggestions?
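
    A hedged sketch of one normalisation approach: strip the non-alphanumeric characters from both sides of the comparison. (Note that ~* takes a POSIX regex, so the %...% wildcards above only match literal percent signs; LIKE/ILIKE is what understands %.)

        -- Match 'UTR-1' against 'UTR1', 'UTR --1', 'UTR 1', ...
        SELECT *
        FROM Productions
        WHERE regexp_replace(part_no, '[^A-Za-z0-9]', '', 'g')
              ILIKE '%' || regexp_replace('UTR-1', '[^A-Za-z0-9]', '', 'g') || '%';

        -- An expression index on the normalised value can support exact or
        -- prefix lookups (a leading % wildcard still forces a scan):
        CREATE INDEX idx_productions_part_no_raw
            ON Productions (regexp_replace(part_no, '[^A-Za-z0-9]', '', 'g'));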

    Read the article

  • gridview check duplicates not using sql

    - by Tomasusa
    I have this code:

        foreach (GridViewRow dr in gvCategories.Rows)
        {
            if (dr.Cells[0].Text == txtEnterCategory.Text.Trim())
                isError = true;
        }

    Debugging shows that dr.Cells[0].Text is always "", even though there are records. How can I loop over each row to check whether a record already exists in the GridView, without using SQL?

    Read the article

  • mysql: 2 primary keys on one table

    - by Bharanikumar
        CREATE TABLE Orders (
            ID SMALLINT UNSIGNED NOT NULL,
            ModelID SMALLINT UNSIGNED NOT NULL,
            Descrip VARCHAR(40),
            PRIMARY KEY (ID, ModelID)
        );

    Basically, may I know: shall we create two primary keys on one table, and is that correct? Because as per SQL rules, we can create any number of unique keys on one table, but only one primary key. Then how is my system allowing me to create multiple primary keys? Please advise what the general rule is.
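
    For contrast, a hedged sketch of the two things that are easy to conflate here: the DDL above declares one composite primary key over two columns, not two primary keys; a second uniqueness rule would be expressed as a separate unique key:

        -- One composite primary key spanning two columns:
        CREATE TABLE Orders_v1 (
            ID      SMALLINT UNSIGNED NOT NULL,
            ModelID SMALLINT UNSIGNED NOT NULL,
            Descrip VARCHAR(40),
            PRIMARY KEY (ID, ModelID)
        );

        -- One primary key plus an additional unique key:
        CREATE TABLE Orders_v2 (
            ID      SMALLINT UNSIGNED NOT NULL,
            ModelID SMALLINT UNSIGNED NOT NULL,
            Descrip VARCHAR(40),
            PRIMARY KEY (ID),
            UNIQUE KEY uq_orders_model (ModelID)
        );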

    Read the article

  • nHibernate: Query tree nodes where self or ancestor matches condition

    - by Famous Nerd
    I have seen a lot of competing theories about hierarchical queries in Fluent NHibernate, or even basic NHibernate, and how they're a difficult beast. Does anyone know of good resources on the subject? I find myself needing to do queries similar to this (using a file-system analogy):

        select folderObjects from folders
        where folder.Permissions includes :myPermissionLevel
           or [any of my ancestors] includes :myPermissionLevel

    This is a one-to-many tree; no node has multiple parents. I'm not sure how to describe this in NHibernate-specific terms, or even SQL terms. I've seen the phrase "nested sets" mentioned; is this applicable? I'm not sure. Can anyone offer any advice on approaches to writing this sort of NHibernate query?
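
    In plain SQL terms, walking the ancestors is a recursive common table expression (nested sets are the main alternative encoding); a hedged sketch over a hypothetical adjacency-list table folders(id, parent_id, permission), assuming one permission value per row and a database that supports recursive CTEs (spelled WITH RECURSIVE on PostgreSQL and MySQL 8):

        WITH ancestry (id, parent_id, permission, root_id) AS (
            -- Anchor: every folder starts a chain, remembering itself as root.
            SELECT id, parent_id, permission, id
            FROM folders
            UNION ALL
            -- Step: climb one level, carrying the original folder's id along.
            SELECT f.id, f.parent_id, f.permission, a.root_id
            FROM folders f
            JOIN ancestry a ON a.parent_id = f.id
        )
        -- A folder qualifies if it, or any of its ancestors, has the permission.
        SELECT DISTINCT root_id
        FROM ancestry
        WHERE permission = :myPermissionLevel;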

    Read the article

  • MySQL - Sort on a calculated value based on two dates

    - by Petter Magnusson
    I have the following problem that needs to be solved in a MySQL query. Fields:

        info    - text field
        date1   - a date field
        date2   - a date field
        offset1 - a text field with a number in the first two positions, example "10-High"
        offset2 - a text field with a number in the first two positions, example "10-High"

    I need to sort the records by a calculated "sortvalue" based on the current date (today):

        If today=date2 then sortvalue = offset1*10 + offset2*5 + 1000
        else sortvalue = offset1*10 + offset2*5

    I have a fairly good understanding of basic SQL with joins etc., but this I am not even sure is possible... If it helps, I could perhaps live with a single formula giving the same sort of effect as the IFs do, i.e. before date1 = low value, after date2 = high value... Rgds, PM
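
    A hedged sketch of computing such a sort value inline in MySQL: the leading digits come out of the text fields with LEFT() + CAST(), and the date test is written here as "on or after date2" since the comparison operator appears to have been lost from the question (mytable is a hypothetical name):

        SELECT info, date1, date2, offset1, offset2,
               (CAST(LEFT(offset1, 2) AS SIGNED) * 10 +
                CAST(LEFT(offset2, 2) AS SIGNED) * 5 +
                IF(CURDATE() >= date2, 1000, 0)) AS sortvalue
        FROM mytable
        ORDER BY sortvalue;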

    Read the article

  • Spooling data to CSV truncates

    - by Steve
    Hi, I am using the script below to output data to a csv file:

        set heading off
        set linesize 10000
        set pagesize 0
        set echo off
        set verify off
        spool D:\OVERNIGHT\TEMP_FILES\PFRA_DETAIL_VIXEN_OUTPUT.txt

        SELECT TRIM(T4.S_ORG_ID) ||','||
               TRIM(T4.NAME) ||','||
               TRIM(T3.CREATION_TIME) ||','||
               TRIM(T5.X_HOUSE_NUMBER) ||','||
               TRIM(T5.X_FLAT_NUMBER) ||','||
               TRIM(T5.ADDRESS) ||','||
               TRIM(T5.CITY) ||','||
               TRIM(T5.ZIPCODE) ||','||
               TRIM(T3.NOTES)
        FROM TABLE_CASE T1
        INNER JOIN TABLE_QUEUE T2 ON T1.CASE_CURRQ2QUEUE = T2.OBJID
        INNER JOIN TABLE_PHONE_LOG T3 ON T1.OBJID = T3.CASE_PHONE2CASE
        INNER JOIN TABLE_BUS_ORG T4 ON T1.X_CASE2X_BUS_ORG = T4.OBJID
        INNER JOIN TABLE_ADDRESS T5 ON T1.CASE2ADDRESS = T5.OBJID
        WHERE case_currq2queue IN (422);
        /
        spool off;
        exit;

    However the data is being truncated to 80 characters. The T3.NOTES field is a CLOB. Does anyone know how I can spool this out to csv? I only have access to SQL*Plus. Thanks in advance, Steve
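
    A hedged pointer on the symptom: in SQL*Plus, the amount of a CLOB that gets fetched is capped by the LONG setting, whose default is 80 bytes, which matches the truncation exactly; raising it (along with LONGCHUNKSIZE) before spooling may be all that is needed:

        set long 100000          -- max bytes of CLOB/LONG data SQL*Plus fetches
        set longchunksize 100000 -- fetch it in one chunk
        set trimspool on         -- trim trailing blanks from spooled lines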

    Read the article

  • SQLBulkCopy used in conjunction with Transaction and firing an event each time a batch is copied

    - by Hans Rudel
    I'm currently uploading data to MS SQL Server via SqlBulkCopy and transactions. I would like to be able to raise an event after each batch has been uploaded (I have already tried the SqlRowsCopied event and it doesn't work; see the quote below). MSDN quote: "No action, such as transaction activity, is supported in the connection during the execution of the bulk copy operation, and it is recommended that you not use the same connection used during the SqlRowsCopied event. However, you can open a different connection." http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.sqlrowscopied(v=vs.80).aspx So I basically can't have my cake and eat it :( Does anyone know a way around this, as I would like to fire an event after each batch has been uploaded. Thanks for your help.

    Read the article

  • free public databases with non-trivial table structures?

    - by Caffeine Coma
    I'm looking for some sample database data that I can use for testing and demonstrating a DB tool I am working on. I need a DB that has (preferably) many tables, and many foreign key relationships between the tables. Ideally the data would be in SQL dump format, or at least in something that maintains the foreign key references and could be easily imported into an RDBMS (MySQL or H2). The dataset itself doesn't have to be huge (in fact, it's best if it's not). I thought about using the Stackoverflow Data Dump, but it's only about 5 tables.

    Read the article

  • Use Google AppEngine datastore outside of AppEngine project

    - by Holtwick
    For my little framework Pyxer I would like to be able to use the Google AppEngine datastore outside of AppEngine projects too, because I'm now used to this ORM pattern, and for quick little hacks it is nice. I can not use Google AppEngine for all of my projects because of its limitations on file size and number of files. A great alternative would also be a project that provides an ORM with the same naming as the AppEngine datastore. I also like the GQL approach very much, since it is a nice combination of ORM and SQL patterns. Any ideas where or how I might find such a solution? Thanks.

    Read the article

  • Fastest way to calculate summary of database field

    - by Jo-wen
    I have an MS SQL table with the following structure:

        Name  nvarchar
        Sign  nvarchar
        Value int

    Example contents:

        Test1, 'plus', 5
        Test1, 'minus', 3
        Test2, 'minus', 1

    I would like to have totals per "Name" (add when sign = plus, subtract when sign = minus). Result:

        Test1, 2
        Test2, -1

    I want to show these results (and update them when a new record is added), and I'm looking for the fastest solution! [sproc? fast-forward cursor? calculate in .NET?]
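
    A hedged sketch of doing the whole summary set-based, which is usually the fastest option (the table is called totals here for illustration); it can be wrapped in a view so the results stay current as records are added:

        SELECT Name,
               SUM(CASE WHEN Sign = 'plus'  THEN Value
                        WHEN Sign = 'minus' THEN -Value
                        ELSE 0 END) AS Total
        FROM totals
        GROUP BY Name;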

    Read the article
