Search Results

Search found 28052 results on 1123 pages for 't sql tuesday'.

Page 547/1123 | < Previous Page | 543 544 545 546 547 548 549 550 551 552 553 554  | Next Page >

  • A simple group-by (no count) in Django

    - by Daniel Quinn
    If this were raw-SQL, it'd be a no-brainer, but in Django, this is proving to be quite difficult to find. What I want is this really: SELECT user_id FROM django_comments WHERE content_type_id = ? AND object_pk = ? GROUP BY user_id It's those last two lines that're the problem. I'd like to do this the "Django-way" but the only thing I've found is mention of aggregates and annotations, which I don't think solve this issue... do they? If someone could explain this to me, I'd really appreciate it.

    Read the article

  • Multi-join query returns too many results and improperly matched rows

    - by Woot4Moo
    I have the following minimal schema in Oracle: http://sqlfiddle.com/#!4/c1ed0/14 The queries I have run yield too many results, and this query: select cat.*, status.*, source.* from cats cat, status status, source source Left OUTER JOIN source source2 on source2.sourceid = 1 Right OUTER JOIN status status2 on status2.isStray = 0 order by cat.name yields incorrect results. What I am expecting is a table that looks like the following, however I cannot seem to come up with the correct SQL:

        NAME     AGE  LENGTH  STATUSID  CATSOURCE  ISSTRAY  SOURCEID  CATID
        Adam     1    25      null      null       null     1         2
        Bill     5    1       null      null       null     null      null
        Charles  7    5       null      null       null     null      null
        Steve    12   15      1         1          1        1         1

    In plain English, what I am looking for is to return all known cats + their associated cat source + their cat status while retaining null values. The only information I will have is the source that I am curious about. I also only want the cats that have a status of either STRAY or UNKNOWN (null).
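
    A hedged sketch of one way to express this: join each related table with an explicit LEFT OUTER JOIN and its own ON condition, so unmatched cats keep NULLs instead of multiplying into a cross product. The join columns (source.catid, status.catid) are assumptions read off the expected output, not the actual sqlfiddle schema.

    ```sql
    -- Sketch only: the join columns source.catid / status.catid are guesses.
    SELECT c.name, c.age, c.length,
           st.statusid, so.catsource, st.isstray, so.sourceid, c.catid
    FROM cats c
    LEFT OUTER JOIN source so
           ON so.catid = c.catid
          AND so.sourceid = 1                     -- the one source we care about
    LEFT OUTER JOIN status st
           ON st.catid = c.catid
    WHERE st.isstray = 1 OR st.isstray IS NULL    -- STRAY or UNKNOWN
    ORDER BY c.name;
    ```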

    Read the article

  • Column.DbType affecting runtime behavior

    - by leppie
    Hi According to the MSDN docs, the DbType property/attribute of a Column type/element is only used for database creation. Yet, today, when trying to submit data to an IMAGE column on a SQLCE database (not sure if this is CE-only), I got an exception of 'Data truncated to 8000 bytes'. This was due to the DbType still being defined as VARBINARY(MAX), which SQLCE does not support. Changing the DbType to IMAGE fixes the issue. So what other surprises do Linq2SQL attributes hold in store? Is this a bug or intended? Should I report it to MS? UPDATE After getting the answer from Guffa, I tested it, but it seems that for NVARCHAR(10), adding an 11-character string causes a SQL exception, not a Linq2SQL one: The data was truncated while converting from one data type to another. [ Name of function(if known) = ] A first chance exception of type 'System.Data.SqlServerCe.SqlCeException' occurred in System.Data.SqlServerCe.dll

    Read the article

  • Pros and Cons of Access Data Project (MS Access front end with SQL Server Backend)

    - by webworm
    I have been tasked with moving an existing MS Access application (mdb) over to an Access Data Project (adp). Basically the Access forms will remain the same but the data will be migrated over to SQL Server. I am not too familiar with Access Data Projects so I was hoping I could get some opinions on the pros and cons of using them. My first thought was to convert this to a web application or even a Winform application, however I really wanted to perform due diligence in looking at Access Data Projects before making a decision. Thanks for any assistance.

    Read the article

  • How to properly reserve identity values for usage in a database?

    - by esac
    We have some code in which we need to maintain our own identity (PK) column in SQL. We have a table into which we bulk insert data, but we add data to related tables before the bulk insert is done, thus we can not use an IDENTITY column and find out the value up front. The current code selects the MAX value of the field and increments it by 1. Although there is a highly unlikely chance that two instances of our application will be running at the same time, it is still not thread-safe (not to mention that it goes to the database every time). I am using the ADO.NET entity model. How would I go about 'reserving' a range of ids to use, grab a new block when that range runs out, and guarantee that the same range will not be handed out twice?
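
    A minimal sketch (table and column names hypothetical) of one common approach: keep a key-reservation row per table and hand out blocks with a single atomic UPDATE, so concurrent callers can never receive overlapping ranges.

    ```sql
    -- Hypothetical reservation table: KeyReservation(TableName, NextValue).
    DECLARE @BlockSize int;
    SET @BlockSize = 1000;

    DECLARE @Reserved TABLE (RangeStart bigint);

    -- One atomic statement: advance the counter and capture its old value.
    UPDATE dbo.KeyReservation
    SET NextValue = NextValue + @BlockSize
    OUTPUT deleted.NextValue INTO @Reserved   -- value before the update = start of our block
    WHERE TableName = 'MyBulkTable';

    -- The caller may now safely use RangeStart .. RangeStart + @BlockSize - 1.
    SELECT RangeStart, RangeStart + @BlockSize - 1 AS RangeEnd
    FROM @Reserved;
    ```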

    Read the article

  • Problem with parsing SQL into table variable

    - by Stanley Ross
    I'm using the following code to read a SQL XML variable into a table variable, and I am getting the following error: "Incorrect syntax near '.'." Can't quite figure it out. DECLARE @LOBS Table ( LineGUID varchar(40) ) DECLARE @lg xml SET @lg = '<?xml version="1.0" encoding="utf-16" standalone="yes"?> <Table> <LOB> <LineGuid>d6e3adad-8c53-4768-91a3-745c0dae0e08</LineGuid> </LOB> <LOB> <LineGuid>4406db8f-0d19-47da-953b-afc1db38b124</LineGuid> </LOB> </Table>' INSERT INTO @LOBS(LineGUID) SELECT ParamValues.ID.value('.','VARCHAR(40)') FROM @lg.nodes('/Table/LOB/LineGuid') AS ParamValues(ID)
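
    For what it's worth, one common cause of "Incorrect syntax near '.'" on an @variable.nodes() call is a database still running at compatibility level 80, where the xml data type's methods cannot be parsed. A hedged check (database name hypothetical):

    ```sql
    -- Check the current compatibility level (SQL Server 2005+).
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = DB_NAME();

    -- If it reports 80, raising it to 90 lets .nodes()/.value() parse.
    EXEC sp_dbcmptlevel 'MyDatabase', 90;
    ```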

    Read the article

  • Design ideas for a versioned db schema with related tables also versioned

    - by vfilby
    Here is the drill: I want to version a database. I have done this before using multiple rows, where the table primary key becomes a combination of the row id and either a datestamp or a version #. Now I want to version a table that depends on many other small tables. Versioning each table will be a giant PITA, so I am looking for good options to version a schema where the data to be versioned spreads over multiple tables. All related tables are properly keyed with foreign key relationships. The database is currently on SQL Server 2005.
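
    As a point of comparison only, a minimal sketch (hypothetical names) of one common pattern: related tables keep referencing a stable id in the base table, and every change is copied into a history table keyed by (id, valid_from), so only the tables whose data actually changes need a history twin.

    ```sql
    -- Current state lives here; foreign keys from other tables point at WidgetId.
    CREATE TABLE dbo.Widget (
        WidgetId  int IDENTITY(1,1) PRIMARY KEY,
        Name      nvarchar(100) NOT NULL
    );

    -- Every version (including the current one) is recorded here.
    CREATE TABLE dbo.WidgetHistory (
        WidgetId   int           NOT NULL REFERENCES dbo.Widget (WidgetId),
        ValidFrom  datetime      NOT NULL,
        ValidTo    datetime      NULL,     -- NULL marks the current version
        Name       nvarchar(100) NOT NULL,
        PRIMARY KEY (WidgetId, ValidFrom)
    );
    ```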

    Read the article

  • What is the 'moreq' Filter Type an Alias for?

    - by Alan Storm
    I'm looking into Magento's filtering options (Ecommerce System and PHP Framework with an expansive ORM system), specifically the addFieldToFilter method. In this method, you specify a SQL-ish filter by passing in a single-element array, with the key indicating the type of filter. For example, array('eq'=>'bar') //eq means equal array('neq'=>'bar') //neq means not equal would each give you a where clause that looks like where field = 'bar'; where field != 'bar'; So, deep in the bowels of the source, I found a comparison type named 'moreq' that maps to a >= comparison operator array('moreq'=>'27') where field >= 27 The weird thing is, there's already a 'gteq' comparison type array('gteq'=>'27') where field >= 27 So, my question is, what does moreq stand for? Is it some special SQL concept that's supported in other databases that the Magento guys want to map to MySQL, or is it just "more required" and an example of what happens when you're doing rapid agile and trying to maintain backwards compatibility?

    Read the article

  • Would this prevent the row from being read during the transaction?

    - by acidzombie24
    I remember an example where reading in a transaction and then writing the data back is not safe, because another transaction may read/write it in the time between. So I would like to check the date and prevent the row from being modified or read until my transaction is finished. Would this do the trick, and are there any SQL variants that this will not work on? update tbl set id=id where date>expire_date and id=@id Note: date>expire_date just happens to be my condition. It could be anything. Would this prevent other transactions from reading the row until I commit or rollback?
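
    A hedged, SQL Server-specific sketch of another way to express the same intent: take the locks explicitly when reading, so other transactions can neither modify the row nor acquire conflicting locks until this transaction ends. Locking hints are vendor-specific, so this is not portable SQL.

    ```sql
    DECLARE @id int;
    SET @id = 1;   -- hypothetical value

    BEGIN TRANSACTION;

    -- UPDLOCK: take an update lock instead of a shared lock.
    -- HOLDLOCK: keep it (serializable range) until COMMIT/ROLLBACK.
    SELECT *
    FROM tbl WITH (UPDLOCK, HOLDLOCK)
    WHERE id = @id
      AND date > expire_date;

    -- ... decide and write back here ...

    COMMIT TRANSACTION;
    ```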

    Read the article

  • LinqToSql: How can I create a projection to adhere to DRY?

    - by mhutter
    Just wondering if there is a way to take some of the repetition out of a LINQ to SQL projected type. Example: Table: Address Fields: AddressID, HouseNumber, Street, City, State, Zip, +20 more Class MyAddress: AddressID, HouseNumber, Street (Only 3 fields) LINQ: from a in db.Addresses select new MyAddress { AddressID = a.AddressID, HouseNumber = a.HouseNumber, Street = a.Street } The above query works perfectly, and I understand why something like this will return all 20+ fields in each row: from a in db.Addresses select new MyAddress(a); class MyAddress { public MyAddress(Address a) { this.AddressID = a.AddressID; this.HouseNumber = a.HouseNumber; this.Street = a.Street; } } Which leads me to my question: is it possible to implement some kind of helper function or extension method to "map" from the LINQ model to MyAddress yet only return the necessary fields in the query result rather than all of the fields?

    Read the article

  • Access 2007: Dynamic SQL to be run when opening a report

    - by blockcipher
    I'm trying to have some SQL execute when I open a report. This works fine when I try to match an integer column with an integer, but when I try to match on a "text" column, it keeps popping up a dialog asking for what you want to filter on. Here's a sample query: select person_phone_numbers.person_id from person_phone_numbers where phone_number = '444-444-4444' This is actually a sub-query I'm trying to use, but this is where the problem is. If I change it to this it works fine: select person_phone_numbers.person_id from person_phone_numbers where phone_id = 2 I put this in the OnOpen event and I'm assigning it to Me.RecordSource, if that makes a difference. My goal here is to have a form accept query parameter(s) and open a report with the results. Any thoughts on why it wants to ask for a parameter vs. just running the query the way I have it?

    Read the article

  • entity framework and dirty reads

    - by bryanjonker
    I have Entity Framework (.NET 4.0) going against SQL Server 2008. The database is (theoretically) getting updated during business hours -- delete, then insert, all through a transaction. Practically, it's not going to happen that often. But, I need to make sure I can always read data in the database. The application I'm writing will never do any types of writes to the data -- read-only. If I do a dirty read, I can always access the data; the worst that happens is I get old data (which is acceptable). However, can I tell Entity Framework to always use dirty reads? Are there performance or data integrity issues I need to worry about if I set up EF this way? Or should I take a step back and see about rewriting the process that's doing the delete/insert process?
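
    A hedged database-side alternative (assuming SQL Server 2005 or later and a hypothetical database name): turn on read committed snapshot isolation so readers see the last committed row version instead of blocking, without resorting to dirty reads on the EF side.

    ```sql
    -- Readers stop blocking on writers; they read the last committed version.
    -- The switch needs a moment with no other active connections to the database.
    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;
    ```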

    Read the article

  • SQL Compact Edition database corruption

    - by jdv
    Hi, our product is using MS SQL Compact Edition on a Windows machine (laptop). It's basically a metadata index for files we have on the filesystem. Recently we have seen databases getting corrupted. This happens when the machine is very busy moving files around and has to do a tiny bit of database changes at the same time. I was somewhat shocked that this was at all possible; it was my expectation that the database would stay coherent whatever the circumstances. Of course we are doing something wrong. Things we have checked so far: use of only one db connection per thread; specifying the maximum size when opening the database; the database is accessed only by one application, a .NET-based Windows service. Are there other gotchas?

    Read the article

  • SHA1 Password returns as cleartext after DB query

    - by Code Sherpa
    Hi. I have a SHA1 password and PasswordSalt in my aspnet_Membership table, but when I run a query from the server (a SQL query), the reader reveals that the password has come back as its cleartext equivalent. I am wondering if my web.config configuration is causing this? <membership defaultProvider="CustomMembershipProvider" userIsOnlineTimeWindow="20" hashAlgorithmType="SHA1"> <providers> <clear/> <add name="CustomMembershipProvider" type="Custom.Utility.CustomMembershipProvider" connectionStringName="MembershipDB" enablePasswordRetrieval="false" enablePasswordReset="true" requiresUniqueEmail="false" requiresQuestionAndAnswer="false" passwordStrengthRegularExpression="" minRequiredPasswordLength="1" minRequiredNonalphanumericCharacters="0" passwordFormat="Hashed" thanks in advance...

    Read the article

  • discovering files in the FileSystem, through SSIS

    - by cometbill
    I have a folder where files are going to be dropped for importing into my data warehouse: \\server\share\loading_area I have the following (inherited) code that uses xp_cmdshell (shivers) to call out to the command shell, run the DIR command, and insert the resulting filenames into a table in SQL Server. I would like to 'go native' and reproduce this functionality in SSIS. Thanks in advance, guys and girls. Here's the code: USE MyDatabase GO declare @CMD varchar(500) declare @EXTRACT_PATH varchar(255) set @EXTRACT_PATH = '\\server\share\folder\' create table tmp_FILELIST([FILENUM] int identity(1,1), [FNAME] varchar(100), [FILE_STATUS] varchar(20) NULL CONSTRAINT [DF_FILELIST_FILE_STATUS] DEFAULT ('PENDING')) set @CMD = 'dir ' + @EXTRACT_PATH + '*.* /b /on' insert tmp_FILELIST([FNAME]) exec master..xp_cmdshell @CMD --remove the DOS reply when the folder is empty delete tmp_FILELIST where [FNAME] is null or [FNAME] = 'File Not Found' --Remove my administrative and default/common files not for importing, such as readme.txt delete tmp_FILELIST where [FNAME] is null or [FNAME] = 'readme.txt'

    Read the article

  • SQLCe local db in temp- path in connectionstring?

    - by Petr
    Hi, I have a SQL CE db in my app, which is included in my app directory. While debugging it's OK, but when published and run with setup.exe, it reports "file not found" in the temporary directory the app is run from. I would like it to run from a standard location, but I don't know how to change it. I am using this string: SqlCeConnection connection = new SqlCeConnection("Data Source=database.sdf;Persist Security Info=False;"); When I run setup.exe, the app never starts, stating that the db file was not found in its temporary directory. When I run app.exe, it works. I do not understand it... :( EDIT: I can see that in the VS project settings there is a connection string, "Data Source=|DataDirectory|\Database.sdf". Should the path be something like the local directory? Thanks!

    Read the article

  • Help with concept - filters and number of items

    - by dreamer
    Please check http://www.alibaba.com/catalogs/cid/702/Laptops.html - they have a nice filter there with the number of items for each option. Note one detail - they have locations there. Same thing on olx.com - location and number of items for each category. Now imagine I have tables: [products] (Id, Name, CategoryId, LocationId) [Categories] (Id, Name) [Location] (Id, Name) My question is how I can do the same, because counting things, even with caching, looks expensive - and they give results pretty fast... Please advise on possible ways to do that in ASP.NET, C#, MVC, MS SQL, but avoid simple answers like "count and change". Thank you in advance.
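
    For reference, a minimal sketch (using the tables listed above) of the grouped counts such a filter is built on (essentially the baseline the question wants to avoid recomputing on every request); the per-location version is the same query grouped on LocationId.

    ```sql
    -- Items per category, including categories with zero products.
    SELECT c.Id, c.Name, COUNT(p.Id) AS ProductCount
    FROM Categories AS c
    LEFT JOIN products AS p ON p.CategoryId = c.Id
    GROUP BY c.Id, c.Name
    ORDER BY c.Name;
    ```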

    Read the article

  • Using multiple databases within one application

    - by Alex
    I have a web application made for several groups of people not connected with each other. Instead of using one database for all of them, I'm thinking about making separate databases. This will improve the speed of the queries and free me from checking which group the user belongs to. But since I'm working with LINQ to SQL, my classes are explicitly connected with the databases, so I will have to make separate DataContexts for all of the databases. So how can I solve this problem? Or should I just not bother and use one database only?

    Read the article

  • How can I (both) create a row and access that row in the same 'Stored Procedure'?

    - by Richard77
    Hello, I'd like to get the value of the id column for an object just after I've created it, but I don't want to run another query for that. My beginner's book (SQL Server 2008 for Dummies) says that there are 2 tables (inserted and deleted) that hold the last row(s) that have been inserted, updated, or deleted. Unfortunately, only triggers (says the book) can access those tables. But if I use triggers, they will go off each time I insert a row, even when I don't need that functionality. Can I obtain the same effect with a Stored Procedure (without having to run a separate query)? This is what I'm trying to do: CREATE PROCEDURE myProcedure DECLARE @OrganizationName @ColumnID OUTPUT AS INSERT INTO Organization (OrganizationName) VALUES (@OrganizationName) SET @ColumnID = (// Please, I need Help here ...) Thanks for helping
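
    A hedged completion of the procedure above (parameter types are assumptions): SCOPE_IDENTITY() returns the identity value generated by the INSERT in the current scope, so no trigger and no second query are needed.

    ```sql
    CREATE PROCEDURE myProcedure
        @OrganizationName nvarchar(100),   -- type assumed for illustration
        @ColumnID         int OUTPUT
    AS
    BEGIN
        INSERT INTO Organization (OrganizationName)
        VALUES (@OrganizationName);

        -- Identity value produced by the INSERT above, in this scope only.
        SET @ColumnID = SCOPE_IDENTITY();
    END
    ```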

    Read the article

  • Is there any way to use GUIDs in django?

    - by Jason Baker
    I have a couple of tables that are joined by GUIDs in SQL Server. Now, I've found a few custom fields to add support for GUIDs in django, but I tend to shy away from using code in blog posts if at all possible. I'm not going to do anything with the GUID other than join on it and maybe assign a GUID on new entries (although this is optional). Is there any way to allow this using django's built-in types? Like can I use some kind of char field or binary field and "trick" django into joining using it? If it's any help, I'm using django-pyodbc.

    Read the article

  • MySQL Join issue

    - by mouthpiec
    Hi, I have the following tables: --table sportactivity-- sport_activity_id, home_team_fk, away_team_fk, competition_id_fk, date, time (tuple example) - 1, 33, 41, 5, 2010-04-14, 05:40:00 --table teams-- team_id, team_name (tuple example) - 1, Algeria Now I have the following SQL statement that I use to extract Team A vs Team B: SELECT sport_activity_id, T1.team_name AS TeamA, T2.team_name AS TeamB, DATE_FORMAT( DATE, '%d/%m/%Y' ) AS DATE, DATE_FORMAT( TIME, '%H:%i' ) AS TIME FROM sportactivity JOIN teams T1 ON home_team_fk = T1.team_id JOIN teams T2 ON ( away_team_fk = T2.team_id OR away_team_fk = '0' ) WHERE DATE( DATE ) >= CURDATE( ) ORDER BY DATE( DATE ) My problem is that when team B is empty I get irrelevant information... it seems that the query returns all the combinations. I need a query such that when team B is equal to 0 (this can occur in my scenario), I get only Team A - Team B (as 0) once.
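
    A hedged rewrite of the statement above: joining teams a second time with a plain LEFT JOIN on the real match only means a fixture whose away_team_fk is 0 yields a single row with TeamB as NULL (rendered here as 'TBA'), instead of one row per team in the teams table.

    ```sql
    SELECT sa.sport_activity_id,
           t1.team_name AS TeamA,
           COALESCE(t2.team_name, 'TBA') AS TeamB,   -- away team 0 -> no match -> NULL
           DATE_FORMAT(sa.date, '%d/%m/%Y') AS match_date,
           DATE_FORMAT(sa.time, '%H:%i')    AS match_time
    FROM sportactivity AS sa
    JOIN teams AS t1      ON sa.home_team_fk = t1.team_id
    LEFT JOIN teams AS t2 ON sa.away_team_fk = t2.team_id
    WHERE DATE(sa.date) >= CURDATE()
    ORDER BY DATE(sa.date);
    ```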

    Read the article

  • Join unrelated tables through a second level connected table

    - by Andy M
    Hello! I have two tables of activities on a page: Views & Comments. Views (id, timestamp, project_id, user_id, page_id); Comments (id, timestamp, project_id, user_id, page_id, comment); Pages (id, project_id, title). Now pages are related to projects: Projects (id, account_id, title). I am trying to create a summary page that combines views and comments ordered by time (so that the most recent views/comments come first), grouped by project. Also, only projects for a specific account. So the result could potentially be: Project 1: View 5 (June 20th), View 4 (June 18th), Comment 5 (June 15th), Comment 4 (June 14th), Comment 3 (June 12th); Project 3: View 3 (June 10th), View 2 (June 8th), Comment 2 (June 7th); Project 2: View 1 (June 5th), Comment 1 (June 4th). If you could help with how to do this using SQL (or even Doctrine) that would be awesome. Thank you.
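
    A minimal sketch (MySQL-flavoured, names taken from the schemas above, account id hypothetical) of the usual shape of this query: UNION the two activity tables into one stream, join through pages to projects, and filter on the account. Ordering the projects themselves by their most recent activity would need one more derived table on top of this.

    ```sql
    SELECT pr.id AS project_id, pr.title, a.kind, a.id AS activity_id, a.timestamp
    FROM (
            SELECT 'view'    AS kind, id, timestamp, page_id FROM Views
            UNION ALL
            SELECT 'comment' AS kind, id, timestamp, page_id FROM Comments
         ) AS a
    JOIN Pages    AS p  ON p.id  = a.page_id
    JOIN Projects AS pr ON pr.id = p.project_id
    WHERE pr.account_id = 42                -- hypothetical account
    ORDER BY pr.id, a.timestamp DESC;
    ```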

    Read the article

  • Table alias -- Unknown column in field list

    - by Jason
    Hi all, I have a SQL query which is executing a LEFT JOIN on 2 tables in which some of the columns are ambiguous. I can prefix the joined tables, but when I try to prefix one of the columns from the table in the FROM clause, it tells me Unknown column. I even tried giving that table an alias, like so ...From points AS p, and using "p" to prefix the columns, but that didn't work either. Can someone tell me what I'm doing wrong? Here is my query: SELECT point_title, point_url, address, city, state, zip_code, phone, `points`.`lat`, `points`.`longi`, featured, kmlno, image_url, category.title, category_id, point_id, lat, longi, reviews.star_points, reviews.review_id, count(reviews.point_id) as totals FROM (SELECT *, ( 3959 * acos( cos( radians('37.7717185') ) * cos( radians( lat ) ) * cos( radians( longi ) - radians('-122.4438929') ) + sin( radians('37.7717185') ) * sin( radians( lat ) ) ) ) AS distance FROM points HAVING distance < '25') as distResults LEFT JOIN category USING ( category_id ) LEFT JOIN reviews USING ( point_id ) WHERE (point_title LIKE '%Playgrounds%' OR category.title LIKE '%Playgrounds%') GROUP BY point_id ORDER BY totals DESC, distance LIMIT 0 , 10
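
    A minimal illustration (hypothetical mini-queries) of what MySQL is complaining about: once points is wrapped in a derived table, the original table name is no longer visible to the outer query, so its columns must be qualified with the derived table's alias (or left unqualified).

    ```sql
    -- Fails with: Unknown column 'points.lat' in 'field list' (the original name is hidden).
    SELECT points.lat
    FROM (SELECT * FROM points) AS distResults;

    -- Works: qualify with the alias of the derived table instead.
    SELECT distResults.lat
    FROM (SELECT * FROM points) AS distResults;
    ```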

    Read the article

  • Trying to modify a constraint in PostgreSQL

    - by MISMajorDeveloperAnyways
    Postgres is getting quite annoying lately. I have checked the documentation provided by Oracle and found a way to do this without dropping the table. The problem is, it errors out at MODIFY, as Postgres does not recognize the keyword. I'm using EMS SQL Manager for PostgreSQL. Alter table public.public_insurer_credit MODIFY CONSTRAINT public_insurer_credit_fk1 deferrable, initially deferred; I was able to work around it by dropping the constraint and re-adding it: ALTER TABLE "public"."public_insurer_credit" DROP CONSTRAINT "public_insurer_credit_fk1" RESTRICT; ALTER TABLE "public"."public_insurer_credit" ADD CONSTRAINT "public_insurer_credit_fk1" FOREIGN KEY ("branch_id", "order_id", "public_insurer_id") REFERENCES "public"."order_public_insurer"("branch_id", "order_id", "public_insurer_id") ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED;

    Read the article

  • mysqli prepare statement error?

    - by user310850
    Hi all, $mysqli = new mysqli("localhost", "root", "", "test"); $mysqli->query('PREPARE mid FROM "SELECT name FROM test_user WHERE id = ?"'); //$mysqli->query('PREPARE mid FROM "SELECT name FROM test_user" '); $res = $mysqli->query( 'EXECUTE mid 1;') or die(mysqli_error($mysqli)); while($resu = $res->fetch_object()) { echo '<br>' .$resu->name; } Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '1' at line 1 My PHP version is 5.3.0 and MySQL is mysqlnd 5.0.5-dev - 081106 - $Revision: 1.3.2.27 $
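
    A hedged sketch of the failing statement in plain SQL: MySQL's server-side EXECUTE passes parameters only through USING with user variables, so "EXECUTE mid 1" is the syntax error near '1' that the server reports.

    ```sql
    PREPARE mid FROM 'SELECT name FROM test_user WHERE id = ?';
    SET @id = 1;               -- parameters must be user variables
    EXECUTE mid USING @id;     -- not: EXECUTE mid 1
    DEALLOCATE PREPARE mid;
    ```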

    Read the article
