Search Results

Search found 62701 results on 2509 pages for 'sql function'.


  • jdbc query - date ranges as parameters

    - by pstanton
    Hi all, I'd like to write a single JDBC statement that can handle the equivalent of any number of NOT BETWEEN date1 AND date2 WHERE clauses. By single query, I mean that the same SQL string will be used to create the JDBC statements and then have different parameters supplied. This is so that the underlying frameworks can efficiently cache the query (I've been stung by that before). Essentially, I'd like to find a query that is the equivalent of

        SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ?

    and at the same time could be used with fewer parameters:

        SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ?

    or more parameters:

        SELECT * FROM table WHERE mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ? AND mydate NOT BETWEEN ? AND ?

    I will consider using a temporary table if that will be simpler and more efficient. Thanks for the help!
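
    A minimal sketch of the temporary-table route mentioned above, assuming a scratch table named excluded_range (table and column names are illustrative, and temporary-table syntax varies by database): load one row per excluded range, then anti-join, so the statement text never changes regardless of how many ranges are supplied.

        -- Load one (?, ?) pair per excluded range, however many there are.
        CREATE TEMPORARY TABLE excluded_range (range_start DATE, range_end DATE);
        INSERT INTO excluded_range (range_start, range_end) VALUES (?, ?);

        -- The query text stays constant no matter how many ranges were loaded.
        SELECT t.*
        FROM   mytable t
        WHERE  NOT EXISTS (
                   SELECT 1
                   FROM   excluded_range r
                   WHERE  t.mydate BETWEEN r.range_start AND r.range_end);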

    Read the article

  • Windows Mobile : How to bind dropdown's selectedvalue to a column in table A and the list data to a

    - by Rob
    Hi, I am trying to learn the basics of Windows Mobile development against SQL CE and have come across a basic problem. I have two tables. One called Customers that stores customer info and has an identity column called ID as the primary key. The other table is called Orders which has a column called CustomerID (the FK constraint is present). I have added a DataSet to the project that contains both tables and have autogenerated the edit/view forms. This has produced a text control for the CustomerID column in the Order table for the new/edit form, and I deleted it and replaced it with a dropdown list. Then, using the 'Advanced' databinding options (in Properties), I set the datasource of the list to the Customers table, setting the value to the ID field and the text to the CustomerName field. I then set the SelectedValue of the list box to the CustomerID field of the Orders dataset. So far so good. When I run the app in the emulator and view the 'New' form for Orders, the Customer dropdown is indeed populated with a list of customer names and I can select one and happily create a new order successfully. This is confirmed when I see the order appear in the Orders Grid form. However, when I then click on the order in the grid and then select 'Edit', the order loads but the dropdown always shows the first customer in the list and doesn't seem to bind the SelectedValue to the Orders dataset CustomerID field. Now I am an ASP.NET guy and normally hand craft the DAL and its binding to the UI, so I'm not entirely sure where to look to investigate what is going wrong here as this is all generated code. I am sure it is something very trivial but any pointers would be appreciated. My gut feeling is that the SelectedValue and the Customers.CustomerID values do not match for some reason? Many thanks, Rob.

    Read the article

  • Is there any way to carry a value in PHP forward to a second page?

    - by Henry Aspden
    I have created a PHP site, and previously it was listing only products with defined values. I have now changed it to include an array of products, for example all products WHERE id = "spotlights", and this works great, so it means I can add new products just to the database, but I still have to add the second page manually, e.g. going from the product div on the main page through to www.example.com/spotlight_1.php. Is there any way in PHP to carry the data from my index.php, e.g. the ID, through to the next page, so that I can have a template product.php page and use a database pull to echo the product information required? So on index.php I click on the product with ID="1" and on the product.php page it loads the relevant data for product 1. I can write the PHP SQL/MySQL calls myself; it's just the way to carry across a value from the previous page which I don't understand. Regards, Henry. P.S. All the IDs and things are stored in the database already as 1- to 3-digit values, e.g. 3, 93 or 254. Any advice, as always, is greatly appreciated. Regards, Henry

    Read the article

  • Insert a datetime value with GetDate() function to a SQL server (2005) table?

    - by David.Chu.ca
    I am working (or fixing bugs) on an application which was developed in VS 2005 C#. The application saves data to a SQL Server 2005 database. One of the insert SQL statements tries to insert a timestamp value into a field using the GetDate() T-SQL function as the date/time value:

        INSERT INTO table1 (field1, ... fieldDt) VALUES ('value1', ... GetDate());

    The reason for using the GetDate() function is that the SQL Server may be at a remote site, and the date/time may be in a different time zone. Therefore, GetDate() will always get the date from the server. The function can be verified in SQL Server Management Studio; this is what I get:

        SELECT GetDate(), LEN(GetDate());  -- 2010-06-10 14:04:48.293    19

    One thing I realize is that the length does not cover the milliseconds, i.e., 19 is actually for '2010-06-10 14:04:48'. Anyway, the issue I have right now is that after the insert, fieldDt actually has a date/time value only up to minutes, for example '2010-06-10 14:04:00'. I am not sure why. I don't have permission to update or change the table with a trigger to update the field. My question is: how can I use an INSERT T-SQL statement to add a new row with a date/time value (the SQL Server's local date/time) with precision up to milliseconds?
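
    A hedged guess at the cause, with a minimal sketch: if fieldDt is declared as smalldatetime rather than datetime, SQL Server rounds the stored value to the nearest minute no matter what GetDate() supplies, while datetime keeps roughly 3 ms precision. The table variable below only illustrates the difference (the values in the comment are illustrative).

        DECLARE @t TABLE (d_small smalldatetime, d_full datetime);
        INSERT INTO @t (d_small, d_full) VALUES (GETDATE(), GETDATE());
        SELECT d_small, d_full FROM @t;
        -- d_small: 2010-06-10 14:05:00      d_full: 2010-06-10 14:04:48.293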

    Read the article

  • Best practices for querying an entire row in a database table? (MySQL / CodeIgniter)

    - by Walker
    Sorry for the novice question! I have a table called cities in which I have fields called id, name, xpos, ypos. I'm trying to use the data from each row to set a div's position and name. What I'm wondering is: what's the best practice for dynamically querying an unknown number of rows (I don't know how many cities there might be; I want to pull the information from all of them) and then passing the variables from the model into the view and setting attributes with them? Right now I've 'hacked' a solution where I run a different function each time, each of which pulls one value using a query ('SELECT id FROM cities;'), then I store that in a global array variable and pass it into the view. I do this for each var, so I have arrays called city_idVar, city_nameVar, city_xposVar, city_yposVar, and then I know that city_nameVar[0] matches up with city_xposVar[0] etc. Is there a better way?

    Read the article

  • MsSql Server high Resource Waits and Head Blocker

    - by MartinHN
    Hi, I have an MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB. Running on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM. When I open the Activity Monitor, I see that I have a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. I see every day that when I try to access the website, it can take 20-30 secs before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if it's not asleep anymore and every subsequent request is served instantly, with the speed of light. The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK), but I still experience the problem and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32764 rows (448576 KB). Some tables exceed 300000 rows, though they're much smaller in size than the Images table. I have only the default clustered index on each primary key column. Any ideas?
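
    Not an answer to the head blocker itself, but a starting point for narrowing down where that 5000 ms/sec of "Other" waits is going: the standard wait-stats DMV that ships with SQL Server 2008 (nothing here is specific to this particular server).

        SELECT TOP 10
               wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM   sys.dm_os_wait_stats
        ORDER  BY wait_time_ms DESC;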

    Read the article

  • Query column and everything subordinate (hard to describe, non native speaker, PLS let me explain)

    - by MAD9
    A few weeks ago, I asked a question about how to generate hierarchical XML from a table that has a parentID column. It all works fine. The point is that, according to the hierarchy, I also want to query a table. I'll give you an example. This is the table with the codes:

        ID  CODE   NAME               PARENTID
        1   ROOT   IndustryCode       NULL
        2   IND    Industry           1
        3   CON    Consulting         1
        4   FIN    Finance            1
        5   PHARM  Pharmaceuticals    2
        6   AUTO   Automotive         2
        7   STRAT  Strategy           3
        8   IMPL   Implementation     3
        9   CFIN   Corporate Finance  4
        10  CMRKT  Capital Markets    9

    From which I generate (for displaying in a TreeViewControl) this XML:

        <record key="1" parentkey="" Code="ROOT" Name="IndustryCode">
          <record key="2" parentkey="1" Code="IND" Name="Industry">
            <record key="5" parentkey="2" Code="PHARM" Name="Pharmaceuticals" />
            <record key="6" parentkey="2" Code="AUTO" Name="Automotive" />
          </record>
          <record key="3" parentkey="1" Code="CON" Name="Consulting">
            <record key="7" parentkey="3" Code="STRAT" Name="Strategy" />
            <record key="8" parentkey="3" Code="IMPL" Name="Implementation" />
          </record>
          <record key="4" parentkey="1" Code="FIN" Name="Finance">
            <record key="9" parentkey="4" Code="CFIN" Name="Corporate Finance">
              <record key="10" parentkey="9" Code="CMRKT" Name="Capital Markets" />
            </record>
          </record>
        </record>

    As you can see, some codes are subordinate to others, for example AUTO << IND << ROOT. What I want (and have absolutely no idea how to realise, or even where to start) is to be able to query another table (where one column is this code, of course) for a code and get all records with that specific code and all subordinate codes. For example: I query the other table for "IndustryCode = IND[ustry]" and get (of course) the records containing "IND", but also AUTO[motive] and PHARM[aceutical] (= all subordinates). It's SQL Server 2008 Express with Advanced Services.
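
    A sketch of one way to do this with a recursive common table expression, which SQL Server 2008 Express supports: expand the chosen code into the set of that code plus all of its subordinates, then filter the other table by that set. The table names Codes and OtherTable are stand-ins, since the question doesn't give them.

        WITH subordinates AS (
            SELECT ID, CODE
            FROM   Codes
            WHERE  CODE = 'IND'
            UNION ALL
            SELECT c.ID, c.CODE
            FROM   Codes c
            JOIN   subordinates s ON c.PARENTID = s.ID
        )
        SELECT o.*
        FROM   OtherTable o
        JOIN   subordinates s ON o.IndustryCode = s.CODE;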

    Read the article

  • rails: date type and GetDate

    - by cbrulak
    This is a follow-up to this question: http://stackoverflow.com/questions/2930256/unique-responses-rails-gem. I'm going to create an index based on the user id, url and a date type. I want a date type (not a datetime type) because I want the day, the 24-hour day, to be part of the index to avoid duplication of page-view counts on the same day. In other words: a view only counts once in a day per visitor. I also want the default value of that column (viewdate) to be the function GETDATE(). This is what I have in my migration:

        execute "ALTER TABLE page_views ADD COLUMN viewdate datetime DEFAULT GETDATE()"

    But the viewdate value is always empty. What am I missing? (As an aside, any other suggestions for accomplishing this goal?)
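
    One thing worth checking, sketched below: a column DEFAULT is only applied when the column is left out of the INSERT entirely, so if ActiveRecord sends an explicit NULL for viewdate the default never fires. This is illustrative SQL only, not tied to this particular schema.

        CREATE TABLE page_views_demo (id int, viewdate datetime DEFAULT GETDATE());
        INSERT INTO page_views_demo (id) VALUES (1);                  -- viewdate defaults to GETDATE()
        INSERT INTO page_views_demo (id, viewdate) VALUES (2, NULL);  -- viewdate stays NULL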

    Read the article

  • Why am I getting "Enter Parameter Value" when running my MS Access query?

    - by DanM
    In my query, I use the IIF function to assign either "Before" or "After" to a field named BeforeOrAfter using AS. When I run this query, however, the "Enter Parameter Value" dialog appears, requesting a value for BeforeOrAfter. If I remove BeforeOrAfter DESC from the ORDER BY clause, I don't get the dialog. Here is the offending query: SELECT d.Scenario, e.Event, IIF(d.LogTime < e.Time, 'Before','After') AS BeforeOrAfter, d.HeartRate FROM Data d INNER JOIN Events e ON d.Scenario = e.Scenario WHERE e.Include = Yes ORDER BY d.Scenario, e.Id, BeforeOrAfter DESC Question: Why is my AS BeforeOrAfter not being recognized by the ORDER BY clause? Why does it ask me to enter a parameter value for "BeforeOrAfter" when I run this query? Note: I tried using brackets, single quotes, double quotes, etc., but none of that made any difference.
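
    One workaround worth trying (a sketch, not a verified Access-specific rule): since Access prompts for any name it cannot resolve, repeating the IIF expression itself in the ORDER BY instead of referring to the alias sidesteps the question of alias visibility. This is the query from above with only that change.

        SELECT d.Scenario, e.Event,
               IIF(d.LogTime < e.Time, 'Before', 'After') AS BeforeOrAfter,
               d.HeartRate
        FROM Data AS d INNER JOIN Events AS e ON d.Scenario = e.Scenario
        WHERE e.Include = Yes
        ORDER BY d.Scenario, e.Id, IIF(d.LogTime < e.Time, 'Before', 'After') DESC;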

    Read the article

  • ora-00939 error in reporting services, SSRS

    - by san
    Hi, I have an SSRS report, Oracle is my backend, and I am using the following query for the dataset of my second parameter:

        select distinct X
        from v_stf_sec_user_staffing_center usc
        where usc.center_group_id in (
              select distinct center_group_id
              from V_T_STAFFING_CENTER_GROUP scg
              where INSTR(','||REPLACE(:PI_REGION_LIST,' ')||',', ','||scg.group_abbreviation||',') > 0)
        and usc.nt_user_name = :PI_NT_USER_NAME

    Here PI_REGION_LIST is a multivalued parameter of string type, and PI_NT_USER_NAME is a default string-valued parameter. This query works fine when I try to execute it manually in the Data tab, and also in the Oracle tool. But when I run the report in SSRS and select more than 3 values for the parameter PI_REGION_LIST, the report throws an error on this dataset: ORA-00939, too many arguments for function. I am not able to figure out the error here. Please help me with an idea. Thanks in advance, Suni.
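
    A hedged guess at the mechanism, with a sketch of one way around it: SSRS expands a multi-value parameter into one bind value per selected item, so REPLACE(:PI_REGION_LIST, ' ') can become REPLACE(v1, v2, v3, v4, ...), and REPLACE accepts only two or three arguments, which matches ORA-00939 appearing once more than three values are picked. Letting the expanded list feed a plain IN predicate keeps the parameter out of any function call; whether this fits depends on how the abbreviations are stored, so treat it only as a sketch.

        select distinct X
        from   v_stf_sec_user_staffing_center usc
        where  usc.center_group_id in (
                   select distinct center_group_id
                   from   V_T_STAFFING_CENTER_GROUP scg
                   where  scg.group_abbreviation in (:PI_REGION_LIST))
        and    usc.nt_user_name = :PI_NT_USER_NAME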

    Read the article

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. So I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (FKs to another "team" table) and a home score, an away score and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be (#_students_missed_class, day_of_game, home_team, away_team, home_team_wins_this_season, away_team_wins_this_season). I now want to add how each team did the previous season to my analysis. Well, I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main select statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season'
          AND ( (game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score)
             OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score))

    In this case team_table.id refers to the id of the home_team, so I now have all their wins calculated from the previous year. This method of calculation is neither time nor resource intensive. The Explain SQL shows that I have ALL in the Type field and I am not using a Key, and the query times out. I'm not sure how I can accomplish a more efficient query with a subselect. It seems preposterously inefficient to have to write 4 of these queries (for home wins, home losses, away wins, away losses). I am sure this could be more lucid. I'll absolutely add color tomorrow if anyone has questions.
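
    A sketch of computing all four counts (home/away wins and losses) in one pass with conditional aggregation instead of four correlated subselects; the table and column names come from the question, but the season bounds are placeholders.

        SELECT t.id,
               SUM(CASE WHEN g.home_team = t.id AND g.home_score > g.away_score THEN 1 ELSE 0 END) AS home_wins,
               SUM(CASE WHEN g.home_team = t.id AND g.home_score < g.away_score THEN 1 ELSE 0 END) AS home_losses,
               SUM(CASE WHEN g.away_team = t.id AND g.away_score > g.home_score THEN 1 ELSE 0 END) AS away_wins,
               SUM(CASE WHEN g.away_team = t.id AND g.away_score < g.home_score THEN 1 ELSE 0 END) AS away_losses
        FROM   team_table t
        JOIN   game_table g
               ON (g.home_team = t.id OR g.away_team = t.id)
              AND g.date BETWEEN '2009-11-01' AND '2010-03-31'
        GROUP  BY t.id;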

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that is holding real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.) and each property type share a majority of the attributes, but there are a few that are unique to that property type. My question is the shared attributes are in excess of 250 fields and this seems like too many fields to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that and it would make running queries a real pain as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 w/ Entity Framework 5, C# and data is passed to asp.net web application via WCF service. Thanks in advance.
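
    For what it's worth, a sketch of a middle ground between one 250-plus-column table and EAV: keep the shared attributes in a single listing table and move the handful of type-specific attributes into narrow per-type tables keyed by the listing id (often called class-table inheritance, which Entity Framework 5 can map as table-per-type). All names and columns below are illustrative, not taken from the actual schema.

        CREATE TABLE Listing (
            ListingId     int IDENTITY(1,1) PRIMARY KEY,
            PropertyType  varchar(20)   NOT NULL,
            Price         money         NULL,
            Address       nvarchar(200) NULL,
            Sqft          int           NULL
            -- ...the remaining shared attributes...
        );

        CREATE TABLE ListingResidential (
            ListingId int PRIMARY KEY REFERENCES Listing (ListingId),
            Bedrooms  int   NULL,
            HoaFee    money NULL
        );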

    Read the article

  • Is saving to database just to get an ID a bad hack?

    - by Narsil
    I hope the title is not too confusing. I am trying to make folders named with LINQ to SQL objects' IDs. Actually I have to create the folders before I can save the files there; I will use them to keep user-uploaded files. As you can see, I have to create the folder with the FileID before I can save the file into it, so I just save a record which will be edited or maybe deleted later:

        File newFile = new File();
        ... // add some values to fields so they don't throw rule violations
        db.AddFile(newFile);
        db.Save();
        System.IO.Directory.CreateDirectory("..Uploads/" + newFile.FileId.ToString());

    After that I will have to edit some fields and save again. Of course the user might stop the upload and I would have to delete it. I know I can write a stored procedure to get the next available FileID, but some other upload happening at the same time could get the same number, so they would write to the same directory, which is a thing I don't want. Should I go on with this, or would there be problems? Can you think of a better way?
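
    If the insert ever moves into plain T-SQL (a sketch, not how LINQ to SQL does it internally), the new identity can be captured atomically with an OUTPUT clause, which avoids any "next available id" race between concurrent uploads; the table and column names here are made up.

        INSERT INTO Files (FileName, Status)
        OUTPUT inserted.FileId
        VALUES (@FileName, 0);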

    Read the article

  • How can I visualise a "broken" hierarchical dataset?

    I have a reasonably large datatable structured something like this:

        StaffNo  Grade  Direct  Boss2  Boss3  Boss4  Boss5  Boss6
        -------  -----  ------  -----  -----  -----  -----  -----
        10001    1      10002   10002  10057  10094  10043  10099
        10002    2      10057   NULL   10057  10094  10043  10099
        10003    1      10004   10004  10057  10094  10043  10099
        10004    2      10057   NULL   10057  10094  10043  10099
        10057    3      10094   NULL   NULL   10094  10043  10099
        etc....

    i.e. a unique id, their level (grade) in the hierarchy, a record of their boss's ID and the IDs of the supervisors above. (The 2, 3, 4, etc. refers to the boss at that particular grade.) The system relies on a strict hierarchy - if you are my boss (/parent) then your boss must be my grandparent. Unfortunately this rule is not enforced within the data model, and the data ultimately comes from other systems which don't even know about the rule, let alone observe it. So you and I may share the same boss, but our boss's boss won't be the same. Note: I cannot change the data model, and I cannot fix the data at source. So (for the moment) I have to fix the data once it's in place. Once a fortnight someone will do something which breaks the model and I'll need to modify the procs slightly to resolve it. Not ideal, but I'm stuck with this for the next six months. Anyway, specific queries are easy to produce, but I find it hard to keep track of the bigger picture. The application which sits on this runs without complaint regardless, but navigating around the system is becoming extraordinarily confusing. So my question is: can anyone recommend a tool (or technique) for generating some kind of "broken tree" diagram in this sort of circumstance? I don't want something that will fix things for me, or attempt statistical analysis, but at least something that will give a visual indication of how broken it is at any one time. Note: At the moment this is in a SQL Server database but I'm open to ideas utilising C#, Perl or Python.
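
    Not a diagramming tool, but a sketch of a consistency check that could drive one: reading the columns as "BossN is the boss at grade N", the Boss column matching the direct boss's grade should repeat the Direct value, so rows where it doesn't are the broken links worth drawing first. The table name Staff is made up, and the CASE would need extending to however many grades actually exist.

        SELECT e.StaffNo, e.Grade, e.Direct, b.Grade AS BossGrade
        FROM   Staff e
        JOIN   Staff b ON b.StaffNo = e.Direct
        WHERE  e.Direct <> CASE b.Grade
                               WHEN 2 THEN e.Boss2
                               WHEN 3 THEN e.Boss3
                               WHEN 4 THEN e.Boss4
                               WHEN 5 THEN e.Boss5
                               WHEN 6 THEN e.Boss6
                           END;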

    Read the article

  • TransactionScope question - how can I keep the DTC from getting involved in this?

    - by larryq
    (I know the circumstances surrounding the DTC and promoting a transaction can be a bit mysterious to those of us not in the know, but let me show you how my company is doing things, and if you can tell me why the DTC is getting involved, and if possible, what I can do to avoid it, I'd be grateful.) I have code running on an ASP.NET webserver. We have one database, SQL 2008. Our data access code looks something like this: we have a data access layer that uses a wrapper object for SqlConnections and SqlCommands. Typical use looks like this:

        void method1()
        {
            objDataObject = new DataAccessLayer();
            objDataObject.Connection = SomeConnectionMethod();
            SqlCommand objCommand = DataAccessUtils.CreateCommand(SomeStoredProc);
            //create some SqlParameters, add them to the objCommand, etc....
            objDataObject.BeginTransaction(IsolationLevel.ReadCommitted);
            objDataObject.ExecuteNonQuery(objCommand);
            objDataObject.CommitTransaction();
            objDataObject.CloseConnection();
        }

    So indeed, a very thin wrapper around SqlClient, SqlConnection etc. I want to run several stored procs in a transaction, and the wrapper class doesn't allow me access to the SqlTransaction, so I can't pass it from one component to the next. This led me to use a TransactionScope:

        using (TransactionScope tx1 = new TransactionScope(TransactionScopeOption.RequiresNew))
        {
            method1();
            method2();
            method3();
            tx1.Complete();
        }

    When I do this, the DTC gets involved, and unfortunately our webservers don't have "allow remote clients" enabled in the MSDTC settings -- getting IT to allow that will be a fight. I'd love to avoid DTC becoming involved, but can I do it? Can I leave out the transactional calls in methods1-3() and just let the TransactionScope figure it all out?

    Read the article

  • Foreign/accented characters in sql query

    - by FromCanada
    I'm using Java and Spring's JdbcTemplate class to build an SQL query in Java that queries a Postgres database. However, I'm having trouble executing queries that contain foreign/accented characters. For example the (trimmed) code:

        JdbcTemplate select = new JdbcTemplate( postgresDatabase );
        String query = "SELECT id FROM province WHERE name = 'Ontario';";
        Integer id = select.queryForObject( query, Integer.class );

    will retrieve the province id, but if instead I did name = 'Québec' then the query fails to return any results (this value is in the database, so the problem isn't that it's missing). I believe the source of the problem is that the database I am required to use has the default client encoding set to SQL_ASCII, which according to this prevents automatic character set conversions. (The Java environment's encoding is set to 'UTF-8' while I'm told the database uses 'LATIN1' / 'ISO-8859-1'.) I was able to manually indicate the encoding when the resultSets contained values with foreign characters, as a solution to a previous problem of a similar nature. Ex:

        String provinceName = new String ( resultSet.getBytes( "name" ), "ISO-8859-1" );

    But now that the foreign characters are part of the query itself, this approach hasn't been successful. (I suppose since the query has to be saved in a String before being executed anyway, breaking it down into bytes and then changing the encoding only muddles the characters further.) Is there a way around this without having to change the properties of the database or reconstruct it? PostScript: I found this function on StackOverflow when making up a title; it didn't seem to work (I might not have used it correctly, but even if it did work, it doesn't seem like it could be the best solution):

    Read the article

  • PostgreSQL - best way to return an array of key-value pairs

    - by Matt W
    I'm trying to select a number of fields, one of which needs to be an array with each element of the array containing two values. Each array item needs to contain a name (character varying) and an ID (numeric). I know how to return an array of single values (using the ARRAY keyword) but I'm unsure of how to return an array of an object which in itself contains two values. The query is something like SELECT t.field1, t.field2, ARRAY(--with each element containing two values i.e. {'TheName', 1 }) FROM MyTable t I read that one way to do this is by selecting the values into a type and then creating an array of that type. Problem is, the rest of the function is already returning a type (which means I would then have nested types - is that OK? If so, how would you read this data back in application code - i.e. with a .Net data provider like NPGSQL?) Any help is much appreciated.
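
    A sketch along the lines the question already mentions (a named composite type, then an array of that type); the joined table and column names other than those in the question are invented, and reading the result back through a .NET provider such as Npgsql would still need the nested-type handling the asker is worried about.

        CREATE TYPE name_id_pair AS (name character varying, id numeric);

        SELECT t.field1,
               t.field2,
               ARRAY(SELECT ROW(x.name, x.id)::name_id_pair
                     FROM   other_table x
                     WHERE  x.parent_id = t.id) AS pairs
        FROM   MyTable t;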

    Read the article

  • retrieving object information with Doctrine

    - by ajsie
    I want to fetch information from the database using objects. I really like this approach because it is more OOP:

        $user = Doctrine_Core::getTable('User')->find(1);
        echo $user->Email['address'];
        echo $user->Phonenumbers[0]->phonenumber;

    rather than:

        $q = Doctrine_Query::create()
            ->from('User u')
            ->leftJoin('u.Email e')
            ->leftJoin('u.Phonenumbers p')
            ->where('u.id = ?', 1);
        $user = $q->fetchOne();
        echo $user->Email['address'];
        echo $user->Phonenumbers[0]['phonenumber'];

    The problem is that the first one uses 3 queries (3 different tables), while the second one uses only 1 (and is therefore the recommended technique). But I feel that it destroys the object-oriented design, because ORM is meant to give us an OOP approach so that we can focus on objects and not the relational database, yet now they want us to go back to using an SQL-like pattern. Isn't there a way to get information from multiple tables without using DQL? The above examples are taken from the Doctrine documentation.

    Read the article

  • How do I join three tables with SQLalchemy and keeping all of the columns in one of the tables?

    - by jimka
    So, I have three tables. The class definitions:

        engine = create_engine('sqlite://test.db', echo=False)
        SQLSession = sessionmaker(bind=engine)
        Base = declarative_base()

        class Channel(Base):
            __tablename__ = 'channel'
            id = Column(Integer, primary_key = True)
            title = Column(String)
            description = Column(String)
            link = Column(String)
            pubDate = Column(DateTime)

        class User(Base):
            __tablename__ = 'user'
            id = Column(Integer, primary_key = True)
            username = Column(String)
            password = Column(String)
            sessionId = Column(String)

        class Subscription(Base):
            __tablename__ = 'subscription'
            userId = Column(Integer, ForeignKey('user.id'), primary_key=True)
            channelId = Column(Integer, ForeignKey('channel.id'), primary_key=True)

    And the SQL commands that are executed to create them:

        CREATE TABLE subscription (
            "userId" INTEGER NOT NULL,
            "channelId" INTEGER NOT NULL,
            PRIMARY KEY ("userId", "channelId"),
            FOREIGN KEY("userId") REFERENCES user (id),
            FOREIGN KEY("channelId") REFERENCES channel (id)
        );

        CREATE TABLE user (
            id INTEGER NOT NULL,
            username VARCHAR,
            password VARCHAR,
            "sessionId" VARCHAR,
            PRIMARY KEY (id)
        );

        CREATE TABLE channel (
            id INTEGER NOT NULL,
            title VARCHAR,
            description VARCHAR,
            link VARCHAR,
            "pubDate" TIMESTAMP,
            PRIMARY KEY (id)
        );

    NOTE: I know user.username should be unique and I need to fix that, and I'm not sure why SQLAlchemy creates some column names with the double quotes. I'm trying to come up with a way to retrieve all of the channels, as well as an indication of which channels one particular user (identified by user.sessionId together with user.id) has a subscription to. For example, say we have four channels: channel1, channel2, channel3, channel4, and a user, user1, who has a subscription to channel1 and channel4. The query for user1 would return something like:

        channel.id | channel.title | subscribed
        ---------------------------------------
        1            channel1        True
        2            channel2        False
        3            channel3        False
        4            channel4        True

    This is a best-case result, but since I have absolutely no clue how to accomplish the subscribed column, I've instead been trying to get the particular user's id in the rows where the user has a subscription and, where a subscription is missing, just leave it blank. The database engine that I'm using together with SQLAlchemy at the moment is sqlite3. I've been scratching my head over this for two days now; I have no problem joining all three together by way of the subscription table, but then all of the channels where the user does not have a subscription get omitted. I hope I've managed to describe my problem sufficiently, thanks in advance.
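
    A sketch of the SQL shape being asked for, independent of how it would then be expressed in SQLAlchemy: LEFT JOIN one user's subscriptions onto all channels and derive the subscribed flag from whether the join matched (the user id 1 is a placeholder, and the flag comes back as 1/0 rather than True/False).

        SELECT c.id,
               c.title,
               CASE WHEN s."channelId" IS NOT NULL THEN 1 ELSE 0 END AS subscribed
        FROM   channel c
        LEFT JOIN subscription s
               ON s."channelId" = c.id
              AND s."userId"   = 1
        ORDER  BY c.id;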

    Read the article

  • Display a ranking grid for game : optimization of left outer join and find a player

    - by Jerome C.
    Hello, I want to build a ranking grid. I have a table with different values indexed by a key:

        Table SimpleValue : key varchar, value int, playerId int

    I have a player, which has several SimpleValues:

        Table Player : id int, nickname varchar

    Now imagine these records:

        SimpleValue:
        key   value  playerId
        for   1      1
        int   2      1
        agi   2      1
        lvl   5      1
        for   6      2
        int   3      2
        agi   1      2
        lvl   4      2

        Player:
        id  nickname
        1   Bob
        2   John

    I want to display a ranking of these players on various SimpleValues. Something like:

        nickname  for  lvl
        Bob       1    5
        John      6    4

    For the moment I generate an SQL query based on which SimpleValue keys you want to display and on which SimpleValue key you want to order players. E.g. I want to display 'lvl' and 'for' of each player and order them on the 'lvl'. The generated query is:

        SELECT p.nickname as nickname, v1.value as lvl, v2.value as for
        FROM Player p
        LEFT OUTER JOIN SimpleValue v1 ON p.id=v1.playerId and v1.key = 'lvl'
        LEFT OUTER JOIN SimpleValue v2 ON p.id=v2.playerId and v2.key = 'for'
        ORDER BY v1.value

    This query runs perfectly, BUT if I want to display 10 different values, it generates 10 LEFT OUTER JOINs. Is there a way to simplify this query? I've got a second question: is there a way to display a portion of this ranking? Imagine I have 1000 players and I want to display the TOP 10; I use the LIMIT keyword. Now I want to display the rank of the player Bob, which is 326/1000, and I want to display the 5 ranked players above and below (so from position 321 to 331). How can I achieve it? Thanks.
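
    A sketch of collapsing the repeated LEFT OUTER JOINs into a single join plus conditional aggregation: one MAX(CASE ...) per key instead of one join per key. Note that key and for are reserved-ish names in several engines, so the quoting below may need adjusting (backticks in MySQL, for instance).

        SELECT p.nickname,
               MAX(CASE WHEN v."key" = 'lvl' THEN v.value END) AS lvl,
               MAX(CASE WHEN v."key" = 'for' THEN v.value END) AS "for"
        FROM   Player p
        LEFT OUTER JOIN SimpleValue v ON v.playerId = p.id
        GROUP  BY p.id, p.nickname
        ORDER  BY lvl;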

    Read the article

  • Stored procedure woes ... inserting binary ...

    - by Wardy
    OK, so I have this stored proc in my SQL 2008 database (works in 2005 too / used to) ...

        CREATE PROCEDURE [dbo].[SetBinaryContent]
            @Ref nvarchar(50),
            @Content varbinary(MAX),
            @ObjectID uniqueidentifier
        AS
        BEGIN
            DELETE ObjectContent WHERE ObjectId = @ObjectID AND Ref = @Ref

            IF DATALENGTH(@Content) > 5
            BEGIN
                INSERT INTO ObjectContent (Ref, BinaryContent, ObjectId)
                VALUES (@Ref, @Content, @ObjectId)
            END

            UPDATE Objects SET [Status] = 1 WHERE ID = @ObjectID
        END

    Relatively simple: I take a byte array in C# and chuck it in @Content, then give it a GUID and string for the other params and off we go ... Great, it used to work ... but it doesn't any more ... so erm ... what's wrong with this stored proc? I've stepped through my C# code thinking I screwed up somehow in that, but it definitely adds the params and gives them the correct values, so what would cause the server to just stop executing this stored proc correctly? When called, this proc executes but nothing changes in the db ... no new records are added to the ObjectContent table. Weird, huh ...

    Read the article

  • Tentative date casting in tsql

    - by Tewr
    I am looking for something like TRYCAST in TSQL or an equivalent method / hack. In my case I am extracting some date data from an xml column. The following query throws "Arithmetic overflow error converting expression to data type datetime." if the piece of data found in the xml cannot be converted to datetime (in this specific case, the date is "0001-01-01" in some cases). Is there a way to detect this exception before it occurs? select [CustomerInfo].value('(//*:InceptionDate/text())[1]', 'datetime') FROM Customers An example of what I am trying to achieve in pseudocode with an imagined tsql function TRYCAST(expr, totype, defaultvalue): select TRYCAST( [CustomerInfo].value('(//*:InceptionDate/text())[1]', 'nvarchar(100)'), datetime, null) FROM Customers
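
    A sketch of one pre-TRY_CONVERT way to keep the bad values from blowing up the cast: pull the value out as text and guard the conversion with ISDATE (which, note, follows the session's language and dateformat settings, so it is not bulletproof). TRY_CONVERT/TRY_CAST themselves only arrive in SQL Server 2012.

        SELECT CASE WHEN ISDATE(d.txt) = 1 THEN CAST(d.txt AS datetime) END AS InceptionDate
        FROM   Customers
        CROSS APPLY (SELECT [CustomerInfo].value('(//*:InceptionDate/text())[1]', 'nvarchar(100)') AS txt) AS d;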

    Read the article

  • Type or namespace could not be found

    - by Jason Shultz
    I'm using LINQ to SQL to connect my database to my home page. I created my datacontext (named businessModel.dbml). In it I have two tables named Category and Business. In the home controller I reference the model and attempt to return the table to the view:

        var dataContext = new businessModelDataContext();
        var business = from b in dataContext.Businesses
                       select b;
        ViewData["WelcomeMessage"] = "Welcome to Jerome, Arizona!";
        ViewData["MottoMessage"] = "Largest Ghost Town in America!";
        return View(business);

    and in the view I have this:

        <%@ Import Namespace="WelcomeToJerome.Models" %>

    and

        <% foreach (business b in (IEnumerable)ViewData.Model) { %>
            <li><%= b.Title %></li>
        <% } %>

    Yet, in the view, business is cursed with the red underline and says that The type or namespace name 'business' could not be found (are you missing a using directive or an assembly reference?) What am I doing wrong? This has had me stumped all afternoon. Link to all the code in pastebin: http://pastebin.com/es4RnS2q

    Read the article

  • Using conditionals in Linq Programatically

    - by Mike B
    I was just reading a recent question on using conditionals in LINQ and it reminded me of an issue I have not been able to resolve. When building LINQ to SQL queries programmatically, how can this be done when the number of conditionals is not known until runtime? For instance, in the code below the first clause creates an IQueryable that, if executed, would select all the tasks (called issues) in the database; the 2nd clause will refine that to just issues assigned to one department if one has been selected in a combobox (which has its selected item bound to the departmentToShow property). How could I do this using the selectedItems collection instead?

        IQueryable<Issue> issuesQuery;

        // Will select all tasks
        issuesQuery = from i in db.Issues
                      orderby i.IssDueDate, i.IssUrgency
                      select i;

        // Filters out all other Departments if one is selected
        if (departmentToShow != "All")
        {
            issuesQuery = from i in issuesQuery
                          where i.IssDepartment == departmentToShow
                          select i;
        }

    By the way, the above code is simplified; in the actual code there are about a dozen clauses that refine the query based on the user's search and filter settings.

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
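
    A sketch of the DBMS_STATS export/import route (schema, statistics-table and file names are placeholders, and someone with production access would need to run the first part there): statistics are copied into a plain table, that table is moved to development by whatever means is allowed, and then imported.

        -- On production (run by whoever has access):
        EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'STATS_EXPORT');
        EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_EXPORT');

        -- Move the STATS_EXPORT table across (e.g. exp/imp), then on development:
        EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_EXPORT');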

    Read the article
