Search Results

Search found 31931 results on 1278 pages for 'sql statement'.


  • How do I go about linking web content in a database with a nested set model?

    - by wb
    My nested set table is as follows:

        create table depts (
            id int identity(0, 1) primary key
            , lft int
            , rgt int
            , name nvarchar(60)
            , abbrv nvarchar(20)
        );

    Test departments:

        insert into depts (lft, rgt, name, abbrv) values (1, 14, 'root', 'r');
        insert into depts (lft, rgt, name, abbrv) values (2, 3, 'department 1', 'd1');
        insert into depts (lft, rgt, name, abbrv) values (4, 5, 'department 2', 'd2');
        insert into depts (lft, rgt, name, abbrv) values (6, 13, 'department 3', 'd3');
        insert into depts (lft, rgt, name, abbrv) values (7, 8, 'sub department 3.1', 'd3.1');
        insert into depts (lft, rgt, name, abbrv) values (9, 12, 'sub department 3.2', 'd3.2');
        insert into depts (lft, rgt, name, abbrv) values (10, 11, 'sub sub department 3.2.1', 'd3.2.1');

    My web content table is as follows:

        create table content (
            id int identity(0, 1)
            , dept_id int
            , page_name nvarchar(60)
            , content ntext
        );

    Test content:

        insert into content (dept_id, page_name, content) values (3, 'index', '<h2>welcome to department 3!</h2>');
        insert into content (dept_id, page_name, content) values (4, 'index', '<h2>welcome to department 3.1!</h2>');
        insert into content (dept_id, page_name, content) values (6, 'index', '<h2>welcome to department 3.2.1!</h2>');
        insert into content (dept_id, page_name, content) values (2, 'what-doing', '<h2>what is department 2 doing?</h2>');

    I'm trying to query the correct page content (from the content table) based on the given URL. I can easily accomplish this for a root department, but querying a department nested several levels deep is proving to be a little harder. For example:

        http://localhost/departments.asp?d3/ (should return <h2>welcome to department 3!</h2>)
        http://localhost/departments.asp?d2/what-doing (should return <h2>what is department 2 doing?</h2>)

    I'm not sure if this can be done in one query or if it will need a recursive function of some sort. Also, if there is nothing after the last / then assume we want the index page. How can this be accomplished? Thank you.
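
    One possible shape of the query (a sketch only, not a tested answer; it assumes the department abbreviation and an optional page name are parsed from the URL in the ASP layer, with the literals 'd3', 'd3.2.1' and 'index' standing in for those parsed values):

        -- Look up the page for the department named in the URL; default to 'index'
        -- when nothing follows the last '/'.
        select c.content
        from depts d
        join content c on c.dept_id = d.id
        where d.abbrv = 'd3'
          and c.page_name = 'index';

        -- If the page should also be found on the nearest ancestor that defines it,
        -- the nested set columns can walk up the tree:
        select top 1 c.content
        from depts target
        join depts anc on anc.lft <= target.lft and anc.rgt >= target.rgt
        join content c on c.dept_id = anc.id
        where target.abbrv = 'd3.2.1'
          and c.page_name = 'index'
        order by anc.lft desc;   -- deepest matching department first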

    Read the article

  • Syntax Error in showing Error Description

    - by Sreejesh Kumar
    What is the correct syntax for referencing "@[System::ErrorDescription]" inside a query like an INSERT? I am unable to retrieve the actual error description in the table; the value stored in the table is the literal text "@[System::ErrorDescription]" instead. I am not getting the expected result!
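
    This looks like an SSIS Execute SQL Task, where a system variable embedded literally in the SQL text is not substituted. A minimal sketch of the usual fix, assuming an OLE DB connection and a hypothetical error_log table: use a ? placeholder and map @[System::ErrorDescription] to parameter 0 on the task's Parameter Mapping page.

        -- SQLStatement of the Execute SQL Task; the variable is supplied through
        -- Parameter Mapping rather than written into the statement text.
        INSERT INTO error_log (error_description)
        VALUES (?);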

    Read the article

  • Will MySql caching cause performance problems?

    - by Camran
    I am about to upload my website onto a VPS. It is a classifieds website where all data is stored in MySQL and Solr. I am wondering whether using MySQL's query cache will slow the server down. That is, if somebody makes a search for the first time and MySQL caches the query, will the caching make the server slower than if it cached nothing at all? I know things will improve in terms of performance once the caching is done, but I would like to know whether I should even use the cache or not. What do you think? Thanks
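
    For reference, a quick sketch of how the query cache can be inspected and how a single statement can opt out of it while you measure (assumes a MySQL version that still ships the query cache; the ads table and category_id column are made-up names):

        -- Is the query cache enabled, and how large is it?
        SHOW VARIABLES LIKE 'query_cache%';

        -- Hit/miss counters, to judge whether caching actually helps this workload
        SHOW STATUS LIKE 'Qcache%';

        -- Bypass the cache for one statement while measuring its raw cost
        SELECT SQL_NO_CACHE * FROM ads WHERE category_id = 42;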

    Read the article

  • Returning a recordcount from a subquery in a result set.

    - by KeRiCr
    I am attempting to return a row count from a subquery as part of a result set. Here is a sample that I've tried that didn't work:

        SELECT recordID
            , GroupIdentifier
            , count(*) AS total
            , (SELECT COUNT(*) FROM table WHERE intActingAsBoolean = 1) AS Approved
        FROM table
        WHERE date_format(Datevalue, '%Y%m%d') BETWEEN 'startDate' AND 'endDate'
        GROUP BY groupIdentifier

    What I'm attempting to return for 'Approved' is the number of records for the grouped value where intActingAsBoolean = 1. I have also tried modifying the WHERE clause by giving the main query a table alias and applying an AND clause to match the groupIdentifier in the subquery to the main query. None of these return the correct results. The query as written returns all records in the table where intActingAsBoolean = 1. This query is being run against a MySQL database.
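
    A sketch of the usual way to get a per-group count under a condition, using conditional aggregation instead of a correlated subquery (column names are taken from the question; the table is renamed table_name here because TABLE is a reserved word, and 'startDate'/'endDate' stand in for real values):

        SELECT GroupIdentifier
             , COUNT(*) AS total
             , SUM(CASE WHEN intActingAsBoolean = 1 THEN 1 ELSE 0 END) AS Approved
        FROM table_name
        WHERE DATE_FORMAT(Datevalue, '%Y%m%d') BETWEEN 'startDate' AND 'endDate'
        GROUP BY GroupIdentifier;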

    Read the article

  • Database Design Question: GUID + Natural Numbers

    - by Alan
    For a database I'm building, I've decided to use natural numbers as the primary key. I'm aware of the advantages that GUIDs allow, but looking at the data, the bulk of each row's data was the GUID key itself. I want to generate XML records from the database data, and one problem with natural numbers is that I don't want to expose my database keys to the outside world and allow users to guess them. I believe GUIDs solve this problem. So, I think the solution is either to generate a sparse, unique ID derived from the natural ID (hopefully it would be two-way), or just to add an extra column in the database and store a GUID (or some other multibyte ID). The derived value is nicer because there is no storage penalty, but it would be easier to reverse and guess compared to a GUID. I'm curious as to what others on SO have done, and what insights they have.
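
    A common compromise, sketched here in SQL Server syntax with hypothetical table and column names, is to keep the integer primary key internally and expose a separate non-guessable identifier in the XML:

        CREATE TABLE items (
            id          INT IDENTITY(1, 1) PRIMARY KEY,             -- internal key, never exposed
            public_id   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),  -- exposed in the XML instead
            name        NVARCHAR(100) NOT NULL
        );

        CREATE UNIQUE INDEX ix_items_public_id ON items (public_id);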

    Read the article

  • Beginner question: How to delete records permanently in the case of linked tables?

    - by Serenity
    Let's say I have these 2 tables, QuesType and Ques:

        QuesType
        QuesTypeID | QuesType     | Active
        ----------------------------------
        101        | QuesType1    | True
        102        | QuesType2    | True
        103        | XXInActiveXX | False

        Ques
        QuesID | Ques  | Answer | QuesTypeID | Active
        ----------------------------------------------
        1      | Ques1 | Ans1   | 101        | True
        2      | Ques2 | Ans2   | 102        | True
        3      | Ques3 | Ans3   | 101        | True

    In the QuesType table, QuesTypeID is the primary key. In the Ques table, QuesID is the primary key and QuesTypeID is a foreign key that references QuesTypeID in the QuesType table.

    Now I am unable to delete records from the QuesType table; I can only make a QuesType inactive by setting Active = False. I am unable to delete QuesTypes permanently because of the foreign key relationship with the Ques table. So I just set Active = false, and those QuesTypes then don't show on my grid when it is bound.

    What I want is to be able to delete any QuesType permanently. Now, it can only be deleted if it's not being used anywhere in the Ques table, right? So to delete any QuesType permanently, this is what I thought I could do: in the grid that displays QuesTypes, I have a check box for Active and a button for delete. When a user makes some QuesType inactive, the OnCheckChanged() event will run, and it will contain the code to delete all the questions in the Ques table that are using that QuesTypeID. Then on the QuesType grid, that QuesType would show as deactivated, and only then could a user delete it permanently. Am I thinking correctly?

    Currently, in my DeleteQuesType stored procedure, what I am doing is setting Active = false and setting QuesType to some string like XXInactiveXX. Is there any other way?
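
    A sketch of the two common ways to make the permanent delete work (table and column names from the question; syntax shown for SQL Server, and @QuesTypeID stands in for the id being removed):

        -- Option 1: delete the dependent questions first, then the type, in one transaction
        BEGIN TRANSACTION;
            DELETE FROM Ques     WHERE QuesTypeID = @QuesTypeID;
            DELETE FROM QuesType WHERE QuesTypeID = @QuesTypeID;
        COMMIT TRANSACTION;

        -- Option 2: let the database do it by declaring the foreign key with cascade
        -- (drop the existing foreign key first, then re-add it like this)
        ALTER TABLE Ques
            ADD CONSTRAINT FK_Ques_QuesType
            FOREIGN KEY (QuesTypeID) REFERENCES QuesType (QuesTypeID)
            ON DELETE CASCADE;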

    Read the article

  • UNION DISTINCT rows but order them by number of occurrences in MySQL

    - by Baversjo
    Hi, I have the following query:

        SELECT o.id, o.name FROM object o
        WHERE ( o.description LIKE '%Black%' OR o.name LIKE '%Black%' )
        UNION ALL
        SELECT o2.id, o2.name FROM object o2
        WHERE ( o2.description LIKE '%iPhone%' OR o2.name LIKE '%iPhone%' )

    which produces the following:

        id  name
        2   New Black iPhone
        1   New White iPhone
        2   New Black iPhone

    I would like to UNION DISTINCT, but I would also like the result ordered by the number of occurrences of each identical row (primary: id).
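
    A sketch of one way to get distinct rows ordered by how often they matched across the two searches, by grouping the UNION ALL result (table and column names are from the question):

        SELECT id, name
        FROM (
            SELECT o.id, o.name FROM object o
            WHERE o.description LIKE '%Black%' OR o.name LIKE '%Black%'
            UNION ALL
            SELECT o2.id, o2.name FROM object o2
            WHERE o2.description LIKE '%iPhone%' OR o2.name LIKE '%iPhone%'
        ) AS matches
        GROUP BY id, name
        ORDER BY COUNT(*) DESC;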

    Read the article

  • How to handle Foreign key for optional field in .NET

    - by brz dot net
    What is the best way to handle the following situation? A dropdown (for a master table) is optional in a particular form, but in the database the field is constrained by a foreign key. If the user doesn't select anything from the dropdown, it creates a problem because of the foreign key. One solution is to create a default option in the master table and use it in the case of a blank selection, but then the dropdown needs extra handling to show that option on top. Is that a good solution? Is there any other, better solution for this? Thanks
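
    The usual alternative is to make the foreign key column nullable, so a blank selection is stored as NULL rather than as a placeholder row; a sketch with made-up table names:

        CREATE TABLE orders (
            id          INT PRIMARY KEY,
            category_id INT NULL,   -- optional: NULL when nothing was picked in the dropdown
            CONSTRAINT fk_orders_category
                FOREIGN KEY (category_id) REFERENCES categories (id)
        );

        -- NULL passes the foreign key check, so no default "dummy" row is needed
        INSERT INTO orders (id, category_id) VALUES (1, NULL);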

    Read the article

  • acts-as-taggable-on: find tags with name LIKE, sort by tag_counts?

    - by James
    Hi, I'm using the Rails plugin acts-as-taggable-on and I'm trying to find the top 5 most used tags whose names match or partially match a given query. When I do

        User.skill_counts.order('count DESC').limit(5).where('name LIKE ?', params[:query])

    this returns the following error:

        ActiveRecord::StatementInvalid: SQLite3::SQLException: ambiguous column name: name:
        SELECT tags.*, COUNT(*) AS count FROM "tags"
        INNER JOIN users ON users.id = taggings.taggable_id
        LEFT OUTER JOIN taggings ON tags.id = taggings.tag_id AND taggings.context = 'skills'
        WHERE (taggings.taggable_type = 'User')
        AND (taggings.taggable_id IN (SELECT users.id FROM "users"))
        AND (name LIKE 'asd')
        GROUP BY tags.id, tags.name
        HAVING COUNT(*) > 0
        ORDER BY count DESC LIMIT 5

    But when I do User.skill_counts.first.name, it returns "alliteration". I'd appreciate any help on this matter.
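
    The error comes from the unqualified name column, which the joined users and tags tables both have; a sketch of the generated statement with the condition qualified (in the Rails call this corresponds to writing the condition against tags.name, and the wildcards are an assumption for the partial match):

        SELECT tags.*, COUNT(*) AS count FROM "tags"
        INNER JOIN users ON users.id = taggings.taggable_id
        LEFT OUTER JOIN taggings ON tags.id = taggings.tag_id AND taggings.context = 'skills'
        WHERE (taggings.taggable_type = 'User')
          AND (taggings.taggable_id IN (SELECT users.id FROM "users"))
          AND (tags.name LIKE '%asd%')   -- qualified column removes the ambiguity
        GROUP BY tags.id, tags.name
        HAVING COUNT(*) > 0
        ORDER BY count DESC LIMIT 5;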

    Read the article

  • How can I kill MySQL queries every 60 seconds in Windows?

    - by Ethan Allen
    I want to check my MySQL server every minute and kill queries that have run longer than 150 seconds. The main reason I want to do this is that I don't want queries from certain people to lock up the DB for everyone else. I know this is not the ultimate solution to the problem, but at least it's a fallback in case something goes wrong with a query. I don't have a slave DB (this is just an at-home project). I'd like to schedule a script that does this for me. I'm unfamiliar with Perl and Ruby, and I need it done on my Windows 2008 Server box. I've looked into creating a simple cmd-line script, but that doesn't seem to be possible. I know I can currently do something like this, but I have to do it manually:

        mysqladmin processlist
        mysqladmin kill

    Anyone have any ideas or examples on how I could do this?
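
    A sketch of the SQL side of such a script (assuming MySQL 5.1 or later, where the process list is exposed in INFORMATION_SCHEMA); a Windows scheduled task could run the first statement through the mysql client and issue a KILL for each returned id:

        -- Find connections whose current statement has been running longer than 150 seconds
        SELECT id, user, time, info
        FROM information_schema.processlist
        WHERE command = 'Query'
          AND time > 150;

        -- Then, for each offending id returned above:
        KILL 12345;   -- 12345 is a placeholder for an actual connection id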

    Read the article

  • Having different database sorting order (default_scope) for two different views

    - by Juniper747
    In my model (pins.rb), I have two sorting orders:

        default_scope order: 'pins.featured DESC'   # for adding featured posts to the top of a list
        default_scope order: 'pins.created_at DESC' # for adding the remaining posts beneath the featured posts

    This sorting order (above) is how I want my 'pins view' (index.html.erb) to look, which is just a list of ALL user posts. In my 'users view' (show.html.erb) I am using the same model (pins.rb) to list only current_user pins. HOWEVER, I want the sorting order to ignore the "featured" default scope and only use the second scope:

        default_scope order: 'pins.created_at DESC'

    How can I accomplish this? I tried doing something like this:

        default_scope order: 'pins.featured DESC', only: :index
        default_scope order: 'pins.created_at DESC'

    But that didn't fly...

    UPDATE

    I updated my model to define a scope:

        scope :featy, order: 'pins.featured DESC'
        default_scope order: 'pins.created_at DESC'

    and updated my pins view to:

        <%= render @pins.featy %>

    However, now when I open my pins view, I get the error:

        undefined method `featy' for #<Array:0x00000100ddbc78>

    UPDATE 2

    User.rb:

        class User < ActiveRecord::Base
          attr_accessible :name, :email, :username, :password, :password_confirmation, :avatar, :password_reset_token, :password_reset_sent_at
          has_secure_password
          has_many :pins, dependent: :destroy # destroys user posts when user is destroyed
          # has_many :featured_pins, order: 'featured DESC', class_name: "Pin", source: :pin
          has_attached_file :avatar, :styles => { :medium => "300x300#", :thumb => "120x120#" }
          before_save { |user| user.email = user.email.downcase }
          before_save { |user| user.username = user.username.downcase }
          before_save :create_remember_token
          before_save :capitalize_name
          validates :name, presence: true, length: { maximum: 50 }
          VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i
          VALID_USERNAME_REGEX = /^[A-Za-z0-9]+(?:[_][A-Za-z0-9]+)*$/
          validates :email, presence: true, format: { with: VALID_EMAIL_REGEX }, uniqueness: { case_sensitive: false }
          validates :username, presence: true, format: { with: VALID_USERNAME_REGEX }, uniqueness: { case_sensitive: false }
          validates :password, length: { minimum: 6 }, on: :create # on create, because it was causing errors on pw_reset

    Pin.rb:

        class Pin < ActiveRecord::Base
          attr_accessible :content, :title, :privacy, :date, :dark, :bright, :fragmented, :hashtag, :emotion, :user_id, :imagesource, :imageowner, :featured
          belongs_to :user
          before_save :capitalize_title
          before_validation :generate_slug
          validates :content, presence: true, length: { maximum: 8000 }
          validates :title, presence: true, length: { maximum: 24 }
          validates :imagesource, presence: { message: "Please search and choose an image" }, length: { maximum: 255 }
          validates_inclusion_of :privacy, :in => [true, false]
          validates :slug, uniqueness: true, presence: true, exclusion: { in: %w[signup signin signout home info privacy] }

          # for sorting featured and newest posts first
          default_scope order: 'pins.created_at DESC'
          scope :featured_order, order: 'pins.featured DESC'

          def to_param
            slug # or "#{id}-#{name}".parameterize
          end

          def generate_slug
            # makes the url slug address bar friendly
            self.slug ||= loop do
              random_token = Digest::MD5.hexdigest(Time.zone.now.to_s + title)[0..9] + "-" + "#{title}".parameterize
              break random_token unless Pin.where(slug: random_token).exists?
            end
          end

          protected

          def capitalize_title
            self.title = title.split.map(&:capitalize).join(' ')
          end
        end

    users_controller.rb:

        class UsersController < ApplicationController
          before_filter :signed_in_user, only: [:edit, :update, :show]
          before_filter :correct_user, only: [:edit, :update, :show]
          before_filter :admin_user, only: :destroy

          def index
            if !current_user.admin?
              redirect_to root_path
            end
          end

          def menu
            @user = current_user
          end

          def show
            @user = User.find(params[:id])
            @pins = @user.pins
            current_user.touch(:last_log_in) # sets the last log in time
            if [email protected]?
              render 'pages/info/'
            end
          end

          def new
            @user = User.new
          end

    pins_controller.rb:

        class PinsController < ApplicationController
          before_filter :signed_in_user, except: [:show]

          # GET /pins, GET /pins.json
          def index # Live Feed
            @pins = Pin.all
            @featured_pins = Pin.featured_order
            respond_to do |format|
              format.html # index.html.erb
              format.json { render json: @pins }
            end
          end

          # GET /pins, GET /pins.json
          def show # single Pin view
            @pin = Pin.find_by_slug!(params[:id])
            require 'uri' # this gets the photo's id from the stored uri
            @image_id = URI(@pin.imagesource).path.split('/').second
            if @pin.privacy == true # check for private pins
              if signed_in?
                if @pin.user_id == current_user.id
                  respond_to do |format|
                    format.html # show.html.erb
                    format.json { render json: @pin }
                  end
                else
                  redirect_to home_path, notice: "Prohibited 1"
                end
              else
                redirect_to home_path, notice: "Prohibited 2"
              end
            else
              respond_to do |format|
                format.html # show.html.erb
                format.json { render json: @pin }
              end
            end
          end

          # GET /pins, GET /pins.json
          def new
            @pin = current_user.pins.new
            respond_to do |format|
              format.html # new.html.erb
              format.json { render json: @pin }
            end
          end

          # GET /pins/1/edit
          def edit
            @pin = current_user.pins.find_by_slug!(params[:id])
          end

    Finally, on my index.html.erb I have:

        <%= render @featured_pins %>

    Read the article

  • Copy new records from datatable and identify changes in old records

    - by Betite
    Assume there are two tables: Remote_table and My_table. Remote_table has 6 columns:

        **PROJECT  JOB_TYPE  MONTH  YEAR**  HOURS  IS_DELETED
        134393     70        1      2013    30     0
        134393     70        2      2013    50     0
        134393     70        3      2013    80     0
        134393     70        10     2012    10     0
        134393     70        11     2012    0      0
        134393     70        12     2012    15     0

    My_table is a copy of Remote_table. I tried to copy only the new records from Remote_table with this query:

        SELECT * FROM [remote_DB].[LudanProjectManager].[dbo].Remote_table
        EXCEPT
        SELECT * FROM My_table

    It works OK, but I get a duplicate primary key exception when changes have been made on Remote_table in the HOURS column. Can anyone think of a way to copy only the new records from Remote_table and, if changes have been made to old records, to identify them and update My_table to correspond?
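
    A sketch of the upsert usually used for this on SQL Server 2008, assuming PROJECT, JOB_TYPE, MONTH and YEAR together identify a row (as the question's highlighted key columns suggest):

        MERGE My_table AS target
        USING [remote_DB].[LudanProjectManager].[dbo].Remote_table AS source
            ON  target.PROJECT    = source.PROJECT
            AND target.JOB_TYPE   = source.JOB_TYPE
            AND target.[MONTH]    = source.[MONTH]
            AND target.[YEAR]     = source.[YEAR]
        WHEN MATCHED AND (target.HOURS <> source.HOURS OR target.IS_DELETED <> source.IS_DELETED) THEN
            UPDATE SET HOURS = source.HOURS, IS_DELETED = source.IS_DELETED
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (PROJECT, JOB_TYPE, [MONTH], [YEAR], HOURS, IS_DELETED)
            VALUES (source.PROJECT, source.JOB_TYPE, source.[MONTH], source.[YEAR], source.HOURS, source.IS_DELETED);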

    Read the article

  • Migrating Data to MSSQL 2008

    - by Fred Clown
    I am trying to migrate data from an Informix database to MSSQL 2008, and I've got quite a lot of data to move. I've been trying multiple methods to get the data over, and so far SqlBulkCopy in multiple chunks seems to be the fastest that I can find. Does anyone know of a faster means of getting the data over? I'm trying to cut down the transfer time so that on my cut-over date I don't run out of time to do the full cut-over. Thanks.

    Read the article

  • Get records from SQLite grouped by month

    - by peacmaker
    Hi, I have an SQLite DB which contains transactions. Each transaction has a price and a transDate. I want to retrieve the sum of the transactions grouped by month, so the retrieved records should look like the following:

        Price  Month
        230    2
        500    3
        400    4

    Please, any help?
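
    A sketch of the usual SQLite approach, using strftime to bucket by month (it assumes a transactions table with price and transDate columns, and that transDate is stored in a format strftime understands, e.g. 'YYYY-MM-DD'):

        SELECT CAST(strftime('%m', transDate) AS INTEGER) AS Month
             , SUM(price) AS Price
        FROM transactions
        GROUP BY strftime('%Y', transDate), strftime('%m', transDate)
        ORDER BY strftime('%Y', transDate), strftime('%m', transDate);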

    Read the article

  • Highlight row in report?

    - by sanjeev40084
    I have an SSRS report which displays hundreds of rows. I was wondering if there is any way I can highlight rows so that I can easily tell which row I am on while reading the report. Any thoughts?

    Read the article

  • SQLException: incorrect syntax near '2'.

    - by Tobechukwu Ezenachukwu
    Whenever I call ExecuteNonQuery on the following CommandText, I get the above SqlException:

        myCommand.CommandText = "INSERT INTO fixtures (round_id, matchcode, date_utc, time_utc, date_london, time_london, team_A_id, team_A, team_A_country, team_B_id, team_B, team_B_country, status, gameweek, winner, fs_A, fs_B, hts_A, hts_B, ets_A, ets_B, ps_A, ps_B, last_updated) VALUES (" _
            & round_id & "," & match_id & "," & date_utc & ",'" & time_utc & "'," & date_london & ",'" & time_london & "'," & team_A_id & ",'" & team_A_name & "','" & team_A_country & "'," & team_B_id & ",'" & team_B_name & "','" & _
            team_B_country & "','" & status & "'," & gameweek & ",'" & winner & "'," & fs_A & "," & fs_B & "," & hts_A & "," & hts_B & "," & ets_A & "," & ets_B & "," & ps_A & "," & ps_B & "," & last_updated & ")"

    But whenever I remove the last column, last_updated, the error disappears. Please help me resolve this issue. Is there any special treatment to be given to datetime fields? Thanks for your help.
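
    The likely cause is that the date/datetime values (date_utc, date_london, last_updated) are concatenated into the SQL without surrounding quotes, so the server sees an unquoted datetime and reports a syntax error at its first digit. Datetime literals have to be quoted strings (or, better, passed as parameters). A minimal illustration with a hypothetical table:

        CREATE TABLE demo_fixtures (last_updated DATETIME);

        -- Roughly what the concatenation produces: an unquoted datetime, which fails
        -- with "incorrect syntax near ..."
        -- INSERT INTO demo_fixtures (last_updated) VALUES (Jun  5 2010  2:30PM);

        -- Quoting the value (or using a parameterized command) fixes it
        INSERT INTO demo_fixtures (last_updated) VALUES ('2010-06-05 14:30:00');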

    Read the article

  • MySQL: Limit output according to associated ID

    - by Jess
    So here's my situation. I have a books table and an authors table. An author can have many books... In my authors page view, the user (logged in) can click an author in a table row and be directed to a page displaying that author's books (reached via a URI of this format: viewauthorbooks.php?author_id=23), very straightforward... However, in my query I need to display the books for that author only, and not all books stored in the books table (as I currently have!)

    As I am a complete novice, I used the simplest query of:

        SELECT * FROM tasks_tb

    This returns the books for me, but it returns every single row in the table, not just the ones associated with the selected author. And when I click a different author, the same books are displayed for them... I think everyone gets what I'm trying to achieve, I just don't know how to perform the query. I'm guessing that I need to start using more advanced query clauses like INNER JOIN etc. Anyone care to help me out :)
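
    A sketch of the kind of filtered query this usually needs (the books/authors table and column names are assumptions based on the description; the author_id value would come from the query string, ideally through a parameterized statement rather than string interpolation):

        -- Books for one author only
        SELECT b.*
        FROM books b
        WHERE b.author_id = 23;

        -- Or, if author details are needed on the same page, join the two tables
        SELECT a.name, b.title
        FROM authors a
        INNER JOIN books b ON b.author_id = a.author_id
        WHERE a.author_id = 23;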

    Read the article

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following: SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date, INTO [#Gadget] FROM task t SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client FROM [#Gadget] order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC DROP TABLE [#Gadget] (I have removed the complex subquery, cos I dont think its relevant other than to explain why this query has been done as a two stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries as : SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) FROM ( SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date, FROM task t ) as sub order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC This would give the optimiser better information to work out what was going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower. 8 seconds vs under 5 seconds. I cant work out why this would be the case as all my knowledge of databases imply that subqueries would always be faster than using temporary tables. Can anyone explain what could be going on!?!?

    Read the article

  • How to query range of data in DB2 with highest performance?

    - by Fuangwith S.
    Usually I need to retrieve data from a table in some range; for example, a separate page for each search result. In MySQL I use the LIMIT keyword, but in DB2 I don't know the equivalent. Currently I use this query to retrieve a range of data:

        SELECT *
        FROM (
            SELECT SMALLINT(RANK() OVER(ORDER BY NAME DESC)) AS RUNNING_NO
                 , DATA_KEY_VALUE
                 , SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
            ORDER BY NAME DESC
            FETCH FIRST 20 ROWS ONLY
        ) AS TMP
        ORDER BY TMP.RUNNING_NO ASC
        FETCH FIRST 10 ROWS ONLY

    but I know it's bad style. So, how should I query for the best performance?
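
    The common DB2 pagination pattern is to number the rows once with ROW_NUMBER() and filter on that range in the outer query; a sketch using the question's columns, here fetching rows 11-20 (the second page of 10):

        SELECT DATA_KEY_VALUE
             , SHOW_PRIORITY
        FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY NAME DESC) AS RN
                 , DATA_KEY_VALUE
                 , SHOW_PRIORITY
            FROM EMPLOYEE
            WHERE NAME LIKE 'DEL%'
        ) AS TMP
        WHERE RN BETWEEN 11 AND 20
        ORDER BY RN;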

    Read the article

  • Procedure Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32).

    - by Nick
    The stored proc is failing at the location below. Thanks for all your help.

        --Insert MSOrg Information
        DECLARE @PersonnelNumber int, @MSOrg varchar(255)

        DECLARE csr CURSOR FAST_FORWARD FOR
            SELECT PersonnelNumber FROM Person

        OPEN csr
        FETCH NEXT FROM csr INTO @PersonnelNumber
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC GetMSOrg @PersonnelNumber, @MSOrg out

            INSERT INTO PersonSubject
            (
                PersonnelNumber
                ,SubjectID
                ,SubjectValue
                ,Created
                ,Updated
            )
            SELECT @PersonnelNumber
                ,SubjectID
                ,@MSOrg
                ,getDate()
                ,getDate()
            FROM Subject
            WHERE DisplayName = 'MS Org'

            FETCH NEXT FROM csr INTO @PersonnelNumber
        END
        CLOSE csr
        DEALLOCATE csr

    Below is the definition of the stored procedure GetMSOrg; it fails at the third condition:

        CREATE PROCEDURE [dbo].[GetMSOrg]
        (
            @PersonnelNumber int
            ,@OrgTerm varchar(200) out
        )
        AS
        DECLARE @MDRTermID int
            ,@ReportsToPersonnelNbr int

        --Check to see if we have reached the top of the chart
        SELECT @ReportsToPersonnelNbr = ReportsToPersonnelNbr
        FROM ReportsTo
        WHERE PersonnelNumber = @PersonnelNumber

        IF (@ReportsToPersonnelNbr IS NULL) --Reached the Top of the Org Ladder
        BEGIN
            SET @OrgTerm = 'Non-standard rollup'
        END
        ELSE IF (@PersonnelNumber IN (SELECT PersonnelNumber FROM OrgTermMap))
        BEGIN
            SELECT @OrgTerm = s.Term
            FROM OrgTermMap tm
            JOIN Taxonomy..StaticHierarchy s ON tm.OrgTermID = s.TermID
            WHERE tm.PersonnelNumber = @PersonnelNumber
        END
        ELSE
        BEGIN
            SELECT @MDRTermID = tm.OrgTermID
            FROM ReportsTo r
            JOIN OrgTermMap tm ON r.ReportsToPersonnelNbr = tm.PersonnelNumber
            WHERE r.PersonnelNumber = @PersonnelNumber

            IF (@MDRTermID IS NULL)
            BEGIN
                EXEC GetMSOrg @ReportsToPersonnelNbr, @OrgTerm out
            END
            ELSE
            BEGIN
                SELECT @OrgTerm = Term
                FROM Taxonomy..StaticHierarchy
                WHERE VocabID = 118
                AND TermID = @MDRTermID
            END
        END
        GO

    Read the article

  • Strange behavior with large Object Types

    - by Peter Lang
    I recognized that calling a method on an Oracle Object Type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the Object Type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I just remove the call to dummy, performance is much better (the collection still contains the same number of records):

        Calling dummy:   Not calling dummy:
        11               0
        81               0
        158              0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
            tab t_tab,
            Member Procedure dummy
        );

        Create Type Body test_type As
            Member Procedure dummy As
            Begin
                Null; --# Do nothing
            End dummy;
        End;

        Declare
            v_test_type test_type := New test_type( New t_tab() );

            Procedure run_test As
                start_time NUMBER := dbms_utility.get_time;
            Begin
                For i In 1 .. 200 Loop
                    v_test_Type.tab.Extend;
                    v_test_Type.tab(v_test_Type.tab.Last) := Lpad(' ', 10000);
                    v_test_Type.dummy(); --# Removed this line in second test
                End Loop;
                dbms_output.put_line( dbms_utility.get_time - start_time );
            End run_test;
        Begin
            run_test;
            run_test;
            run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?

    Read the article

  • DB Interface Design Optimization: Is it better to optimise for fewer requests or for smaller data size?

    - by Overflow
    The prevailing wisdom in web services, and web requests in general, is to design your API such that you use as few requests as possible, with each request therefore returning as much data as is needed. In database design, the accepted wisdom is to design your queries to minimise size over the network, as opposed to minimising the number of queries. They are both remote calls, so what gives?

    Read the article
