Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • How do I properly use LINQ with MySQL?

    - by Arda Xi
    I've been looking this up on Google for hours, but I haven't found anything conclusive. So far, I've seen a few paid options, an option with NHibernate, but most are marked as unstable or in production. Is there a stable implementation of LINQ for MySQL?

    Read the article

  • Insert Stored Procedure does not Create Database Record

    - by SidC
    Hello All, I have the following stored procedure:

        ALTER PROCEDURE Pro_members_Insert
            @id int OUTPUT,
            @LoginName nvarchar(50),
            @Password nvarchar(15),
            @FirstName nvarchar(100),
            @LastName nvarchar(100),
            @signupDate smalldatetime,
            @Company nvarchar(100),
            @Phone nvarchar(50),
            @Email nvarchar(150),
            @Address nvarchar(255),
            @PostalCode nvarchar(10),
            @State_Province nvarchar(100),
            @City nvarchar(50),
            @countryCode nvarchar(4),
            @active bit,
            @activationCode nvarchar(50)
        AS
        declare @usName as varchar(50)
        set @usName = ''
        select @usName = isnull(LoginName, '') from members where LoginName = @LoginName
        if @usName <> ''
        begin
            set @id = -3
            RAISERROR('User Already exist.', 16, 1)
            return
        end
        set @usName = ''
        select @usName = isnull(email, '') from members where Email = @Email
        if @usName <> ''
        begin
            set @id = -4
            RAISERROR('Email Already exist.', 16, 1)
            return
        end
        declare @MemID as int
        select @MemID = isnull(max(ID), 0) + 1 from members
        INSERT INTO members
            (id, LoginName, Password, FirstName, LastName, signupDate, Company, Phone,
             Email, Address, PostalCode, State_Province, City, countryCode, active, activationCode)
        VALUES
            (@MemID, @LoginName, @Password, @FirstName, @LastName, @signupDate, @Company, @Phone,
             @Email, @Address, @PostalCode, @State_Province, @City, @countryCode, @active, @activationCode)
        if @@error <> 0
            set @id = -1
        else
            set @id = @MemID

    Note that I've "inherited" this sproc and the database. I am trying to insert a new record from my signup.aspx page. My SqlDataSource is as follows:

        <asp:SqlDataSource runat="server" ID="dsAddMember"
            ConnectionString="rmsdbuser"
            InsertCommandType="StoredProcedure"
            InsertCommand="Pro_members_Insert"
            ProviderName="System.Data.SqlClient">

    The click handler for btnSave is as follows:

        Protected Sub btnSave_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSave.Click
            Try
                dsAddMember.DataBind()
            Catch ex As Exception
            End Try
        End Sub

    When I run signup.aspx, provide the required fields and click Submit, the page simply reloads and the database table does not reflect the newly inserted record. Questions:

    1. How do I catch the error messages that might be returned from the sproc?
    2. Please advise how to change signup.aspx so that the insert occurs.

    Thanks, Sid
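
    A hedged sketch of a likely fix (not from the original thread): SqlDataSource.DataBind() never executes the InsertCommand; calling Insert() does, once the InsertParameters (which the markup above doesn't show) are populated from the form fields. Catching SqlException surfaces the RAISERROR messages, since severity-16 errors arrive as exceptions; lblStatus is a hypothetical control for display:

        Imports System.Data.SqlClient

        Protected Sub btnSave_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnSave.Click
            Try
                ' Insert() fires the InsertCommand; DataBind() only re-reads data.
                dsAddMember.Insert()
            Catch ex As SqlException
                ' RAISERROR('...', 16, 1) from the sproc lands here.
                lblStatus.Text = ex.Message
            End Try
        End Sub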

    Read the article

  • SQLServer using too much memory

    - by Israel Pereira Valverde
    I have installed SQL Server 2008 R2 Express on my desktop machine (Windows 7). I have only one local server running (./SQLEXPRESS), but the sqlserver process is taking all the RAM it can get. On a machine with 3 GB of RAM things start to get slow, so I limited the maximum amount of RAM in the server, and now SQL Server constantly gives error messages that there is not enough memory. It's using 1 GB of RAM with only one local server and 2 completely empty databases; how is 1 GB of RAM not enough? When the process starts it uses a really acceptable amount of memory (around 80 MB), but it keeps increasing until it reaches the defined maximum and starts to complain about not having enough memory available. At that point I have to restart the server to use it again. I have read about a hotfix to solve one of the errors I got from SQL Server:

        There is insufficient system memory in resource pool 'internal' to run this query

    But it's already installed on my SQL Server. Why is it using so much memory?
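
    A note on what is likely going on, with a hedged sketch: SQL Server caches data pages in its buffer pool and, by design, grows toward its configured ceiling rather than releasing memory, so steady growth up to the cap is normal behavior. The standard way to bound it (rather than restarting the service) is the max server memory option:

        -- Cap the instance at 512 MB (the value is illustrative; tune it for the machine).
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 512;
        RECONFIGURE;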

    Read the article

  • C# Dataset Dynamically Add DataColumn

    - by Wesley
    I am trying to add an extra column to a DataSet after a query has completed. I have a database relationship of the following:

            Employees
            /       \
        Groups    EmployeeGroups

    Employees holds all the data for an individual; I'll name the unique key UserID. Groups holds all the groups an employee can be a part of (i.e. Super User, Admin, User, etc.); I will name the unique key GroupID. EmployeeGroups holds all the associations of which groups each employee belongs to (UserID | GroupID).

    What I am trying to accomplish: after querying for all users, I want to loop through each user and add the groups that user is a part of, by adding a new column to the DataSet named 'Groups' (a string) to hold the values of the next query, which fetches all the groups that user is a part of. Then, by use of databinding, I populate a ListView with all employees and their group associations. My code is as follows; position 5 is the new column I am trying to add to the DataSet:

        string theQuery = "select UserID, FirstName, LastName, EmployeeID, Active from Employees";
        DataSet theEmployeeSet = itsDatabase.runQuery(theQuery);
        DataColumn theCol = new DataColumn("Groups", typeof(string));
        theEmployeeSet.Tables[0].Columns.Add(theCol);
        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            theRow.ItemArray[5] = "1234";
        }

    At the moment, the code will create the new column, but when I assign the data to that column nothing is assigned. What am I missing? If there is any further explanation or information I can provide, please let me know. Thank you all
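
    A hedged diagnosis (not from the original thread): DataRow.ItemArray returns a copy of the row's values, so writing into that copy never reaches the row itself. Assigning through the row's indexer does persist:

        foreach (DataRow theRow in theEmployeeSet.Tables[0].Rows)
        {
            // ItemArray's getter hands back a copy; write through the indexer instead.
            theRow["Groups"] = "1234";
        }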

    Read the article

  • Select statement with multiple 'where' fields using same value without duplicating text

    - by kdbdallas
    I will start by saying that I don't think what I want can be done, but that said, I am hoping I am wrong and someone knows more than me. So here is your chance... prove you are smarter than me :) I want to do a search against a SQLite table looking for any records that "are similar" without having to write out the query in longhand. To clarify, this is how I know I can write the query:

        select * from Articles
        where title like '%Bla%' or category like '%Bla%' or post like '%Bla%'

    This works and is not a huge deal if you are only checking against a couple of columns, but if you need to check against a bunch, then your query can get really long and nasty-looking really fast, not to mention the chance for typos (i.e. 'Bla%' instead of '%Bla%'). What I am wondering is if there is a shorthand way to do this. This next query does not work the way I want, but it shows the kind of thing I am looking for:

        select * from Articles
        where title or category or post like '%Bla%'

    Does anyone know if there is a way to specify that multiple 'where' columns should use the same search value without listing that same search value for every column? Thanks in advance!
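
    One common shorthand in SQLite, offered as a sketch: concatenate the searchable columns once and apply a single LIKE. The trade-offs are that the pattern can also match across column boundaries, and COALESCE is needed so a NULL column doesn't null out the whole string:

        -- One LIKE over a concatenation of the columns.
        SELECT *
        FROM Articles
        WHERE COALESCE(title, '') || ' ' || COALESCE(category, '') || ' ' || COALESCE(post, '')
              LIKE '%Bla%';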

    Read the article

  • How do I create a stored procedure that calls sp_refreshview for each view in the database?

    - by Allrameest
    Today I ran this:

        select 'exec sp_refreshview N''[' + table_schema + '].[' + table_name + ']'''
        from information_schema.tables
        where table_type = 'view'

    This generates a lot of lines like exec sp_refreshview N'[SCHEMA].[TABLE]'. I then copy the result to the query editor window and run all those execs. How do I do this all at once? I would like to have a stored procedure called something like dev.RefreshAllViews which I can execute to do this...
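
    A minimal sketch of such a procedure, using a cursor over INFORMATION_SCHEMA.VIEWS (the dev schema name comes from the question; everything else is generic):

        CREATE PROCEDURE dev.RefreshAllViews
        AS
        BEGIN
            DECLARE @view nvarchar(778);
            DECLARE view_cursor CURSOR LOCAL FAST_FORWARD FOR
                SELECT QUOTENAME(table_schema) + '.' + QUOTENAME(table_name)
                FROM information_schema.views;
            OPEN view_cursor;
            FETCH NEXT FROM view_cursor INTO @view;
            WHILE @@FETCH_STATUS = 0
            BEGIN
                EXEC sp_refreshview @view;   -- rebinds the view's metadata
                FETCH NEXT FROM view_cursor INTO @view;
            END
            CLOSE view_cursor;
            DEALLOCATE view_cursor;
        END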

    Read the article

  • openquery giving different results

    - by Mithil Deshmukh
    I have 2 similar queries:

        select * from openquery(powerschool,
            'select * from TEACHERS
             where teachernumber is not null and schoolid=''1050'' and teacherloginid is not null
             order by teachernumber')

    and

        SELECT * from openquery(powerschool,
            'SELECT NVL(teachernumber,'''') from TEACHERS
             where teachernumber is not null and schoolid=''1050'' and teacherloginid is not null
             order by teachernumber')

    The first one gives me 182 rows while the second one gives me 83. What's wrong with the queries?

    Read the article

  • slow record deletion with large ntext values

    - by asking
    I'm having trouble deleting some records via a stored procedure from a table in SQL Server 2008 R2 that has ntext columns. The stored proc is timing out, and running the query directly takes a very long time. The initial query was a straight delete from y where x = z, and I've also tried running it in batches of 1000 with transactions, but it is still slow and timing out in a stored proc. The majority of the records in the table will not be deleted each time (it's not just a one-off query but will be run at other times). The ntext columns are not used in the where clause, and I can't change the column types. Any suggestions on the quickest way to delete records with large ntext values? Thanks
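
    For reference, a sketch of the batching pattern (y, x and @z are the question's placeholders), with the two things that usually matter most: an index on the filter column so each batch seeks instead of scanning, and short transactions so LOB deallocation doesn't balloon the log:

        -- Assumes an index on y(x); each iteration commits as its own short transaction.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (1000) FROM y WHERE x = @z;
            IF @@ROWCOUNT = 0 BREAK;
        END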

    Read the article

  • Alternative to NOT EXISTS

    - by Dave Colwell
    Hi all, I have two tables linked by an ID column; let's call them Table A and Table B. My goal is to find all the records in Table A that have no record in Table B. For instance:

        Table A:
        ID    Value
        1     value1
        2     value2
        3     value3
        4     value4

        Table B:
        ID    Value
        1     x
        2     y
        4     z
        4     l

    As you can see, the record with ID = 3 does not exist in Table B, so I want a query that will give me record 3 from Table A. The way I am currently doing this is by saying AND NOT EXISTS (SELECT ID FROM TableB), but since the tables are huge, the performance on this is terrible. Also, when I tried using a LEFT JOIN where TableB.ID is null, it didn't work. Can anyone suggest an alternative?
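
    A sketch of the anti-join form, with two hedged observations: the posted NOT EXISTS has no correlation to the outer row, so it filters everything or nothing, and the LEFT JOIN version only works when the IS NULL test is on the right-hand table's join column:

        SELECT a.ID, a.Value
        FROM TableA AS a
        LEFT JOIN TableB AS b ON b.ID = a.ID
        WHERE b.ID IS NULL;

        -- Equivalent correlated form; optimizers treat both as an anti-join.
        SELECT a.ID, a.Value
        FROM TableA AS a
        WHERE NOT EXISTS (SELECT 1 FROM TableB AS b WHERE b.ID = a.ID);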

    Read the article

  • selecting a range of verses from a database

    - by Noam Smadja
    I have a database with verses from the bible, with these fields: book (book number), chapter (chapter number), verse (verse number), text (the verse). Example:

        1  1  1  In the beginning God created the heaven and the earth.

    The first 1 is for Genesis, the second 1 is for chapter 1, the third 1 is for verse 1. The user gives me something like 1 1:1 - 1 1:4, which means he wants to see Genesis 1:1-4. What I want to do is something like:

        SELECT book*100000 + chapter*1000 + verse AS index
        FROM bible
        WHERE index >= 1001001 AND index <= 1001004

    or

        WHERE book*100000 + chapter*1000 + verse >= 1001001
          AND book*100000 + chapter*1000 + verse <= 1001004
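
    A sketch of the second form, which is the one that works in most dialects (a column alias from the SELECT list is generally not visible in its own WHERE clause):

        SELECT book, chapter, verse, text
        FROM bible
        WHERE book*100000 + chapter*1000 + verse BETWEEN 1001001 AND 1001004
        ORDER BY book, chapter, verse;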

    Read the article

  • Using Report (Reporting Services) parameter values in ASP.NET page

    - by noup
    I have a report (Reporting Services) integrated into an ASP.NET page that shows dropdownlists to select report parameter values. The dropdownlists are populated using direct database selects, though I see the report RDL files do contain the parameter values and datasets as defined in the report designer. Is it possible to obtain the report parameters' "available values" in ASP.NET to populate the dropdownlists? This would avoid some code duplication.

    Update: If the parameter doesn't use a query for available values, the following works:

        foreach (ValidValue value in this.ReportViewerControl.ServerReport.GetParameters()["myParameter"].ValidValues)
        {
            this.DropDownListControl.Items.Add(new ListItem(value.Label, value.Value));
        }

    Still haven't found a way to access report datasets though...

    Read the article

  • How to return a record from function, executed by INSERT/UPDATE rule (trigger)?

    - by seas
    I have the following schema for my database:

        create sequence data_sequence;

        create table data_table (
            id integer primary key,
            field varchar(100)
        );

        create view data_view as
            select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
            _id integer;
            _result data_view%rowtype;
        begin
            _id := nextval('data_sequence');
            insert into data_table(id, field) values(_id, _new.field);
            select * into _result from data_view where id = _id;
            return _result;
        end;
        $$ language plpgsql;

        create rule "insert" as on insert to data_view do instead
            select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1,"abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I could obtain the id of a just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable

    would be nice, but doesn't work, with the error:

        ERROR:  cannot perform INSERT RETURNING on relation "data_view"
        HINT:  You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.
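
    A sketch of what the HINT is asking for, under the question's PostgreSQL 8.4 setup: replace the function-calling rule with a single unconditional INSERT rule that carries its own RETURNING list. With that in place, INSERT ... RETURNING on the view works (the rule name is illustrative):

        CREATE RULE data_view_insert AS ON INSERT TO data_view
        DO INSTEAD
            INSERT INTO data_table (id, field)
            VALUES (nextval('data_sequence'), NEW.field)
            RETURNING data_table.id, data_table.field;

    After this, insert into data_view(field) values('abc') returning id yields the generated id directly.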

    Read the article

  • Why does MSSQL keep throwing me exceptions?

    - by Augusto Càzares
    I have a project in .NET that uses a database in MSSQL Server, and I'm using LINQ. Sometimes when the project throws an exception (a constraint violation) in one part of the project, the same error keeps showing up in other parts when I do something else with the database. For example, when I do an insertion after an earlier exception on a delete, the insertion throws the delete exception, and it stays this way until I close and reopen the project. My major problem is when this happens in my online project: the error causes problems in the project I'm testing online (I use the same database). I don't know if this exception is held in memory or something, but it has been causing me a lot of headaches.
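
    A hedged guess at the pattern behind this: if a single LINQ to SQL DataContext is shared and long-lived, a failed SubmitChanges() leaves the offending change pending, so every later SubmitChanges() replays the same constraint violation. Scoping one context per unit of work avoids it (MyDataContext and newMember are illustrative names):

        // One DataContext per unit of work; a failed SubmitChanges() otherwise
        // leaves the bad pending change queued on a shared context.
        using (var db = new MyDataContext(connectionString))
        {
            db.Members.InsertOnSubmit(newMember);
            db.SubmitChanges();   // an exception here stays scoped to this context
        }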

    Read the article

  • Is there an "are you sure" for stored procedure execution? :)

    - by Swoosh
    I have a stored procedure which does a lot of deletes. Hundreds of thousands of records. It is not going to be runnable from the application, but still, I am concerned that one of my clients accidentally runs it (I had problems earlier due to their "curiosity") :D Yes, there are backups and stuff like that, but I was thinking... not to scare them... is there a way to ask the user "are you sure?" before executing it? :) thanks
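
    Stored procedures have no interactive prompt, but a common trick is a confirmation parameter that makes a casual EXEC a no-op; a sketch (the procedure name is illustrative):

        ALTER PROCEDURE dbo.PurgeOldRecords
            @AreYouSure char(3) = 'NO'
        AS
        BEGIN
            IF @AreYouSure <> 'YES'
            BEGIN
                RAISERROR('This procedure deletes data. Run with @AreYouSure = ''YES'' to proceed.', 16, 1);
                RETURN;
            END
            -- ... the deletes go here ...
        END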

    Read the article

  • Customizing Mail Message in SSIS Event Handler

    - by Eric Ness
    I want to add an email notification to an SSIS 2005 package event handler, so I've added a Send Mail task to the event handler. I'd like to customize the email body to include things like the error description. I've tried including @[System::ErrorDescription] in the MessageSource field, but the mail message doesn't include the value of ErrorDescription, only the name of the variable.
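
    A hedged note: with MessageSourceType set to Direct Input, the MessageSource text is sent literally. Two approaches that commonly work are pointing MessageSourceType at a variable whose value is built by an expression, or attaching a property expression to the task's MessageSource property, e.g.:

        "Package failed with error: " + @[System::ErrorDescription]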

    Read the article

  • When should we use Views, Temporary Tables and Direct Queries ? What are the Performance issues in a

    - by Shantanu Gupta
    I want to know the performance of using views, temp tables and direct queries in a stored procedure. I have a table that gets created every time a trigger is fired. I know this trigger will be fired very rarely, and only once at the time of setup. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes in that table, i.e. it is a read-only table. I have to use this table's data, joined with multiple other tables, to fetch results for further queries, say:

        select * from triggertable

    By using a temp table:

        select ... into #tx from triggertable join t2 join t3 and so on
        select a,b,c from #tx   --do something
        select d,e,f from #tx   --do something
        --and so on: around 6-7 queries in a row in a stored procedure

    By using a view:

        create view viewname as
            select ... from triggertable join t2 join t3 and so on

        select a,b,c from viewname   --do something
        select d,e,f from viewname   --do something
        --and so on: around 6-7 queries in a row in a stored procedure

    This view can be used in other places as well, so I would create it in the database rather than in the stored procedure.

    By using a direct query:

        select a,b,c from (select ... from triggertable join t2 join t3 join ...)   --do something
        select d,e,f from (select ... from triggertable join t2 join t3 join ...)   --do something
        --and so on: around 6-7 queries in a row in a stored procedure

    Now I can create a view, create a temp table, or use a direct query in all upcoming queries. Which would be the best to use in this case?

    Read the article

  • Using a CMS with an external database

    - by George Reith
    I am looking at building an external site with a CMS, probably Drupal or ExpressionEngine. The problem is that our company already has a membership database that is designed to work with our existing enterprise software. Migrating data from the database manually is not an option, as modifications and new data must be accessible in real time. Because the design of the external database will differ from the CMS's own, I have decided the best way forward is to use two databases and force the CMS to use the external one to read user information (it cannot write to it) and a local one for everything else the CMS needs to do (read + write). Is this feasible with Drupal or ExpressionEngine? Ideally I need to be able to use hooks, as I do not want to modify core CMS files. Sifting through the docs, I am not able to find what I would hook into for either CMS. (Note: I know it is possible, but I want to know if it's feasible.) Finally, if there is a better way of handling this situation, please also chime in. Perhaps there is something at the database level to reference a field or table in an external database? I'm clutching at straws; someone can point me in the right direction, I'm sure.

    Read the article

  • Get column of a mysql entry

    - by Xelluloid
    Is there a possibility to get the name of the column a database entry belongs to? Perhaps I have three columns with column names col1, col2 and col3. Now I want to select, for every row, the name of the column with the maximum entry; something like this:

        select name_of_column(max(col1, col2, col3))

    I know that I can ask for the name of the columns by ordinal position in the information_schema.COLUMNS table, but how do I get the ordinal position of a database entry within a table?
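
    There is no built-in name_of_column function, but a sketch of one way in MySQL combines GREATEST with a CASE expression (my_table is an illustrative name; this assumes the columns are non-NULL and comparable):

        SELECT CASE GREATEST(col1, col2, col3)
                   WHEN col1 THEN 'col1'
                   WHEN col2 THEN 'col2'
                   ELSE 'col3'
               END AS name_of_max_column
        FROM my_table;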

    Read the article

  • Search sort by parameter match count in the query? PostgreSQL

    - by Ben Dauphinee
    I am working on a search query in PostgreSQL, and one of the things I do is sort my query results by the number of parameters matched. I have no clue how this can be done. Does anyone have a suggestion or solution?

        Table:
        brand      color    type     engine
        Ford       Blue     4-door   V8
        Maserati   Blue     2-door   V12
        Saturn     Green    4-door   V8
        GM         Yellow   1-door   V4

        Current query:
        SELECT brand FROM table
        WHERE color = 'Blue' or type = '4-door' or engine = 'V8'

        Result should be:
        Ford     (3 matches)
        Saturn   (2 matches)
        Maserati (1 match)
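
    A sketch in PostgreSQL: cast each boolean test to an integer, sum the results, and sort by the total (cars stands in for the question's placeholder table name):

        SELECT brand,
               ((color  = 'Blue')::int
              + (type   = '4-door')::int
              + (engine = 'V8')::int) AS matches
        FROM cars
        WHERE color = 'Blue' OR type = '4-door' OR engine = 'V8'
        ORDER BY matches DESC;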

    Read the article

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello, recently I have been working on a project where I need to populate dimension tables from EDW tables. The EDW tables are Type II, i.e. they maintain historical data. When it comes to loading a dimension table, the source may be multiple EDW tables, or a single table with multi-level pivoting on attributes. Meaning: there could be 10 records, one for each attribute, which need to be pivoted on domain_code to make a single row in the dimension. Out of these 10 records there would be some attributes with the same domain_code but with different sub_domain_code, which need further pivoting on subdomain code. For example, domain codes 01, 02, 03 are a straight pivot on domain code, but I could also have domain code 10 with subdomain codes / versions 2006, 2007, 2008, 2009. That means I need to split my source table into two: one for domain code and the other for domain code + version. So far so good.

    When it comes to loading the dimension table: per the design specs for dimensions (originally written by a third party), for every single change of an attribute in the EDW, the load should assemble all the related records for that natural key (the new record plus the other records whose attribute values are current), process them to create a new dimension record, and insert it. That means if a single extract contains 100 updated records (one for each natural key), the load has to assemble 100 + (100*9) records to insert into or update the dimension table.

    How good is this approach? The other way I tried is to just do a lookup into the dimension table for that natural key, get the values of the recent record (the attributes which have not changed), insert the new record and update the current one. What would be the better approach: assembling records on the source side for one attribute change, or looking at the dimension table's recent record and processing it? If this doesn't make sense, I would be happy to elaborate further. Thanks

    Read the article

  • How do I design this link table?

    - by Soo
    Ok SO, I have a user table and want to define groups of users together. The best solution I have for this is to create three database tables as follows:

        UserTable
            user_id
            user_name

        UserGroupLink
            group_id
            member_id

        GroupInfo
            group_id
            group_name

    This method keeps the member and group information separate. This is just my way of thinking. Is there a better way to do this? Also, what is a good naming convention for tables that link two other tables?
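
    For reference, a sketch of the same design as DDL. One common naming convention for a junction table is simply the two linked table names joined together (users_groups below is illustrative), and a composite primary key keeps memberships unique:

        CREATE TABLE users (
            user_id   INT PRIMARY KEY,
            user_name VARCHAR(100) NOT NULL
        );

        CREATE TABLE groups (
            group_id   INT PRIMARY KEY,
            group_name VARCHAR(100) NOT NULL
        );

        -- Junction (link) table named after both sides it connects.
        CREATE TABLE users_groups (
            user_id  INT NOT NULL REFERENCES users (user_id),
            group_id INT NOT NULL REFERENCES groups (group_id),
            PRIMARY KEY (user_id, group_id)
        );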

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What are the database abstractions/adapters you are using in Ruby? I am mainly interested in data-oriented features, not in those with object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:

    - a simple, clean and non-ambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way; also reflection for this (get the schema from a table)
    - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
    - table creation and deletion
    - being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without requiring all records of the enumerated table to be fetched first

    The purpose is to manipulate data whose structure is unknown at the time of writing the code, which is the opposite of object mapping, where the structure (or most of it) is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?

    Read the article

  • SaaS Multi-tenancy Applications: How is data import/export/backup being implemented?

    - by Mark Redman
    How are applications providing import/export (or backups) of data in SaaS-based multi-tenancy applications, particularly single-database designs?

    Imports: keeping things simple, I think basic imports are useful, i.e. CSV to a spec (or a way of providing a mapping between CSV columns and fields in the database).

    Exports: in single-database designs I have seen XML exports and HTML (basic site-generated) exports of data. I would assume that XML is the better option? How does one cater for relational data? Would you reference various things within the XML and provide documentation of the relationships, or let users figure this out? Are vendors providing an export/backup that can be imported back in / restored? Your comments appreciated.

    Read the article
