Search Results

Search found 87891 results on 3516 pages for 'server migration'.


  • Does Microsoft Access use the PK fields for anything?

    - by chrismay
    OK, this is going to sound strange, but I have inherited an app that is an Access front end with a SQL Server backend. I am in the process of writing a new front end for it, but... we need to continue using the Access front end for a while even after we deploy my new front end, for reasons I won't go into. So both the existing Access app and my new app will need to be able to access and work with the data. The problem is that the database design is a nightmare. For example, some simple parent-child table relationships have 4- and 5-part composite primary keys. I would REALLY like to remove these PKs and replace them with unique constraints or whatever, and add a new column to each of these tables called ID that is just an identity. If I change the PKs and FKs on these tables to more manageable fields, will the Access app have problems? What I mean is, does Access use the metadata from the tables (PK and FK info) in such a way that changing these would break the app?
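
    For illustration only, a minimal sketch of the kind of change being considered here, using made-up table and column names (any FKs referencing the composite key would have to be dropped and recreated around it):

        -- Hypothetical example: add a surrogate identity key while keeping the
        -- old composite key enforced as a unique constraint.
        ALTER TABLE dbo.Orders ADD ID int IDENTITY(1,1) NOT NULL;

        ALTER TABLE dbo.Orders ADD CONSTRAINT UQ_Orders_NaturalKey
            UNIQUE (CustomerCode, OrderYear, OrderSeq);

        ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;   -- the old composite PK
        ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders_ID PRIMARY KEY (ID);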

    Read the article

  • Complex SQL design, help/advice needed

    - by eugeneK
    Hi, I have a few questions for the SQL gurus in here... Briefly, this is an ads management system where a user can define campaigns for different countries, categories and languages. I have a few questions in mind, so help me with what you can. Generally I'm using ASP.NET and I want to cache the whole result set for a certain user once he asks for statistics for the first time; this way I will avoid large round-trips to the server. Any help is welcomed. Click here for a diagram with all the details you need for my questions.
    1. The main issue of this application is to show the user how many clicks/impressions there were and how much money he spent on a campaign. What is the easiest way to get this information for him? I will also include filtering by date, date ranges and a few other params in this statistics table.
    2. The other issue is what happens when the user tries to edit a campaign. The old campaign will die; this means that if the user set $0.01 as the campaignPPU (pay-per-unit) and the next day updates it to $0.05, everything will be reset to $0.05.
    3. If you could re-design some parts of the table design so it would be more flexible and easier to modify, how would you do it?
    Thanks... sorry for such a large job, but it may interest some SQL guys in here.

    Read the article

  • Restoring dev db from production: Running a set of SQL scripts based on a list stored in a table?

    - by mattley
    I need to restore a backup from a production database and then automatically reapply SQL scripts (e.g. ALTER TABLE, INSERT, etc.) to bring that db schema back to what was under development. There will be lots of scripts, from a handful of different developers, and they won't all be in the same directory. My current plan is to list the scripts, with their full filesystem paths, in a table in a pseudo-system database. Then create a stored procedure in this database which will first run RESTORE DATABASE and then run a cursor over the list of scripts, building a SQLCMD command string for each one and executing it with xp_cmdshell. The sequence of cursor-sqlstring-xp_cmdshell-sqlcmd feels clumsy to me. Also, it requires turning on xp_cmdshell. I can't be the only one who has done something like this. Is there a cleaner way to run a set of scripts that are scattered around the filesystem on the server? Especially, a way that doesn't require xp_cmdshell?
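
    For reference, a minimal sketch of the cursor-plus-xp_cmdshell approach described above (the table, column, and database names are made up, and xp_cmdshell is assumed to be enabled):

        -- Run each script listed in dbo.ScriptList against the restored dev database.
        DECLARE @path nvarchar(400), @cmd varchar(1000);

        DECLARE script_cursor CURSOR FOR
            SELECT ScriptPath FROM dbo.ScriptList ORDER BY RunOrder;

        OPEN script_cursor;
        FETCH NEXT FROM script_cursor INTO @path;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @cmd = 'sqlcmd -S . -d DevDb -E -i "' + @path + '"';
            EXEC master.dbo.xp_cmdshell @cmd;
            FETCH NEXT FROM script_cursor INTO @path;
        END
        CLOSE script_cursor;
        DEALLOCATE script_cursor;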

    Read the article

  • Database cache that I'm not aware of?

    - by Martin
    I'm using ASP.NET MVC, LINQ to SQL, IIS 7 and SQL Server Express 2008. I get these intermittent server errors - primary key conflicts on insertion. I'm using a different setup on my development computer, so I can't debug. After a while they go away; restarting IIS helps. I'm getting the feeling there is a cache somewhere that I'm not aware of. Can somebody help me sort out these errors? Cannot insert duplicate key row in object 'dbo.EnquiryType' with unique index 'IX_EnquiryType'.
    Edits regarding Venemo's answer:
    Is it possible that another application is also accessing the same database simultaneously? Yes there is, but not this particular table, and no inserts or updates. There is one other table with which I experience the same problem, but it has to do with a different part of the model.
    How often and in what context do you create a new DataContext instance? Only once, using the singleton pattern.
    Are the primary keys generated by the database or by the application? Database.
    Which version of ASP.NET MVC and which version of .NET are you using? RC2 and 3.5.

    Read the article

  • Get Mail with PHP and IMAP in Gmail just loading

    - by Oscar Godson
    I'm not sure why. I've tried a bunch of different code - wrote it myself, and copied other people's tutorials - but with every bit of code it loads forever and eventually stops due to the script processing time limit on the server. Does anyone know why? Oh, and IMAP is turned on; I get IMAP/Exchange on my iPhone from this same account fine. And IMAP is turned on in my version of PHP (checked with phpinfo, they all say enabled).

        <?php
        /* connect to gmail */
        $hostname = '{imap.gmail.com:993/imap/ssl}INBOX';
        $username = '[email protected]';
        $password = 'xxxxxx';

        /* try to connect */
        $inbox = imap_open($hostname,$username,$password) or die('Cannot connect to Gmail: ' . imap_last_error());

        /* grab emails */
        $emails = imap_search($inbox,'ALL');

        /* if emails are returned, cycle through each... */
        if($emails) {
            /* begin output var */
            $output = '';
            /* put the newest emails on top */
            rsort($emails);
            /* for every email... */
            foreach($emails as $email_number) {
                /* get information specific to this email */
                $overview = imap_fetch_overview($inbox,$email_number,0);
                $message = imap_fetchbody($inbox,$email_number,2);
                /* output the email header information */
                $output.= '<div class="toggler '.($overview[0]->seen ? 'read' : 'unread').'">';
                $output.= '<span class="subject">'.$overview[0]->subject.'</span> ';
                $output.= '<span class="from">'.$overview[0]->from.'</span>';
                $output.= '<span class="date">on '.$overview[0]->date.'</span>';
                $output.= '</div>';
                /* output the email body */
                $output.= '<div class="body">'.$message.'</div>';
            }
            echo $output;
        }
        /* close the connection */
        imap_close($inbox);
        ?>

    Read the article

  • How to write a JOIN statement to combine data from disparate tables

    - by Amarundo
    I have the following 2 procedures that I use as my source for a report. As of now, I'm presenting 2 different tables in my SQL Server Reporting Services 2008 R2 report, because it doesn't let me put them together as they belong to 2 different data sets. I want to present them in a single table, but I have not been successful trying to use JOIN here. How do I do that? NOTE: cName in IAgentQueueStats corresponds to UserId in AgentActivityLog.

        /*** Aggregate values for Call Center Agents for calls, talk and hold time ***/
        /*** The detail/row values are per 30-minute interval ***/
        ALTER PROCEDURE [dbo].[sp_IAgentQueueStats_OnlyCalls_Grouped]
            @p_StartDate datetime,
            @p_EndDate datetime,
            @p_Agents varchar(8000)
        AS
        SELECT [cName]
              ,sum([nAnswered])    SumNAnswered
              ,sum([nAnsweredAcd]) SumNAnsweredAcd
              ,sum([tTalkAcd])     SumTTalkAcd
              ,sum([nHoldAcd])     SumNHoldAcd
              ,sum([tHoldAcd])     SumTHoldAcd
              ,sum([tAcw])         SumTAcw
        FROM [I3_IC].[dbo].[IAgentQueueStats]
        WHERE dIntervalStart between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
          AND CHARINDEX(cName, @p_Agents) > 0
          AND cReportGroup <> '*'
          AND cHKey3 = '*' and cHKey4 = '*'
          AND nEnteredAcd > 0
          AND cReportGroup <> 'CCFax Email'
        GROUP BY cName

    And here is the second one:

        /*** Aggregate values for Call Center Agents for status/activity time ***/
        /*** The detail/row values are per start-time/end-time ***/
        ALTER PROCEDURE [dbo].[sp_AgentActivity_Grouped]
            @p_StartDate datetime,
            @p_EndDate datetime,
            @p_Agents varchar(8000)
        AS
        SELECT [UserId], [StatusCategory], SUM([StateDuration]) [StatusDuration]
        FROM (
            SELECT [UserId]
                  ,[StatusGroup]
                  ,[StatusKey]
                  ,CASE [StatusKey]
                       WHEN 'Available'             THEN 'Productive'
                       WHEN 'Follow Up'             THEN 'Productive'
                       WHEN 'Campaign Call'         THEN 'Productive'
                       WHEN 'Awaiting Callback'     THEN 'Productive'
                       WHEN 'In a Meeting'          THEN 'Not Your Fault'
                       WHEN 'Project Work'          THEN 'Not Your Fault'
                       WHEN 'At a Training Session' THEN 'Not Your Fault'
                       WHEN 'System Issues'         THEN 'Not Your Fault'
                       WHEN 'Test'                  THEN 'Not Your Fault'
                       WHEN 'At Lunch'              THEN 'Non Productive'
                       WHEN 'Available, Forward'    THEN 'Non Productive'
                       WHEN 'Available, Follow-Me'  THEN 'Non Productive'
                       WHEN 'At Play'               THEN 'Non Productive'
                       WHEN 'AcdAgentNotAnswering'  THEN 'Non Productive'
                       WHEN 'Do Not Disturb'        THEN 'Non Productive'
                       WHEN 'Available, No ACD'     THEN 'Non Productive'
                       WHEN 'Away from desk'        THEN 'Non Productive'
                       ELSE [StatusKey]
                   END StatusCategory
                  ,stateduration
            FROM [I3_IC].[dbo].[AgentActivityLog]
            WHERE [StatusDateTime] between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
              AND CHARINDEX([UserId], @p_Agents) > 0
              AND [StatusKey] not in ('Gone Home','Out of the Office','On Vacation','Out of Town')
        ) a
        GROUP BY [UserId], [StatusCategory]
        ORDER BY [UserId], [StatusCategory] desc

    BTW, if I take some time to comment/reply on your posts, it's not lack of interest, but of understanding...
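
    For illustration, a simplified sketch of one way to get a single result set: aggregate each source as a derived table and join them on the shared agent identifier. The date filters and the CASE bucketing from the procedures above are omitted here, so this is not a drop-in replacement:

        SELECT q.cName,
               q.SumNAnswered,
               q.SumTTalkAcd,
               a.StatusCategory,
               a.StatusDuration
        FROM (
                SELECT cName, SUM(nAnswered) AS SumNAnswered, SUM(tTalkAcd) AS SumTTalkAcd
                FROM I3_IC.dbo.IAgentQueueStats
                GROUP BY cName
             ) AS q
        LEFT JOIN (
                SELECT UserId, StatusKey AS StatusCategory, SUM(StateDuration) AS StatusDuration
                FROM I3_IC.dbo.AgentActivityLog
                GROUP BY UserId, StatusKey
             ) AS a
          ON a.UserId = q.cName
        ORDER BY q.cName;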

    Read the article

  • What is the right path for PHP includes on a Mac?

    - by skorned
    Running Mac OS X 10.5.8, with PHP 5.2.11 pre-installed, using Coda 1.6.10. I'm writing PHP files and then previewing them running from file, not from a server. This was working fine till I tried PHP includes. These don't work as a relative path, only as an absolute path from the root of the drive. Is there any way I can use statements like include_once "common/header.php"; without specifying my entire file path like so: include_once "/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0/common/base.php"; where ColoredLists_v1.0 is the directory with all the website files in it? I tried solutions like prepending $_SERVER['DOCUMENT_ROOT'] or dirname(__FILE__) to the file paths, but that didn't work as the variables were not set. Is there any easy way to do this, or a configuration I can change so that it looks in a specific directory by default instead of looking at the drive root? Currently, echoing the include path shows .:
    When I include this line at the start of the script, it works: set_include_path('/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0');
    However, if I want to do this for all my scripts, I can't seem to make the change permanent. Even after I edited the Unix include_path in my php.ini, it doesn't seem to work.

    Read the article

  • Invoking code both before and after WebControl.Render method

    - by Dirk
    I have a set of custom ASP.NET server controls, most of which derive from CompositeControl. I want to implement a uniform look for "required" fields across all control types by wrapping each control in a specific piece of HTML/CSS markup. For example:

        <div class="requiredInputContainer">
           ...custom control markup...
        </div>

    I'd love to abstract this behavior in such a way as to avoid having to do something ugly like this in every custom control, present and future:

        public class MyServerControl : TextBox, IRequirableField {
            public IRequirableField.IsRequired {get;set;}
            protected override void Render(HtmlTextWriter writer){
                RequiredFieldHelper.RenderBeginTag(this, writer);
                //render custom control markup
                RequiredFieldHelper.RenderEndTag(this, writer);
            }
        }

        public static class RequiredFieldHelper{
            public static void RenderBeginTag(IRequirableField field, HtmlTextWriter writer){
                //check field.IsRequired, render based on its values
            }
            public static void RenderEndTag(IRequirableField field, HtmlTextWriter writer){
                //check field.IsRequired, render based on its values
            }
        }

    If I were deriving all of my custom controls from the same base class, I could conceivably use Template Method to enforce the before/after behavior; but I have several base classes, and I'd rather not end up with a really convoluted class hierarchy anyway. It feels like I should be able to design something more elegant (i.e. one that adheres to DRY and OCP) by leveraging the functional aspects of C#, but I'm drawing a blank.

    Read the article

  • I have data about deadlocks, but I can't understand why they occur

    - by Alex
    I am receiving a lot of deadlocks in my big web application. http://stackoverflow.com/questions/2941233/how-to-automatically-re-run-deadlocked-transaction-asp-net-mvc-sql-server Here I wanted to re-run deadlocked transactions, but I was told to get rid of the deadlocks instead - that's much better than trying to catch them. So I spent the whole day with SQL Profiler, setting the tracing keys, etc. And this is what I got. There's a Users table. I have a very heavily used page with the following query (it's not the only query, but it's the one that causes trouble):

        UPDATE Users SET views = views + 1
        WHERE ID IN (SELECT AuthorID FROM Articles WHERE ArticleID = @ArticleID)

    And then there's the following query in ALL pages:

        User = DB.Users.SingleOrDefault(u => u.Password == password && u.Name == username);

    That's where I get the User from cookies. Very often a deadlock occurs and this second LINQ to SQL query is chosen as the victim, so it's not run, and users of my site see an error screen. I read a lot about deadlocks... and I don't understand why this is causing one. Obviously both of these queries run very often - at least once a second, maybe even more often (300-400 users online) - so they can easily run at the same time, but why does that cause a deadlock? Please help. Thank you
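
    As a side note, two mitigations commonly tried in this kind of situation are sketched below: indexes so the UPDATE's subquery and the login lookup touch as few rows as possible, and row-versioning so readers don't block behind writers. The index names and the database name are placeholders, not from the question.

        -- Narrow the rows the UPDATE's subquery has to lock.
        CREATE INDEX IX_Articles_ArticleID_AuthorID
            ON dbo.Articles (ArticleID) INCLUDE (AuthorID);

        -- Keep the login lookup from scanning the same rows the UPDATE locks.
        CREATE INDEX IX_Users_Name_Password
            ON dbo.Users (Name, Password);

        -- Optional: let readers see the last committed version instead of blocking.
        ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;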

    Read the article

  • Business Tier | client state in desktop application- way around Stateful Session Beans?

    - by arthur
    I read positive input on the following posts concerning client state management: Stateful EJBs in web application?, here, and here. I want to know how to implement such client state management for desktop applications (Swing, AWT, SWT, and others). Let's assume there are n desktop clients supposed to use some (remote) services provided on an application server. How do you maintain a separate and permanent data state for each client, distinct from the others, without having to use stateful session beans (SFSB)? Is that even possible for this type of application? With web apps (Servlets/JSF and JSP) one can avoid using SFSBs through the HttpSession object and/or by coupling it with stateless session beans (SLSB). The HttpSession object keeps the information for each (web) client separate, and the SLSBs play the business logic music. But HttpSession objects aren't present in a desktop application, so I am stuck with only SFSBs and SLSBs. Knowing the problems (concurrency, error handling, etc.) of SFSBs, I would prefer not to use them. What would the other options be? Is SFSB the only one available? Thanks in advance

    Read the article

  • SQL dealing with rubbish in a phone number field

    - by DoctaJonez
    Hello stackers! I've got a wonderfully fun little SQL problem to solve today and thought I'd ask the community to see what solutions you come up with. We've got a really cool email to text service that we use: you just need to send an email to [email protected] and it will send a text message to the desired phone number. For example, to send a text to 0790 0006006, you need to send an email to [email protected]. Pretty neat, huh? The problem is with the phone numbers in our database. Most of the phone numbers are fine, but some of them have "rubbish" mixed in with the phone number. Take these wonderful examples of the rubbish you need to deal with (I've anonymised the phone numbers by placing zeroes in):

        07800 000647(mobile)
        07500 000189 USE 1ST SEE NOTES
        07900 000415 HO ONLY
        try 1st 0770 0000694 then home
        07500 000465 Cannot

    Requirements: The solution needs to be in SQL (for MS SQL Server). So the challenge is as follows: we need to get the phone number without spaces, and without any of the rubbish seen in the samples. For example, this:

        try 1st 0770 0000694 then home

    should become this:

        07700000694

    Anything without a phone number in it (e.g. "SEE NOTES") should be null.
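
    A minimal sketch of one common approach (the function name is mine, not from the question): strip every non-digit character with a PATINDEX/STUFF loop and return NULL when nothing numeric is left. Note that a row like "try 1st 0770 0000694 then home" would still need extra handling, because the stray digit in "1st" survives the strip.

        CREATE FUNCTION dbo.fn_DigitsOnly (@input varchar(100))
        RETURNS varchar(100)
        AS
        BEGIN
            -- Remove characters that are not 0-9, one at a time.
            WHILE PATINDEX('%[^0-9]%', @input) > 0
                SET @input = STUFF(@input, PATINDEX('%[^0-9]%', @input), 1, '');

            -- Rows with no digits at all (e.g. 'SEE NOTES') come back as NULL.
            RETURN NULLIF(@input, '');
        END

    It would then be used as SELECT dbo.fn_DigitsOnly(PhoneNumber) against the table in question (column name assumed).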

    Read the article

  • how to connect to MSSQL using activerecord, JDBC, JTDS and Integrated Security

    - by Rob
    As per the above, I've tried:

        establish_connection(:adapter => "jdbcmssql",
          :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';",
          :username => 'user', :password => 'pass')

        establish_connection(:adapter => "jdbcmssql",
          :url => 'jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain="mynetwork";user="mynetwork\user"')

        establish_connection(:adapter => "jdbcmssql",
          :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';",
          :username => 'user')

        establish_connection(:adapter => "jdbcmssql",
          :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';integratedSecurity='true'",
          :username => 'user')

    ... and various other combinations. Each time I get:

        net/sourceforge/jtds/jdbc/SQLDiagnostic.java:368:in `addDiagnostic':
        java.sql.SQLException: Login failed for user ''.
        The user is not associated with a trusted SQL Server connection. (NativeException)

    Any tips? Thanks. Versions in use:

        activerecord (2.3.5)
        activerecord-jdbc-adapter (0.9.6)
        activerecord-jdbcmssql-adapter (0.9.6)
        jdbc-jtds (1.2.5)
        jruby 1.4.0 (ruby 1.8.7 patchlevel 174) (2009-11-02 69fbfa3) (Java HotSpot(TM) Client VM 1.6.0_18) [x86-java]

    Read the article

  • How to access web.config connection string in C#?

    - by salvationishere
    I have a 32-bit XP running VS 2008 and I am trying to decrypt my connection string from my web.config file in my C# ASPX file. Even though there are no errors returned, my current connection string doesn't display contents of my selected AdventureWorks stored procedure. I entered:

        C:\Program Files\Microsoft Visual Studio 9.0\VC>Aspnet_regiis.exe -pe "connectionStrings" -app "/AddFileToSQL2"

    Then it said "Succeeded". And my web.config section looks like:

        <connectionStrings>
          <add name="Master" connectionString="server=MSSQLSERVER;database=Master; Integrated Security=SSPI"
               providerName="System.Data.SqlClient" />
          <add name="AdventureWorksConnectionString"
               connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Integrated Security=True"
               providerName="System.Data.SqlClient" />
          <add name="AdventureWorksConnectionString2"
               connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Persist Security Info=true; "
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    And my C# code-behind looks like:

        string connString = ConfigurationManager.ConnectionStrings["AdventureWorksConnectionString2"].ConnectionString;

    Is there something wrong with the connection string in the web.config or in the C# code-behind file?

    Read the article

  • Can I split a single SQL 2008 DB Table into multiple filegroups, based on a discriminator column?

    - by Pure.Krome
    Hi folks, I've got a SQL Server 2008 R2 database which has a number of tables. Two of these tables contain a lot of large data, mainly because one of them is VARBINARY(MAX) and the sister table is GEOGRAPHY. (Why two tables? Read below if you're interested.***) The data in these tables are geospatial shapes, such as zipcode boundaries. Now, the first 70K odd rows are for DataType = 1; the remaining 5 million rows are for DataType = 2. Now, is it possible to split the table data into two files, so that all rows for DataType != 2 go into File_A and DataType = 2 goes into File_B? This way, when I back up the DB, I can skip adding File_B so my download is waaaaay smaller. Is this possible? I'm guessing you might be thinking: why not keep them as TWO extra tables? Mainly because in the code the data is conceptually the same; it just happens that I want to split the storage of this model data. It really messes up my model if I now have two aggregates in my model instead of one.
    ***Entity Framework doesn't like tables with GEOGRAPHY, so I have to create a new table which transforms the GEOGRAPHY to VARBINARY, and then drop that into EF.
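
    For reference, a rough sketch of table partitioning on the discriminator column, which is the usual way to map ranges of rows to different filegroups (all names below are made up, the table has to be rebuilt on the partition scheme, and partitioning is an Enterprise edition feature on 2008 R2):

        -- Assumes filegroups FG_Small and FG_Large already exist in the database.
        CREATE PARTITION FUNCTION pf_DataType (int)
            AS RANGE LEFT FOR VALUES (1);   -- partition 1: DataType <= 1, partition 2: DataType >= 2

        CREATE PARTITION SCHEME ps_DataType
            AS PARTITION pf_DataType TO (FG_Small, FG_Large);

        -- Rebuild the clustered index (assumed here to be on ID, DataType) on the scheme.
        CREATE UNIQUE CLUSTERED INDEX PK_Shapes
            ON dbo.Shapes (ID, DataType)
            WITH (DROP_EXISTING = ON)
            ON ps_DataType (DataType);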

    Read the article

  • SELECT SQL Variable - should i avoid using this syntax and always use SET?

    - by Sholom
    Hi All, This may look like a duplicate of the question linked here, but it's not. I am trying to get at a best practice, not a technical answer (which I already (think I) know). New to SQL Server and trying to form good habits. I found a great explanation of the functional differences between SET @var = and SELECT @var = here: http://vyaskn.tripod.com/differences_between_set_and_select.htm To summarize what each has that the other hasn't (see the source for examples):
    SET:
    - ANSI and portable, recommended by Microsoft.
    - SET @var = (SELECT column_name FROM table_name) fails when the select returns more than one value, eliminating the possibility of unpredictable results.
    - SET @var = (SELECT column_name FROM table_name) will set @var to NULL if that's what SELECT column_name FROM table_name returned, thus never leaving @var at its prior value.
    SELECT:
    - Multiple variables can be set in one statement.
    - Can return multiple system variables set by the prior DML statement.
    - SELECT @var = column_name FROM table_name would set @var to (according to my testing) the last value returned by the select. This could be a feature or a bug. The behavior can be changed with the SELECT @j = (SELECT column_name FROM table_name) syntax.
    - Speed. Setting multiple variables with a single SELECT statement, as opposed to multiple SET/SELECT statements, is much quicker. He has a sample test to prove his point. If you could design a test to prove otherwise, bring it on!
    So, what do I do?
    (Almost) always use SET @var =; using SELECT @var = is messy coding and not standard.
    OR
    Use SELECT @var = freely; it could accomplish more for me, unless the code is likely to be ported to another environment.
    Thanks
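
    A small script just to make the bullet points above concrete (the table variable and values are mine, not from the linked article):

        -- Demo of the SET vs. SELECT assignment differences described above.
        DECLARE @var int;
        SET @var = 99;
        DECLARE @t TABLE (col int);

        -- No rows yet: SELECT assignment leaves @var at its prior value,
        -- SET with a subquery sets it to NULL.
        SELECT @var = col FROM @t;            -- @var is still 99
        SET    @var = (SELECT col FROM @t);   -- @var is now NULL

        INSERT INTO @t VALUES (1);
        INSERT INTO @t VALUES (2);

        SELECT @var = col FROM @t;            -- @var is 1 or 2 (whichever row comes last)
        -- SET @var = (SELECT col FROM @t);   -- would fail: subquery returns more than one value
        SELECT @var AS FinalValue;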

    Read the article

  • Using Linq-To-SQL I'm getting some weird behavior doing text searches with the .Contains method

    - by Nate Bross
    I have a table where I need to do a case-insensitive search on a text field. If I run this query in LINQPad directly on my database, it works as expected:

        Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase"))
        // also, adding in the same constraints I'm using in my repository works in LinqPad
        // Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase") && tbl.IsActive == true)

    In my application, I've got a repository which exposes IQueryable objects and does some initial filtering; it looks like this:

        var dc = new MyDataContext();

        public IQueryable<Table> GetAllTables()
        {
            var ret = dc.Tables.Where(t => t.IsActive == true);
            return ret;
        }

    In the controller (it's an MVC app) I use code like this in an attempt to mimic the LinqPad query:

        var rpo = new RepositoryOfTable();
        var tables = rpo.GetAllTables();

        // for some reason, this does a CASE SENSITIVE search which is NOT what I want.
        tables = tables.Where(tbl => tbl.Title.Contains("StringWithAnyCase"));
        return View(tables);

    The column is defined as an nvarchar(50) in SQL Server 2008. Any help or guidance is greatly appreciated!
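
    Since Contains translates to a LIKE whose case sensitivity follows the column's collation, a quick way to check things from the SQL side (the table name here is a placeholder):

        -- What collation is actually applied to the Title column?
        SELECT name, collation_name
        FROM sys.columns
        WHERE object_id = OBJECT_ID('dbo.MyTable') AND name = 'Title';

        -- A _CS_ (case-sensitive) collation would explain the behavior; forcing a
        -- case-insensitive comparison looks like this in raw T-SQL.
        SELECT *
        FROM dbo.MyTable
        WHERE Title COLLATE Latin1_General_CI_AS LIKE '%StringWithAnyCase%';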

    Read the article

  • Xml Conversion "Type mismatch" Error

    - by prema
    I am running this query in SQL Server 2005:

        SELECT 'Region' AS ELEMENT,
               (SELECT GeographyName, GeoID FROM @tmpTable FOR XML RAW, TYPE)
        FOR XML RAW('Root')

    This gives the output in XML as:

        <Root ELEMENT="Region">
          <row GeographyName="East" GeoID="2" />
          <row GeographyName="West" GeoID="3" />
          <row GeographyName="North" GeoID="4" />
          <row GeographyName="South" GeoID="5" />
        </Root>

    In the aspx page, I want to consume it with this function:

        function Populatedata(obj, val) {
            var xmlDom = new JXmlDom(obj, false);   // --> at this point I am getting the error
            var nodeHeader = xmlDom.selectNodes("//row");
            // my code goes here
        }

        function JXmlDom(xml, isFile) {
            this.load = load;
            this.loadXML = loadXML;
            this.selectNodes = selectNodes;
            this.text = text;
            this.selectSingleNode = selectSingleNode;
            this.documentElement = documentElement;
            this.transformNode = transformNode;
            if (isFile) {
                this.dom = this.load(xml);
            } else {
                this.dom = this.loadXML(xml);
            }
            function loadXML(xml) {
                if (window.ActiveXObject) {
                    var dom = new ActiveXObject("Microsoft.XMLDOM");
                    dom.async = false;
                    dom.loadXML(xml);
                }
                if (document.implementation && document.implementation.createDocument) {
                    var domParser = new DOMParser();
                    var dom = domParser.parseFromString(xml, "text/xml");
                }
                return dom;
            }

    But when I call this I get a "Type mismatch" error. Can anyone help me?

    Read the article

  • How can I make "month" columns in Sql?

    - by Beska
    I've got a set of data that looks something like this (VERY simplified):

        productId  Qty  dateOrdered
        ---------  ---  -----------
        1          2    10/10/2008
        1          1    11/10/2008
        1          2    10/10/2009
        2          3    10/12/2009
        1          1    10/15/2009
        2          2    11/15/2009

    Out of this, we're trying to create a query to get something like:

        productId  Year  Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
        ---------  ----  --- --- --- --- --- --- --- --- --- --- --- ---
        1          2008  0   0   0   0   0   0   0   0   0   2   1   0
        1          2009  0   0   0   0   0   0   0   0   0   3   0   0
        2          2009  0   0   0   0   0   0   0   0   0   3   2   0

    The way I'm doing this now, I'm doing 12 selects, one for each month, and putting those in temp tables. I then do a giant join. Everything works, but this guy is dog slow. I know this isn't much to go on, but knowing that I barely qualify as a tyro in the db world, I'm wondering if there is a better high level approach to this that I might try. (I'm guessing there is.) (I'm using MS Sql Server, so answers that are specific to that DB are fine.) (I'm just starting to look at "PIVOT" as a possible help, but I don't know anything about it yet, so if someone wants to comment about that, that might be helpful as well.)
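
    Since PIVOT comes up at the end, here is a rough, untested sketch of what that could look like for the sample data above (the source table name dbo.Orders is made up; the column names come from the question):

        -- Pivot the monthly quantity totals into one column per month.
        SELECT productId,
               [Year],
               ISNULL([1],0)  AS [Jan], ISNULL([2],0)  AS [Feb], ISNULL([3],0)  AS [Mar],
               ISNULL([4],0)  AS [Apr], ISNULL([5],0)  AS [May], ISNULL([6],0)  AS [Jun],
               ISNULL([7],0)  AS [Jul], ISNULL([8],0)  AS [Aug], ISNULL([9],0)  AS [Sep],
               ISNULL([10],0) AS [Oct], ISNULL([11],0) AS [Nov], ISNULL([12],0) AS [Dec]
        FROM (
                SELECT productId,
                       YEAR(dateOrdered)  AS [Year],
                       MONTH(dateOrdered) AS [Month],
                       Qty
                FROM dbo.Orders
             ) AS src
        PIVOT (SUM(Qty) FOR [Month] IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])) AS p
        ORDER BY productId, [Year];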

    Read the article

  • Get Multiple Values From Database ASP.NET/C#

    - by user1043177
    I am trying to get/return multiple values from a SQL Server database and display them on an ASP.NET page. I am using a stored procedure to perform the SELECT command on the database side. I am able to return the first value that matches the variable @PERSON, but only one row is returned each time. Any help would be much appreciated.

    Database handler class:

        public MainSQL()
        {
            _productConn = new SqlConnection();
            _productConnectionString += "data source=mssql.database.co.uk;InitialCatalog=test_data;User ID=username;Password=password";
            _productConn.ConnectionString = _productConnectionString;
        }

        public string GetItemName(int PersonID)
        {
            string returnvalue = string.Empty;
            SqlCommand myCommand = new SqlCommand("GetItem", _productConn);
            myCommand.CommandType = CommandType.StoredProcedure;
            myCommand.Parameters.Add(new SqlParameter("@PERSON", SqlDbType.Int));
            myCommand.Parameters[0].Value = PersonID;
            _productConn.Open();
            returnvalue = (string)myCommand.ExecuteScalar();
            _productConn.Close();
            return (string)returnvalue;
        }

    Stored procedure:

        USE [test_data]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [ppir].[GetItem]
        (
            @PERSON int
        )
        AS
        /*SET NOCOUNT ON;*/
        SELECT Description
        FROM [Items]
        WHERE PersonID = @PERSON
        RETURN

    return.aspx:

        namespace test
        {
            public partial class Final_Page : System.Web.UI.Page
            {
                MainSQL GetInfo;

                protected void Page_Load(object sender, EventArgs e)
                {
                    int PersonId = (int)Session["PersonID"];
                    GetInfo = new MainSQL();
                    string itemname = GetInfo.GetItemName(PersonId);
                    ReturnItemName.Text = itemname;
                } // End Page_Load
            } // End Class
        } // End Namespace

    Read the article

  • Best approach to cache Counts from SQL tables ?

    - by pixel3cs
    I would like to develop a forum from scratch, with special needs and customization. I would like to prepare my forum for intensive usage and I'm wondering how to cache things like the user post count and user reply count. Having only three tables - tblForum, tblForumTopics, tblForumReplies - what is the best approach to caching the user topic and reply counts? Think of a simple scenario: a user presses a link and opens the Replies.aspx?id=x&page=y page, and starts reading replies. On the HTTP request, the server will run a SQL command which will fetch all replies for that page, also inner joining with tblForumReplies to find out the number of replies for each user that replied:

        select tblForumReplies.*, tblFR.TotalReplies
        from tblForumReplies
        inner join (
            select IdRepliedBy, count(*) as TotalReplies
            from tblForumReplies
            group by IdRepliedBy
        ) as tblFR on tblFR.IdRepliedBy = tblForumReplies.IdRepliedBy

    Unfortunately this approach is very CPU intensive, and I would like to hear your ideas on how to cache things like table counts. If I count replies for each user on insert/delete and store them in a separate field, how do I synchronize that with manual data changes? Suppose I will manually delete replies from SQL.
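
    One option worth noting, sketched under the assumption that the count really is just a GROUP BY over tblForumReplies: an indexed view lets the engine maintain the per-user reply count itself, so manual INSERT/DELETE statements stay in sync automatically.

        -- Indexed view maintaining reply counts per user; SQL Server updates it
        -- whenever tblForumReplies changes, including manual deletes.
        CREATE VIEW dbo.vwUserReplyCounts
        WITH SCHEMABINDING
        AS
            SELECT IdRepliedBy, COUNT_BIG(*) AS TotalReplies
            FROM dbo.tblForumReplies
            GROUP BY IdRepliedBy;
        GO
        CREATE UNIQUE CLUSTERED INDEX IX_vwUserReplyCounts
            ON dbo.vwUserReplyCounts (IdRepliedBy);

    Queries would then join dbo.vwUserReplyCounts (with the NOEXPAND hint on non-Enterprise editions) instead of recomputing the GROUP BY on every request.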

    Read the article

  • Performance when querying a View

    - by Nate Bross
    I'm wondering if this is a bad practice or if in general this is the correct approach. Let's say that I've created a view that combines a few attributes from a few tables. My question: what do I need to do so I can query against this view as if it were a table, without worrying about performance? All attributes in the original tables are indexed; my concern is that the resulting view will have hundreds of thousands of records, which I will want to narrow down quite a bit based on user input. What I'd like to avoid is having multiple versions of the code that generates this view floating around with a few extra "where" conditions to facilitate the user input filtering. For example, assume my view has this header: VIEW(Name, Type, DateEntered). It may have 100,000+ rows (possibly millions). I'd like to be able to create this view in SQL Server and then, in my application, write queries like this:

        SELECT Name, Type, DateEntered
        FROM MyView
        WHERE DateEntered BETWEEN @date1 and @date2;

    Basically, I am denormalizing my data for a series of reports that need to be run, and I'd like to centralize where I pull the data from. Maybe I'm not looking at this problem from the right angle though, so I'm open to alternative ways to attack this.

    Read the article

  • Fixing an SQL where statement that is ugly and confusing

    - by Mike Wills
    I am directly querying the back-end MS SQL Server for a software package. The key field (vehicle number) is defined as alpha, though we enter numeric values in the field. There is only one exception to this: we place an "R" before the number when the vehicle is being retired (which means we sold it or the vehicle is junked). Assuming the users do this right, we should not run into a problem using this method. (Right or wrong isn't the issue here.) Fast forward to now. I am trying to query a subset of these vehicle numbers (800 - 899) for some special processing. By doing a range of '800' to '899' we also get 80, 81, etc. If I cast the vehicle number to an INT, I should be able to get the right range, except that these "R" vehicles are kicking me in the butt now. I have tried:

        where vehicleId not like 'R%' and cast(vehicleId as int) between 800 and 899

    however, I get a casting error on one of these "R" vehicles. What does work is:

        where vehicleId not between '800' and '899' and cast(vehicleId as int) between 800 and 899

    but I feel there has to be a better and less confusing way. I have also tried other variations with HAVING and a sub-query, all producing a casting error.
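
    For what it's worth, two patterns that keep the cast from ever touching an "R" row (sketches only; they assume the non-retired ids contain nothing but digits):

        -- Option 1: pure string match - any three-digit id from 800 to 899.
        where vehicleId like '8[0-9][0-9]'

        -- Option 2: only cast when the whole value is numeric; the CASE guards the cast.
        where case when vehicleId not like '%[^0-9]%'
                   then cast(vehicleId as int) end between 800 and 899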

    Read the article

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done that have already been through this. What are some good approaches to parallelizing this type of thing?

    Read the article

  • Stored procedure or function expects parameter which is not supplied

    - by user2920046
    I am trying to insert data into a SQL Server database by calling a stored procedure, but I am getting the error Procedure or function 'SHOWuser' expects parameter '@userID', which was not supplied. My stored procedure is called "SHOWuser". I have checked it thoroughly and no parameter is missing. My code is:

        public void SHOWuser(string userName, string password, string emailAddress, List<int> preferences)
        {
            SqlConnection dbcon = new SqlConnection(conn);
            try
            {
                SqlCommand cmd = new SqlCommand();
                cmd.Connection = dbcon;
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.CommandText = "SHOWuser";
                cmd.Parameters.AddWithValue("@userName", userName);
                cmd.Parameters.AddWithValue("@password", password);
                cmd.Parameters.AddWithValue("@emailAddress", emailAddress);
                dbcon.Open();
                int i = Convert.ToInt32(cmd.ExecuteScalar());
                cmd.Parameters.Clear();
                cmd.CommandText = "tbl_pref";
                foreach (int preference in preferences)
                {
                    cmd.Parameters.Clear();
                    cmd.Parameters.AddWithValue("@userID", Convert.ToInt32(i));
                    cmd.Parameters.AddWithValue("@preferenceID", Convert.ToInt32(preference));
                    cmd.ExecuteNonQuery();
                }
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                dbcon.Close();
            }
        }

    and the stored procedure is:

        ALTER PROCEDURE [dbo].[SHOWuser]
            -- Add the parameters for the stored procedure here
            (
                @userName varchar(50),
                @password nvarchar(50),
                @emailAddress nvarchar(50)
            )
        AS
        BEGIN
            INSERT INTO tbl_user(userName, password, emailAddress)
            VALUES (@userName, @password, @emailAddress)

            SELECT tbl_user.userID, tbl_user.userName, tbl_user.password, tbl_user.emailAddress,
                   STUFF((SELECT ',' + preferenceName
                          FROM tbl_pref_master
                          INNER JOIN tbl_preferences
                             ON tbl_pref_master.preferenceID = tbl_preferences.preferenceID
                          WHERE tbl_preferences.userID = tbl_user.userID
                          FOR XML PATH ('')), 1, 1, ' ') AS Preferences
            FROM tbl_user

            SELECT SCOPE_IDENTITY();
        END

    Please help, thanks in advance...

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that is missing the ON DELETE CASCADE setting. The tricky part is that there could be indirect relationships buried up to 5 levels deep (example: great-grandchild - FK3 - grandchild - FK2 - child - FK1 - main table). We need to dig up the leaf tables/columns, not just the very first level. The 'good' part about this is that execution speed isn't a concern; it'll be run on a backup copy of the production db to fix any relational issues for the future. I did SELECT * FROM sys.foreign_keys, but that gives me the name of the constraint, not the names of the child/parent tables and the columns in the relationship (the juicy bits). Plus the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below. The way we're adding constraints into SQL Server:

        ALTER TABLE [dbo].[UserEmailPrefs] WITH CHECK
            ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
            FOREIGN KEY([UserId]) REFERENCES [dbo].[UserMasterTable] ([UserId])
            ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[UserEmailPrefs] CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
        GO

    The comments in this SO question inspired this question.
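
    As background for the direct (single-level) part, the catalog views can be joined like this to get the table/column pairs plus the delete action; walking the chain several levels deep would take a recursive CTE on top of it (sketch only, not tied to the schema above):

        -- Direct foreign key relationships with their columns and delete behaviour.
        SELECT fk.name                              AS ConstraintName,
               OBJECT_NAME(fk.parent_object_id)     AS ChildTable,
               cc.name                              AS ChildColumn,
               OBJECT_NAME(fk.referenced_object_id) AS ParentTable,
               pc.name                              AS ParentColumn,
               fk.delete_referential_action_desc    AS DeleteAction   -- CASCADE / NO_ACTION / ...
        FROM sys.foreign_keys fk
        JOIN sys.foreign_key_columns fkc
          ON fkc.constraint_object_id = fk.object_id
        JOIN sys.columns cc
          ON cc.object_id = fkc.parent_object_id AND cc.column_id = fkc.parent_column_id
        JOIN sys.columns pc
          ON pc.object_id = fkc.referenced_object_id AND pc.column_id = fkc.referenced_column_id
        WHERE fk.delete_referential_action_desc <> 'CASCADE';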

    Read the article
