Search Results

Search found 9396 results on 376 pages for 'stored procedures'.


  • How hard is it to determine what procedures/functions no longer compile?

    - by Dave
    In SQL Server, how difficult would it be to determine which procedures/functions no longer compile? In other words, if I scripted out ALTER statements for all procedures and functions in a database, I'd like to know which of those statements would fail. I've been working on cleaning up a database I inherited, which has gone through years of changes, and I'd like to know which objects are going to raise errors when something tries to execute them.
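
    One hedged approach (a sketch, not a definitive answer): have SQL Server re-parse each module with sys.sp_refreshsqlmodule and record the ones that fail. Note that sp_refreshsqlmodule itself raises an error for schema-bound modules, so those would need to be filtered out or checked separately.

        -- Sketch: re-parse every procedure and function and report the ones that
        -- no longer bind. Assumes sufficient permissions on each module.
        DECLARE @name nvarchar(776);
        DECLARE modules CURSOR LOCAL FAST_FORWARD FOR
            SELECT QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
            FROM sys.objects AS o
            WHERE o.type IN ('P', 'FN', 'IF', 'TF');
        OPEN modules;
        FETCH NEXT FROM modules INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            BEGIN TRY
                EXEC sys.sp_refreshsqlmodule @name;  -- fails if the module no longer compiles
            END TRY
            BEGIN CATCH
                PRINT @name + N': ' + ERROR_MESSAGE();
            END CATCH
            FETCH NEXT FROM modules INTO @name;
        END
        CLOSE modules;
        DEALLOCATE modules;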

    Read the article

  • How might the security of systems be improved using database procedures?

    - by Centurion
    The usage of Oracle PL/SQL procedures for controlling access to data is often emphasized in PL/SQL books and other sources as the more secure approach. I've seen several systems where all business logic related to data is performed through packages, procedures, and functions, so the application code becomes quite "dumb" and is only responsible for the visualization part. I've even heard some devs call such approaches and their driving architects "database nazis" :) because all the logic code resides in the database. I know about the performance benefits of DB procedures, but now I'm interested in the "better security" claim under a thick client model. I assume such a design is mostly used with Oracle (and maybe MS SQL Server) databases. I agree this approach improves security, but only if there are not many users and every system user has a database account, so that we can control and monitor data access through standard database user security. However, how could this approach increase security for an average web system where thick clients are used: for example, one database user with DML grants on all tables, with other users handled through "users" and "user_rights" tables? We could use DB procedures, save usernames into a context, and use that for filtering, but the vulnerability remains at the root - if the main database account is compromised, then nothing will help. Of course, in a real system we might consider at least several main accounts (for example frontend_db_user, backend_db_user).

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (server, user, this setting, that setting, etc.). A pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files, basically repeating the work for every environment. Personally, I am offended at having to perform repetitive, mundane tasks in this day and age when we have the technology to automate it all. So I devised a very simple procedure: abstract the files into templates, stub the env-specific values with parameters, and then write a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. This is nothing special, cutting-edge, or revolutionary -- I am pretty sure that 20 years ago efficient shops did their CM like that. However, it requires that changes be made at the template level and then distributed across the different environments using the script, not made directly in the final environment-specific files. This is where I am encountering resentment, as people feel "comfortable" doing it their old, manual, repeated-labor way. Personally, I don't have a problem with them working hard rather than smart, but the problem comes when I have to build on top of someone else's changes: I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which is so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?

    Read the article

  • Using stored procedure to call multiple packages at the same time from SSIS Catalog (SSISDB.catalog.start_execution) resulted in deadlock

    - by Kevin Shyr
    Refer to my previous post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx) about dynamic package calling and executing multiple packages. I only saw this deadlock twice; the other times the stored procedure was able to call the packages successfully. After applying the service pack below, I haven't seen it... yet. http://support.microsoft.com/kb/2699720
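
    For context, a minimal sketch of starting a single package through the SSIS catalog from T-SQL; the folder, project, and package names here are placeholders:

        -- Sketch: start a package asynchronously via the SSIS catalog (SQL Server 2012).
        DECLARE @execution_id bigint;
        EXEC SSISDB.catalog.create_execution
            @folder_name = N'MyFolder',          -- hypothetical names
            @project_name = N'MyProject',
            @package_name = N'MyPackage.dtsx',
            @use32bitruntime = 0,
            @execution_id = @execution_id OUTPUT;
        -- start_execution returns as soon as the package is launched; firing several
        -- executions in quick succession is the pattern that surfaced the deadlock.
        EXEC SSISDB.catalog.start_execution @execution_id;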

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (e.g. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object. On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 time in every 10 I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solving it.) There is a complicating factor in that our data schema is changing rapidly and constantly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used in, or whether it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.

    Read the article

  • In what kind of variable type is the player position stored in an MMORPG such as WoW?

    - by jokoon
    I even heard J. Carmack talk briefly about it... How can software track a player's position so accurately in such a huge world, without loading screens between zones, and at multiplayer scale? How is the data formatted when it passes through the netcode? I can understand how vertices are stored in the graphics card's memory, but when it comes to synchronizing multiplayer state, I can't imagine what works best.

    Read the article

  • Performance optimization for mssql: decrease stored procedures execution time or unload the server?

    - by tim
    Hello everybody! We have a web service that provides hotel search. There is a performance problem: a single request to the service takes around 5000 ms, and almost all of that time is spent in the database executing stored procedures. During a request our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel the average time grows to around 7000 ms, and as the number of requests increases, the average response time increases as well. We get 20-30 requests per minute. Which kind of optimization is best in this case, keeping in mind that the goal is a stable response time for the service: 1) try to decrease the stored procedures' execution time, or 2) try to find a way to unload the server? It would be interesting to hear from people who work on booking sites. Thanks!
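
    Whichever route you take, a useful first step is to measure where the time goes. A hedged sketch using the SQL Server 2008 DMVs to list the most CPU-expensive statements:

        -- Sketch: top statements by average CPU time since the plan cache was last cleared.
        SELECT TOP (10)
            qs.execution_count,
            qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
            SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                (CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY avg_cpu_microseconds DESC;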

    Read the article

  • How do I call Informix stored procedures from Perl?

    - by superjtc
    How do I call Informix stored procedures from Perl? I use DBD::ODBC to connect to the Informix database, but I don't know how to call procedures. My code looks like this:

        my $dbh = DBI->connect("dbi:" . DBDRIVE . ":" . DBNAME, DBUSER, DBPASS,
                               {RaiseError => 0, PrintError => 0, AutoCommit => 1})
            || die $DBI::errstr;
        $dbh->do("execute procedure sp_test('2010-05-01 00:00:00')")
            || warn "failed\n";
        $dbh->disconnect();

    When I run it, I don't get an error, but I see nothing when I check the database. The stored procedure works fine if I run it in the database directly. Can anyone help me out?

    Read the article

  • Where are the Microsoft downloaded app compat updates stored?

    - by Ian Boyd
    Where are the Microsoft application compatibility update settings stored on a Windows XP, Windows Vista, or Windows 7 computer? Microsoft periodically releases application compatibility updates (e.g. KB929427), in which they list the shims that should be applied to a program in order to work around known bugs in the software. Where are these app compat flags stored, and how can I see which shims are being applied? I have a feeling that a recent app compat update included a flag that forces a particular piece of software we use to require administrator rights. Because the task is scheduled to run nightly, and the running user does not have administrative privileges, the task is failing to start. The application requires elevation: it has the UAC shield overlay. Yet the application has no RT_MANIFEST resource, and the compatibility option "Run this program as administrator" is disabled (per-user and all-users). So all that's left is some secret global setting. I know user-specified compat flags are stored in: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers

    Read the article

  • Oracle Tutor: Learn Tutor in the comfort of your own home or office

    - by emily.chorba(at)oracle.com
    The primary challenge for companies faced with documenting policies and procedures is to realize that they can do this documentation in-house, with existing resources, using Oracle Tutor. Procedure documentation is a critical success component for supporting corporate governance or other regulatory compliance initiatives, and when implementing or upgrading to a new business application. There are over 1000 Oracle Tutor customers worldwide who have used Tutor to create, distribute, and maintain their business procedures. This is easily accomplished because of Tutor's:

    - Ease of use by those who have to write procedures (Microsoft Word based authoring)
    - Ease of company-wide implementation (complex document management activities are centralized)
    - Ease of use by workers who have to follow the procedures (play script format)
    - Ease of access by remote workers (web-enabled)

    Oracle University is offering Live Virtual Tutor classes! The class lasts four days, starting on Tuesday and finishing on Friday. This course is an introduction to the Oracle Tutor suite of products. It focuses on the policy and procedure writing feature set of the Tutor applications. Participants will learn about writing procedures and maintaining these particular process document types, all using the Tutor method. The next three classes are scheduled for: April 19 - 22, May 31 - June 3, and July 5 - 8. You will learn to:

    - Write procedures
    - Create procedure flowcharts
    - Write support documents
    - Create Impact Analysis Reports
    - Create role-based Employee Manuals
    - Deploy online Employee Manuals on an intranet

    Enjoy learning Tutor in your local environment. Start the sign-up process from this link.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. Emily Chorba, Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence), but this is not true if you have fewer than 6 updates to the temporary table. This can lead to poor performance of queries that are sensitive to the contents of temporary tables.

    I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer's case we implemented a workaround to avoid the issue (see below for example workarounds).

    When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal.

    We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index.

    I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has fewer rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
    The script to understand the issue and the workaround is listed below:

        set nocount on
        set statistics time off
        set statistics io off
        drop table tab7
        go
        create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
        go
        create index test on tab7(c2, c1, c3)
        go
        begin tran
        declare @i int
        set @i = 1
        while @i <= 50000
        begin
        insert into tab7 values (@i, 1, 'a')
        set @i = @i + 1
        end
        commit tran
        go
        insert into tab7 values (50001, 1, 'a')
        go
        checkpoint
        go
        drop proc test_slow
        go
        create proc test_slow @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_slow 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        exec test_slow 2
        go
        drop proc test_with_recompile
        go
        create proc test_with_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --high reads that are not expected for parameter '2'
        --low reads on 3rd execution as expected for parameter '2'
        exec test_with_recompile 2
        go
        drop proc test_with_alter_table_recompile
        go
        create proc test_with_alter_table_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create a constraint
        --but this might lead to duplicate constraint name error on concurrent usage
        alter table #temp1 add constraint test123 unique(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        dbcc dropcleanbuffers
        set statistics time on
        set statistics io on
        go
        --high reads as expected for parameter '1'
        exec test_with_alter_table_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_alter_table_recompile 2
        go
        drop proc test_with_index_recompile
        go
        create proc test_with_index_recompile @i int
        as
        begin
        declare @j int
        create table #temp1 (c1 int primary key)
        --to avoid caching of temporary tables one can create an index
        create index test on #temp1(c1)
        insert into #temp1 (c1) select @i
        select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
        option (recompile)
        end
        go
        set statistics time on
        set statistics io on
        dbcc dropcleanbuffers
        go
        --high reads as expected for parameter '1'
        exec test_with_index_recompile 1
        go
        dbcc dropcleanbuffers
        go
        --low reads as expected for parameter '2'
        exec test_with_index_recompile 2
        go

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself needing to pass a list of values in at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate, the size and contents of which are not known at design time. In the past I've taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma, and passed that string into a VARCHAR parameter of the stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well, but the VARCHAR variable has always felt like an unwanted "middle man" in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list usually contains only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters, which effectively eliminates the need for the clumsy middle-man VARCHAR parameter.

    Example: Customer Transaction Summary Report

    Let's say we have a report that can summarize the transactions that we've conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it's run for all customers, and sometimes it's run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex, it's been packed into a stored procedure that accepts three parameters:

    - @startDate: The beginning of the date range for which the report should be run.
    - @endDate: The end of the date range for which the report should be run.
    - @customerIds: The customer ids for which the report should be run.

    Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it a table-valued parameter.

    Defining And Using The Table Type

    In order to use a table-valued parameter, we first need to tell SQL Server what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called 'IntegerListTableType' like this:

        CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL)

    Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure. The parameter list for the stored procedure definition might look like:

        CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary
            @startDate datetime,
            @endDate datetime,
            @customerIds IntegerListTableType READONLY

    Note the 'READONLY' keyword following the declaration of the @customerIds parameter. SQL Server requires any table-valued parameter to be marked as 'READONLY', and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it's used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this:

        DECLARE @customerIdList IntegerListTableType
        INSERT @customerIdList VALUES (1)
        INSERT @customerIdList VALUES (2)
        INSERT @customerIdList VALUES (3)

        EXEC dbo.rpt_CustomerTransactionSummary
            @startDate = '2012-05-01',
            @endDate = '2012-06-01',
            @customerIds = @customerIdList

    Note that we can simply declare a variable of type 'IntegerListTableType' just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT ... INTO or INSERT ... SELECT statement if desired.

    Using The Table-Valued Parameter With ADO.NET

    Invoking a stored procedure with a table-valued parameter from ADO.NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here's some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure:

        var customerIdsParameter = new SqlParameter();
        customerIdsParameter.ParameterName = "@customerIds";
        customerIdsParameter.Direction = ParameterDirection.Input;
        customerIdsParameter.TypeName = "IntegerListTableType";
        customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");

    All we're doing here is new'ing up an instance of SqlParameter, setting the parameter's name and direction, specifying the name of the user defined type that this parameter uses, and setting its value. We're assuming here that we have an IEnumerable<int> variable called 'selectedCustomerIds' containing all of the customer ids for which the report should be run. The 'ToIntegerListDataTable' method is an extension method on the IEnumerable<int> type that looks like this:

        public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName)
        {
            var integerListDataTable = new DataTable();
            integerListDataTable.Columns.Add(columnName, typeof(int));
            foreach (var intValue in intValues)
            {
                var nextRow = integerListDataTable.NewRow();
                nextRow[columnName] = intValue;
                integerListDataTable.Rows.Add(nextRow);
            }

            return integerListDataTable;
        }

    Since the 'IntegerListTableType' has a single int column called 'Value', we pass that in for the 'columnName' parameter of the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance, adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter.

    Advanced Functionality

    Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I've found that it covers the majority of situations where I've needed to pass a collection of data for use in a query at run-time. I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A user defined type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters.

    What About Reporting Services?

    Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post, which I'll be putting together and posting sometime in the near future.

    Read the article

  • MSSQL: How to copy a file (pdf, doc, txt...) stored in a varbinary(max) field to a file in a CLR stored procedure

    - by user193655
    I ask this question as a follow-up to this question. A solution that uses bcp and xp_cmdshell, which is not my desired solution, has been posted here: stackoverflow.com/questions/828749/ms-sql-server-2005-write-varbinary-to-file-system (sorry, I cannot post a second hyperlink since my reputation is less than 10). I am new to C# (I am a Delphi developer), but I was able to create a simple CLR stored procedure by following a tutorial. My task is to move a file from the client file system to the server file system (the server can be accessed using a remote IP, so I cannot use a shared folder as the destination, which is why I need a CLR stored procedure). So I plan to: 1) store the file from Delphi in a varbinary(max) column of a temporary table, and 2) call the CLR stored procedure to create a file at the desired path using the data contained in the varbinary(max) field. Imagine I need to move C:\MyFile.pdf to Z:\MyFile.pdf, where C: is a hard drive on the local system and Z: is a hard drive on the server. Could someone modify the code below (not working) to make it work? Here I assume a table called MyTable with two fields: ID (int) and DATA (varbinary(max)). Please note it doesn't make a difference whether the table is a real temporary table or just a table where I temporarily store the data. I would appreciate some exception handling code (so that I can manage an "impossible to save file" exception). I would like to be able to write a new file, or overwrite the file if it already exists.

        [Microsoft.SqlServer.Server.SqlProcedure]
        public static void VarbinaryToFile(int TableId)
        {
            using (SqlConnection connection = new SqlConnection("context connection=true"))
            {
                connection.Open();
                SqlCommand command = new SqlCommand(
                    "select data from mytable where ID = @TableId", connection);
                command.Parameters.AddWithValue("@TableId", TableId);

                // This was the sample code I found to run a query:
                // SqlContext.Pipe.ExecuteAndSend(command);

                // Instead I need something like this (THIS IS META_SYNTAX!!!):
                SqlContext.Pipe.ResultAsStream.SaveToFile('z:\MyFile.pdf');
            }
        }

    (One subquestion is: is this approach correct, or is there a way to pass the data directly to the CLR stored procedure so I don't need to use a temp table?) If the subquestion's answer is no, could you describe the approach for avoiding a temp table? So, is there a better way than the one I describe above (= temp table + stored procedure)? A way to directly pass the data stream from the client application to the CLR stored procedure? (My files can be of any size, including very big.)

    Read the article

  • How to write unit tests to make sure a stored procedure is deleting rows from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to write unit tests. The functionality for one of the forms in my application deletes a user from the User table (and other rows in mapping tables). Currently, the unit test I have created to test this sets up the required objects and then calls the business rules method (passing in the user id), which calls the data access method to execute the stored procedure that deletes the rows in the tables. Is this the correct way to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
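
    One hedged pattern for testing at the database level: wrap an arrange-act-assert script in a transaction that is always rolled back, so no test data persists. The table and procedure names below are hypothetical stand-ins for the ones in the project:

        -- Sketch: verify the delete procedure removes the user row, then roll back.
        BEGIN TRAN;
            -- Arrange: insert a known test user.
            INSERT INTO dbo.Users (UserId, UserName) VALUES (99999, N'test-user');
            -- Act: run the procedure under test.
            EXEC dbo.usp_DeleteUser @UserId = 99999;
            -- Assert: the row must be gone.
            IF EXISTS (SELECT 1 FROM dbo.Users WHERE UserId = 99999)
                RAISERROR('usp_DeleteUser failed to delete the user row', 16, 1);
        ROLLBACK TRAN;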

    Read the article

  • Same data being returned by linq for 2 different executions of a stored procedure?

    - by Paul
    Hello. I have a stored procedure that I am calling through Entity Framework. The stored procedure has 2 date parameters, and I supply different arguments on the 2 occasions I call it. I have verified using SQL Profiler that the stored procedure is being called correctly and returning the correct results. But when I call my method the second time with different arguments, even though the stored procedure brings back the correct results, the list created contains the same data as the first time I called it. For example, with dtStart = 01/08/2009 and dtEnd = 31/08/2009:

        public List<dataRecord> GetData(DateTime dtStart, DateTime dtEnd)
        {
            var tbl = from t in db.SP(dtStart, dtEnd)
                      select t;
            return tbl.ToList();
        }

        GetData(new DateTime(2009, 8, 1), new DateTime(2009, 8, 31));  // tbl.field1 value = 45450 - CORRECT
        GetData(new DateTime(2009, 7, 1), new DateTime(2009, 7, 31));  // tbl.field1 value = 45450 - WRONG, 27456 expected

    Is this a case of Entity Framework being clever and caching? I can't see why it would cache this, though, as it has executed the stored procedure twice. Do I have to do something to close tbl? I'm using Visual Studio 2008 + Entity Framework. I also get the message "query cannot be enumerated more than once" every now and then; I am not sure if that is relevant.

    FULL CODE LISTING

        namespace ProfileDataService
        {
            public partial class DataService
            {
                public static List<MeterTotalConsumpRecord> GetTotalAllTimesConsumption(
                    DateTime dtStart, DateTime dtEnd, EUtilityGroup ug, int nMeterSelectionType,
                    int nCustomerID, int nUserID, string strSelection,
                    bool bClosedLocations, bool bDisposedLocations)
                {
                    dbChildDataContext db = DBManager.ChildDataConext(nCustomerID);
                    var tbl = from t in db.GetTotalConsumptionByMeter(dtStart, dtEnd, (int) ug,
                                  nMeterSelectionType, nCustomerID, nUserID, strSelection,
                                  bClosedLocations, bDisposedLocations, 1)
                              select t;
                    return tbl.ToList();
                }
            }
        }

        /// CALLER
        List<MeterTotalConsumpRecord> _P1Totals;
        List<MeterTotalConsumpRecord> _P2Totals;

        public void LoadData(int nUserID, int nCustomerID, ELocationSelectionMethod locationSelectionMethod,
            string strLocations, bool bIncludeClosedLocations, bool bIncludeDisposedLocations,
            DateTime dtStart, DateTime dtEnd,
            ReportsBusinessLogic.Lists.EPeriodType durMainPeriodType,
            ReportsBusinessLogic.Lists.EPeriodType durCompareToPeriodType,
            ReportsBusinessLogic.Lists.EIncreaseReportType rptType, bool bIncludeDecreases)
        {
            // Code for setting properties using parameters...
            _P2Totals = ProfileDataService.DataService.GetTotalAllTimesConsumption(_P2StartDate, _P2EndDate,
                EUtilityGroup.Electricity, 1, nCustomerID, nUserID, strLocations,
                bIncludeClosedLocations, bIncludeDisposedLocations);
            _P1Totals = ProfileDataService.DataService.GetTotalAllTimesConsumption(_StartDate, _EndDate,
                EUtilityGroup.Electricity, 1, nCustomerID, nUserID, strLocations,
                bIncludeClosedLocations, bIncludeDisposedLocations);
            PopulateLines();  // Fills up a list of objects with information for my report, ready for the totals to be added
            PopulateTotals(_P1Totals, 1);
            PopulateTotals(_P2Totals, 2);
        }

        void PopulateTotals(List<MeterTotalConsumpRecord> objTotals, int nPeriod)
        {
            MeterTotalConsumpRecord objMeterConsumption = null;
            foreach (IncreaseReportDataRecord objLine in _Lines)
            {
                objMeterConsumption = objTotals.Find(delegate(MeterTotalConsumpRecord t)
                    { return t.MeterID == objLine.MeterID; });
                if (objMeterConsumption != null)
                {
                    if (nPeriod == 1)
                    {
                        objLine.P1Consumption = (double)objMeterConsumption.Consumption;
                    }
                    else
                    {
                        objLine.P2Consumption = (double)objMeterConsumption.Consumption;
                    }
                    objMeterConsumption = null;
                }
            }
        }

    Read the article

  • Stored Procedure To Search the AccessRights given to the Users.

    - by thevan
    Hi, I want to display the access rights given to the users for a particular module. I have seven tables: RoleAccess, Roles, Functions, Module, SubModule, Company and Unit. RoleAccess is the main table; the access rights given will be stored in the RoleAccess table only. The RoleAccess table has the following columns: RoleID, CompanyID, UnitID, FunctionID, ModuleID, SubModuleID, Create_f, Update_f, Delete_f, Read_f and Approve_f, where the *_f columns are flags. The Company table has two columns: CompanyID and CompanyName. The Unit table has three columns: UnitID, UnitName and CompanyID. The Roles table has four columns: RoleID, RoleName, CompanyID and UnitID. The Module table has two columns: ModuleID and ModuleName. The SubModule table has three columns: ModuleID, SubModuleID and SubModuleName. The Functions table has four columns: FunctionID, FunctionName, ModuleID and SubModuleID. At first, the RoleAccess table does not contain any records. I want to display the ModuleName, SubModuleName, FunctionName, CompanyID, RoleID, UnitID, FunctionID, ModuleID, SubModuleID, Create_f, Update_f, Delete_f, Read_f and Approve_f. If an access right is assigned to the particular RoleID, the flags in the search results should be 1; otherwise 0. I have written one stored procedure, but it only returns records for the RoleIDs stored in the RoleAccess table. I also want to display the flags as 0 for the roles not stored in the RoleAccess table. I want the stored procedure for this. Can anyone please help me?
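
    A hedged sketch of the shape such a procedure usually takes: drive the query from the Functions/Module/SubModule tables and LEFT JOIN to RoleAccess, so functions with no RoleAccess row still appear with their flags defaulted to 0. The join conditions are assumptions based on the column names above:

        -- Sketch: every function with the given role's flags, 0 where no row exists yet.
        CREATE PROCEDURE dbo.GetRoleAccessRights
            @RoleID int
        AS
        BEGIN
            SELECT m.ModuleName, sm.SubModuleName, f.FunctionName,
                   @RoleID AS RoleID, f.ModuleID, f.SubModuleID, f.FunctionID,
                   ISNULL(ra.Create_f, 0)  AS Create_f,
                   ISNULL(ra.Update_f, 0)  AS Update_f,
                   ISNULL(ra.Delete_f, 0)  AS Delete_f,
                   ISNULL(ra.Read_f, 0)    AS Read_f,
                   ISNULL(ra.Approve_f, 0) AS Approve_f
            FROM Functions f
            INNER JOIN Module m ON m.ModuleID = f.ModuleID
            INNER JOIN SubModule sm ON sm.ModuleID = f.ModuleID AND sm.SubModuleID = f.SubModuleID
            LEFT JOIN RoleAccess ra ON ra.FunctionID = f.FunctionID AND ra.RoleID = @RoleID
        END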

    Read the article

  • Where should the database and mail parameters be stored in a Symfony2 app?

    - by Songo
    In the default folder structure for a Symfony2 project, the database and mail server credentials are stored in the parameters.yml file at ProjectRoot/app/config/parameters.yml, with these default values:

        parameters:
            database_driver: pdo_mysql
            database_host: 127.0.0.1
            database_port: null
            database_name: symfony
            database_user: root
            database_password: null
            mailer_transport: smtp
            mailer_host: 127.0.0.1
            mailer_user: null
            mailer_password: null
            locale: en
            secret: ThisTokenIsNotSoSecretChangeIt

    During development we change these parameters to point at the development database and mail servers. This file is checked into the source code repository. The problem comes when we want to deploy to the production server. We are thinking about automating the deployment process by checking the project out of git and deploying it to the production server. The catch is that our project manager has to manually update these parameters after each update. The production database and mail server parameters are confidential, and only our project manager knows them. I need a way to automate this step, and a suggestion on where to store the production parameters until they are applied.

    Read the article

  • How can I add exception handling to all existing SQL Server 2005 stored procedures, views and functions?

    - by Space Cracker
    We have a portal with a SQL Server 2005 database that contains about 1750 stored procedures, 250 views and 200 functions, and 95% of them have no exception handling in their code. We are searching for any way that would let us add global exception handling in SQL Server - something that receives any exception that happens in any SP, view or function and stores it in a table we made. Is there something like this in SQL Server 2005, or must we write exception handling code in each item?
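
    SQL Server 2005 has no database-wide exception handler, so the usual approach is a per-module TRY/CATCH template that logs to a central table. A minimal sketch, assuming a dbo.ErrorLog table you create yourself (note that views and user-defined functions cannot contain TRY/CATCH, so those would have to be handled by their calling procedures):

        -- Sketch: wrap each procedure body in TRY/CATCH and log failures centrally.
        BEGIN TRY
            -- ... original procedure body; this line just forces an error for the demo ...
            SELECT 1 / 0;
        END TRY
        BEGIN CATCH
            INSERT INTO dbo.ErrorLog
                (ErrorNumber, ErrorSeverity, ErrorProcedure, ErrorLine, ErrorMessage, LoggedAt)
            SELECT ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_PROCEDURE(),
                   ERROR_LINE(), ERROR_MESSAGE(), GETDATE();
        END CATCH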

    Read the article

  • How to display lyrics that are stored in MP3-Tag?

    - by Der_Techniker
    How can I achieve that goal? I tried many players - Banshee, Rhythmbox, Amarok, Exaile... none of them displays lyrics that are already stored in the MP3. They always try to fetch lyrics from the internet. Interestingly, Banshee supports STORING lyrics in the MP3 but not READING them - I find that annoying... One player does it properly, though - gmusicbrowser. But this piece of software has such a confusing user interface that I don't want to use it. Any ideas?

    Read the article

  • How to get variable value from inside a stored procedure in DB2?

    - by Sylvia
    This seems like it should be very easy... anyway, it is in MS SQL Server. In a DB2 stored procedure, how can I just get the value of a variable? Say I have the following stored procedure:

        CREATE PROCEDURE etl.TestABC()
        LANGUAGE SQL
        BEGIN
            declare Stmt varchar(2048);
            set Stmt = 'this is a test';
            -- print Stmt;
            -- select Stmt;
            return 0;
        END @

    I'd like to print out the value of Stmt after I set it. Print doesn't work, and select doesn't work. Somebody said I have to insert it into a table first and then read it after I run the stored procedure. Is this really necessary? Thanks, Sylvia

    EDIT: I should have made clearer that I want to see the value of Stmt each time after I set it, and I may need to set it multiple times within the same stored procedure.

    Read the article

  • Passing huge amounts of data as a hexadecimal (0x123AB...) parameter of a CLR stored procedure in SQL Server

    - by user193655
    I post this question as a follow-up to this question, since the thread is not receiving more answers. I'm trying to understand whether it is possible to pass a large amount of data as "0x5352532F..." to a parameter of a CLR stored procedure. This is to avoid sending the data to a temporary DB field and from there passing it as a varbinary(max) parameter to the CLR stored procedure. I have a triple question: 1) Is it possible, and if yes, how? Let's say I want to pass a pdf file to the CLR stored procedure (not the path, the full bits that make up the file). Something like:

        exec MyCLRStoredProcs.dbo.insertfile
            @file_remote_path = 'c:\temp\test_file.txt',
            @file_contents = 0x4D5A90000300000004000....  --(this long list is the file content)

    where insertfile is a stored proc that writes the binary data I pass in (@file_contents) to the server path (@file_remote_path). 2) Is there a risk of corruption with this approach (or is it the same approach that SQL Server uses behind the scenes)? 3) How do I convert the content of a file into the "0x23423..." hexadecimal representation?

    Read the article
