Search Results

Search found 9299 results on 372 pages for 'policy and procedure manual'.

  • Getting a ResultSet/RefCursor over a database link

    - by JonathanJ
    From the answers to http://stackoverflow.com/questions/1122175/calling-a-stored-proc-over-a-dblink it seems that it is not possible to call a stored procedure and get the ResultSet/RefCursor back if you are making the SP call across a remote DB link. We are also using Oracle 10g. We can successfully get single-value results across the link, and can successfully call the SP and get the results locally, but we get the same 'ORA-24338: statement handle not executed' error when reading the ResultSet from the remote DB. My question: is there any workaround to using the stored procedure? Is a shared view a better solution? Piped rows?

    Sample stored procedure:

        CREATE OR REPLACE PACKAGE BODY example_SP IS
          PROCEDURE get_terminals(p_CD_community IN community.CD_community%TYPE,
                                  p_cursor OUT SYS_REFCURSOR) IS
          BEGIN
            OPEN p_cursor FOR
              SELECT cd_terminal
                FROM terminal t, community c
               WHERE c.cd_community = p_CD_community
                 AND t.id_community = c.id_community;
          END;
        END example_SP;
        /

    Sample Java code that works locally but not remotely:

        Connection conn = DBConnectionManagerFactory.getDBConnectionManager().getConnection();
        CallableStatement cstmt = null;
        ResultSet rs = null;
        String community = "EXAMPLE";
        try {
            cstmt = conn.prepareCall("{call example_SP.get_terminals@remote_address(?,?)}");
            cstmt.setString(1, community);
            cstmt.registerOutParameter(2, OracleTypes.CURSOR);
            cstmt.execute();
            rs = (ResultSet) cstmt.getObject(2);
            while (rs.next()) {
                LogUtil.getLog().logInfo("Terminal code=" + rs.getString("cd_terminal"));
            }
        }

  • Partial mapping in Entity Framework 4

    - by Dimi Toulakis
    Hi guys, I want to be able to do the following: I have a model, and inside there I have an entity with the following structure:

        public class Client
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Description { get; set; }
        }

    What I want now is to get just the client name based on the id. Therefore I wrote a stored procedure which does this:

        CREATE PROCEDURE [Client].[GetBasics]
            @Id INT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            SELECT Name
            FROM Client.Client
            INNER JOIN Client.Validity ON ClientId = Client.Id
            WHERE Client.Id = @Id;
        END

    Now, going back to VS, I update the model from the database with the stored procedure included. The next step is to map this stored procedure to the Client entity as a function import. This also works fine. Trying to load one client's name, however, results in an error during runtime: "The data reader is incompatible with the specified 'CSTestModel.Client'. A member of the type, 'Id', does not have a corresponding column in the data reader with the same name." I am OK with the message. I know how to fix this (returning Id, Name, Description as the result set). The idea behind this question is the following: I just want to load parts of the entity, not the complete entity itself. Is there a solution to my problem (except creating complex types)? And if yes, can someone point me in the right direction? Many thanks, Dimi

  • Help with dynamic-wind and call/cc

    - by josh
    I am having some trouble understanding the behavior of the following Scheme program:

        (define c
          (dynamic-wind
            (lambda () (display 'IN) (newline))
            (lambda () (call/cc (lambda (k)
                                  (display 'X) (newline)
                                  k)))
            (lambda () (display 'OUT) (newline))))

    As I understand it, c will be bound to the continuation created right before (display 'X). But using c seems to modify itself! The define above prints (as I expected) IN, X and OUT:

        IN
        X
        OUT

    And it is a procedure:

        #;2> c
        #<procedure (a9869 . results1678)>

    Now, I would expect that when it is called again, X would be printed, but it is not!

        #;3> (c)
        IN
        OUT

    And now c is not a procedure anymore, and a second invocation of c won't work!

        #;4> c
        ;; the REPL doesn't answer this, so there are no values returned
        #;5> (c)
        Error: call of non-procedure: #<unspecified>

        Call history:

        <syntax>          (c)
        <eval>    (c)     <--

    I was expecting that each invocation of (c) would do the same thing -- print IN, X, and OUT. What am I missing?

  • How can I detect message boxes popping up in another process?

    - by Frerich Raabe
    I'd like to execute some code whenever a (any!) message box (as spawned by the MessageBox function) is shown in another process. I didn't start the process I'm monitoring. I can think of three approaches:

    1. Install a global CBT hook procedure which tells me whenever a window is created on the desktop. Then check whether the window belongs to the process I'm monitoring and whether the class name is #32770 (which is the class name of dialogs according to the About Window Classes page on MSDN). This would probably work, but it would pull the DLL which contains the hook procedure into virtually every process on the desktop, and the hook procedure gets called a lot. It smells like a potential performance problem.
    2. Try to subclass the #32770 system window class (is this possible at all?) and look for WM_CREATE messages in my custom window procedure.
    3. Intercept the MessageBox API call (even though the remote process is running already!) and call my code from the hook function.

    So far, I only know that the first idea is feasible, but it seems really inefficient. Can anybody think of a simpler solution to this problem?

  • NHibernate with nothing but stored procedures

    - by ChrisB2010
    I'd like to have NHibernate call a stored procedure when ISession.Get is called to fetch an entity by its key, instead of using dynamic SQL. We have been using NHibernate and allowing it to generate our SQL for queries and inserts/updates/deletes, but now may have to deploy our application to an environment that requires us to use stored procedures for all database access. We can use sql-insert, sql-update, and sql-delete in our .hbm.xml mapping files for inserts/updates/deletes. Our HQL and criteria queries will have to be replaced with stored procedure calls. However, I have not figured out how to force NHibernate to use a custom stored procedure to fetch an entity by its key. I still want to be able to call ISession.Get, as in:

        using (ISession session = MySessionFactory.OpenSession())
        {
            return session.Get<Customer>(customerId);
        }

    and also lazy load objects, but I want NHibernate to call my "GetCustomerById" stored procedure instead of generating the dynamic SQL. Can this be done? Perhaps NHibernate is no longer a fit given this new environment we must support.

  • SendMessage to window created by AllocateHWnd causes deadlock

    - by user2704265
    In my Delphi project, I derive a thread class TMyThread and follow the advice from forums to use AllocateHWnd to create a window handle. In the TMyThread object, I call SendMessage to send messages to the window handle. When the messages sent are in small volume, the application works well. However, when the messages are in large volume, the application deadlocks and stops responding. I think maybe the message queue is full: in LogWndProc there is only code to process the message, but no code to remove the messages from the queue, which may cause all the processed messages to still exist in the queue until the queue becomes full. Is that correct? The code is attached below:

        var
          hLogWnd: HWND = 0;

        procedure TForm1.FormCreate(Sender: TObject);
        begin
          hLogWnd := AllocateHWnd(LogWndProc);
        end;

        procedure TForm1.FormDestroy(Sender: TObject);
        begin
          if hLogWnd <> 0 then
            DeallocateHWnd(hLogWnd);
        end;

        procedure TForm1.LogWndProc(var Message: TMessage);
        var
          S: PString;
        begin
          if Message.Msg = WM_UPDATEDATA then
          begin
            S := PString(Message.LParam);
            try
              List1.Items.Add(S^);
            finally
              Dispose(S);
            end;
          end
          else
            Message.Result := DefWindowProc(hLogWnd, Message.Msg, Message.WParam, Message.LParam);
        end;

        procedure TMyThread.SendLog(I: Integer);
        var
          Log: PString;
        begin
          New(Log);
          Log^ := 'Log: current stag is ' + IntToStr(I);
          SendMessage(hLogWnd, WM_UPDATEDATA, 0, LPARAM(Log));
          Dispose(Log);
        end;

  • can't commit or rollback, MySQL out of sync error on .net

    - by sergiogx
    I'm having trouble with a stored procedure: I can't commit after I execute it. It is showing this error: "[System.Data.Odbc.OdbcException] = {"ERROR [HY000] [MySQL][ODBC 5.1 Driver]Commands out of sync; you can't run this command now"}". The SP by itself works fine. Does anyone have an idea of what might be happening?

    .NET code:

        [WebMethod()]
        [SoapHeader("sesion")]
        public Boolean aceptarTasaCero(int idMunicipio, double valor)
        {
            Boolean resultado = false;
            OdbcConnection cxn = new OdbcConnection();
            cxn.ConnectionString = ConfigurationManager.ConnectionStrings["mysqlConnection"].ConnectionString;
            cxn.Open();
            OdbcCommand cmd = new OdbcCommand("call aceptarTasaCero(?,?)", cxn);
            cmd.Parameters.Add("idMunicipio", OdbcType.Int).Value = idMunicipio;
            cmd.Parameters.Add("valor", OdbcType.Double).Value = valor;
            cmd.Transaction = cxn.BeginTransaction();
            try
            {
                OdbcDataReader dr = cmd.ExecuteReader();
                if (dr.Read())
                {
                    resultado = Convert.ToBoolean(dr[0]);
                }
                cmd.Transaction.Commit();
            }
            catch (Exception ex)
            {
                ExBi.log(ex, sesion.idUsuario);
                cmd.Transaction.Rollback();
            }
            finally
            {
                cxn.Close();
            }
            return resultado;
        }

    And this is the code for the stored procedure:

        DELIMITER $$

        DROP PROCEDURE IF EXISTS `aceptartasacero` $$
        CREATE DEFINER=`database`@`%` PROCEDURE `aceptartasacero`(pidMun INTEGER, pvalor double)
        BEGIN
          declare vExito BOOLEAN;
          INSERT INTO tasacero(anio, valor, idmunicipios)
          VALUES (YEAR(curdate()), pValor, pidMun);
          set vExito = true;
          select vExito;
        END $$

        DELIMITER ;

    Thanks.

  • How to trigger events together on two different classes

    - by XBasic3000
    I have two object classes in a single unit. Is it possible to tie the two events together, so that when the TMyFirstClass event is fired, the TMySecondClass event also fires? Assuming:

        //{Class 1}-------------------------------------------------------------
        type
          TOnEventTrigger = procedure(Sender: TObject; Value: integer);

          TMyFirstClass = class(TComponent)
          private
            ....
          public
            ....
            property OnEventTrigger: TOnEventTrigger read FEvent write FEvent;
          end;

        procedure TMyFirstClass.FEvnt(Sender: TObject; Value: integer);
        begin
          // here it normally triggers the event
          // if Assigned(OnEventTrigger) then OnEventTrigger(Self, FSomevalue);
          // PostMessage(GetForegroundWindow, WM_USER + 3, 0, 0);
          // this is what I did here to get the result of FSomevalue,
          // but this is not ideal: it works only on the focused window.
        end;

        //{Class 2}-------------------------------------------------------------
        type
          TOnEventTrigger = procedure(Sender: TObject; Value: integer);

          TMySecondClass = class(TObject)
          private
            ....
          public
            ....
            property OnEventTrigger: TOnEventTrigger read FEvent write FEvent;
          end;

        procedure TMySecondClass.FEvnt(Sender: TObject; Value: integer);
        begin
          // I want this to trigger whenever the one above is fired
          // if Assigned(OnEventTrigger) then OnEventTrigger(Self, FSomevalue);
        end;

  • Copying contents of a MySQL table to a table in another (local) database

    - by Philip Eve
    I have two MySQL databases for my site - one is for a production environment and the other, much smaller, is for a testing/development environment. Both have identical schemas (except when I am testing something I intend to change, of course). A small number of the tables are for internationalisation purposes:

    TransLanguage - non-English languages
    TransModule - modules (bundles of phrases for translation, that can be loaded individually by PHP scripts)
    TransPhrase - individual phrases, in English, for potential translation
    TranslatedPhrase - translations of phrases that are submitted by volunteers
    ChosenTranslatedPhrase - screened translations of phrases

    The volunteers who do translation are all working on the production site, as they are regular users. I wanted to create a stored procedure that could be used to synchronise the contents of four of these tables - TransLanguage, TransModule, TransPhrase and ChosenTranslatedPhrase - from the production database to the testing database, so as to keep the test environment up to date and prevent "unknown phrase" errors from being in the way while testing. My first effort was to create the following procedure in the test database:

        CREATE PROCEDURE `SynchroniseTranslations` ()
        LANGUAGE SQL
        NOT DETERMINISTIC
        MODIFIES SQL DATA
        SQL SECURITY DEFINER
        BEGIN
          DELETE FROM `TransLanguage`;
          DELETE FROM `TransModule`;
          INSERT INTO `TransLanguage` SELECT * FROM `PRODUCTION_DB`.`TransLanguage`;
          INSERT INTO `TransModule` SELECT * FROM `PRODUCTION_DB`.`TransModule`;
          INSERT INTO `TransPhrase` SELECT * FROM `PRODUCTION_DB`.`TransPhrase`;
          INSERT INTO `ChosenTranslatedPhrase` SELECT * FROM `PRODUCTION_DB`.`ChosenTranslatedPhrase`;
        END

    When I try to run this, I get an error message: "SELECT command denied to user 'username'@'localhost' for table 'TransLanguage'". I also tried to create the procedure to work the other way around (that is, to exist as part of the data dictionary for the production database rather than the test database). If I do it that way, I get an identical message, except it tells me I'm denied the DELETE command rather than SELECT. I have made sure that my user has INSERT, DELETE, SELECT, UPDATE and CREATE ROUTINE privileges on both databases. However, it seems as though MySQL is reluctant to let this user exercise its privileges on both databases at the same time. How come, and is there a way around this?

  • Something about Stream

    - by sforester
    I've been working on something that makes use of streams, and I found myself unclear about some stream concepts (you can also view another question posted by me at http://stackoverflow.com/questions/2933923/about-redirected-stdout-in-system-diagnostics-process ).

    1. How do you indicate that you have finished writing to a stream - by writing something like an EOF?
    2. Following the previous question: if I have written an EOF (or something like that) to a stream but didn't close the stream, and then I want to write something else to the same stream, can I just start writing to it with no more setup required?
    3. If a procedure tries to read a stream (like stdin) that no one has written anything to, the reading procedure will be blocked. Eventually some data arrives and the procedure will just read until the writing is done, which is indicated by getting a return of a 0 count of bytes read rather than being blocked. Now if the procedure issues another read to the same stream, it will still get a 0 count and return immediately, while I was expecting it to block since no one is writing to the stream now. So does the stream hold different states when the stream is opened but no one has written to it yet, versus when someone has finished a writing session?

    I'm using Windows and the .NET Framework, in case anything is platform-specific. Thanks a lot!

  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi All, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that is analyzing the CDC information and then returning data. How is that a problem, you ask? Well, the order of operations is as follows:

    1. Pre-deployment script
    2. Tables
    3. Indexes, keys, etc.
    4. Procedures
    5. Post-deployment script

    Inside the post-deployment script is where I enable CDC. Herein lies the problem: the procedure that is acting on the CDC tables is bombing because they don't exist yet! I've tried to put the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context.
        C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC such that its references are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!?! Now, I have a workaround:

    1. Comment out the sproc body
    2. Deploy (CDC is created)
    3. Uncomment the sproc
    4. Deploy

    Everything is great until the next time I update a CDC-tracked table. Then I need to comment out the 'offending' procedure. Thanks for reading my question and thanks for your help!
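
    One possible angle here (a hedged sketch of mine, not from the original question; dbo.Foo is a placeholder table name): make the CDC enablement idempotent and run it in the pre-deployment script as well, so on re-deployments the capture-instance objects already exist before the procedures are created.

        -- pre-deployment sketch: enable CDC before procedures are (re)created
        IF (SELECT is_cdc_enabled FROM sys.databases WHERE name = DB_NAME()) = 0
            EXEC sys.sp_cdc_enable_db;

        IF OBJECT_ID(N'dbo.Foo', N'U') IS NOT NULL
           AND NOT EXISTS (SELECT 1 FROM sys.tables
                           WHERE name = 'Foo'
                             AND schema_id = SCHEMA_ID('dbo')
                             AND is_tracked_by_cdc = 1)
            EXEC sys.sp_cdc_enable_table
                 @source_schema = N'dbo',
                 @source_name   = N'Foo',
                 @role_name     = NULL;

    This only helps on deployments to an existing database; on a fresh deploy the table itself does not exist pre-deployment either, which is exactly the ordering problem described above, so the comment-out workaround would still apply there.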

  • temporary tables within stored procedures on slave servers with readonly set

    - by lau
    Hi, we have set up a master/slave replication scheme, and we've had problems lately because some users wrote directly to the slave instead of the master, making the whole setup inconsistent. To prevent these problems from happening again, we've decided to remove the insert, delete, update, etc. rights from the users accessing the slave. The problem is that some stored procedures (for reading) require temporary tables. I read that changing the global variable read_only to true would do what I want and still allow the stored procedures to work correctly ( http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_read_only ), but I keep getting the error:

        The MySQL server is running with the --read-only option so it cannot execute this statement (1290)

    The stored procedure that I used (for testing purposes) is this one:

        DELIMITER $$

        DROP PROCEDURE IF EXISTS test_readonly $$
        CREATE DEFINER=`dbuser`@`%` PROCEDURE test_readonly()
        BEGIN
          CREATE TEMPORARY TABLE IF NOT EXISTS temp (
            BT_INDEX int(11),
            BT_DESC VARCHAR(10)
          );
          INSERT INTO temp (BT_INDEX, BT_DESC) VALUES (222, 'walou'), (111, 'bidouille');
          DROP TABLE temp;
        END $$

        DELIMITER ;

    The CREATE TEMPORARY TABLE and the DROP TABLE work fine with the read_only flag - if I comment out the INSERT line, it runs fine - but whenever I want to insert into or delete from that temporary table, I get the error message. I use MySQL 5.1.29-rc, and my default storage engine is InnoDB. Thanks in advance; this problem is really driving me crazy.

  • Database.ExecuteNonQuery does not return

    - by dan-waterbly
    I have a very odd issue. When I execute a specific stored procedure from C# using SqlCommand.ExecuteNonQuery, my stored procedure is never executed. Furthermore, SQL Profiler does not register the command at all. I do not receive a command timeout, and no exception is thrown. The weirdest thing is that this code has worked fine over 1,200,000 times, but for this one particular file I am inserting into the database, it just hangs forever. When I kill the application, I receive this error in the event log of the database server: "A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0)." Which makes me think that the database server is receiving the command, though SQL Profiler says otherwise. I know that the appropriate permissions are set, and that the connection string is right, as this piece of code and stored procedure work fine with other files. Below is the code that calls the stored procedure. It may be important to note that the file I am trying to insert is 33.5 MB, but I have added more than 10,000 files larger than 500 MB, so I do not think the size is the issue:

        using (SqlConnection sqlconn = new SqlConnection(ConfigurationManager.ConnectionStrings["TheDatabase"].ConnectionString))
        using (SqlCommand command = sqlconn.CreateCommand())
        {
            command.CommandText = "Add_File";
            command.CommandType = CommandType.StoredProcedure;
            command.CommandTimeout = 30; // should time out in 30 seconds, but doesn't...

            command.Parameters.AddWithValue("@ID", ID).SqlDbType = SqlDbType.BigInt;
            command.Parameters.AddWithValue("@BinaryData", byteArr).SqlDbType = SqlDbType.VarBinary;
            command.Parameters.AddWithValue("@FileName", fileName).SqlDbType = SqlDbType.VarChar;

            sqlconn.Open();
            command.ExecuteNonQuery();
        }

    There is no firewall between the server making the call and the database server, and the Windows firewalls have been disabled to troubleshoot this issue.
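
    A server-side check that can help in a case like this (my suggestion, not part of the original question): while the call is hung, query the DMVs from another session to see whether the Add_File request ever arrived at the engine and, if so, what it is waiting on.

        -- diagnostic sketch: look for the hung call and its wait state
        SELECT r.session_id, r.status, r.command,
               r.wait_type, r.wait_time, r.blocking_session_id
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE t.text LIKE '%Add_File%'
          AND r.session_id <> @@SPID;

    If no row shows up, the command really never reached the engine, pointing at the network/TDS layer (consistent with the "input error: 64" in the event log); if a row shows a wait type or a blocking session, the problem is on the SQL Server side after all.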

  • When should we use Views, Temporary Tables and Direct Queries? What are the Performance issues in a

    - by Shantanu Gupta
    I want to know the performance of using views, temp tables and direct queries inside a stored procedure. I have a table that gets created every time a trigger gets fired. I know this trigger will be fired very rarely, and only once at the time of setup. Now I have to use that trigger-created table in many places for fetching data, and I have confirmed that no one makes any changes to that table, i.e. it is a read-only table. I have to use this table's data along with multiple other tables to join and fetch results for further queries, say:

        select * from triggertable

    By using a temp table:

        select ... into #tx from triggertable join t2 join t3 -- and so on
        select a, b, c from #tx   -- do something
        select d, e, f from #tx   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    By using a view:

        create view viewname as
        select ... from triggertable join t2 join t3 -- and so on

        select a, b, c from viewname   -- do something
        select d, e, f from viewname   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    This view can be used in other places as well, so I would be creating it at the database level rather than in the SP.

    By using a direct query:

        select a, b, c from (select ... from triggertable join t2 join t3 join ...)   -- do something
        select d, e, f from (select ... from triggertable join t2 join t3 join ...)   -- do something
        -- and so on: around 6-7 queries in a row in a stored procedure

    Now I can use a view / temporary table / direct query in all upcoming queries. What would be the best to use in this case?
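
    For what it's worth, a minimal sketch of the temp-table variant (all table and column names are placeholders carried over from the question, not a definitive design): materialize the shared join once, then let the 6-7 follow-up queries read only #tx.

        CREATE PROCEDURE dbo.usp_ReportFromTriggerTable
        AS
        BEGIN
            SET NOCOUNT ON;

            -- pay the join cost once
            SELECT t.a, t.b, t.c, t2.d, t2.e, t3.f
            INTO #tx
            FROM triggertable AS t
            JOIN t2 ON t2.id = t.id
            JOIN t3 ON t3.id = t.id;

            -- the follow-up queries reuse the materialized rows
            SELECT a, b, c FROM #tx;
            SELECT d, e, f FROM #tx;
            -- ...and so on
        END

    A plain (non-indexed) view saves nothing at run time here, since it re-executes the join for every query that references it, so the temp table trades a little tempdb space for running the join only once.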

  • Some questions about the .NET Entity Framework and Stored Procedures

    - by Bara
    Hey everyone, I had a couple of questions relating to the .NET Entity Framework and using stored procedures:

    1. I know that we can right-click a stored procedure and choose Function Import to be able to use it with code. Is there a way to do this for many stored procedures at once?
    2. When doing a Function Import, I can create a new complex type or use an existing complex type. But how can I access complex types/objects that are outside of the edmx file? That is, if I have a class in my project, is it possible to access it while doing a Function Import?
    3. When calling the stored procedure from code, it returns an IEnumerable of the complex type I set it as. However, sometimes these complex types do not have all of the properties that I need, so I create a new class in my project that inherits from the complex type used in the stored procedure. Problem is, I can't seem to cast the complex type returned from the stored procedure to the new class I created. Any reason why I can't do this? What I ended up doing is looping through the IEnumerable and adding each item to a new list of the class that I created, but this feels and looks messy.

    Bara

  • Strange behavior with large Object Types

    - by Peter Lang
    I noticed that calling a method on an Oracle object type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the object type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I just remove the call to dummy, performance is much better (the collection still contains the same number of records):

        Calling dummy:    Not calling dummy:
        11                0
        81                0
        158               0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
          tab t_tab,
          Member Procedure dummy
        );

        Create Type Body test_type As
          Member Procedure dummy As
          Begin
            Null; --# Do nothing
          End dummy;
        End;

        Declare
          v_test_type test_type := New test_type( New t_tab() );

          Procedure run_test As
            start_time NUMBER := dbms_utility.get_time;
          Begin
            For i In 1 .. 200 Loop
              v_test_Type.tab.Extend;
              v_test_Type.tab(v_test_Type.tab.Last) := Lpad(' ', 10000);
              v_test_Type.dummy(); --# Removed this line in second test
            End Loop;
            dbms_output.put_line( dbms_utility.get_time - start_time );
          End run_test;
        Begin
          run_test;
          run_test;
          run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?

  • T-SQL: Dynamic Query by Selected Column in ASP.NET GridView

    - by jp2code
    I'm trying to modify a stored procedure used in an ASP.NET page. By default, the stored procedure returns all of the data, which can be overwhelming for employees in the plant. I want to add a drop-down menu item for the column name and a text box for a value, to allow our employees to search the data for their specific items. What I would like is the ability to pass in a column name and a column value to search on, similar to the following:

        DECLARE @colName nVarChar(50), @colValue nVarChar(50)
        SET @colName = 'EmployeeID'
        SET @colValue = '007135'

        SELECT Column1, Column2, Column3, Column4, Column5, Column6, Column7
        FROM viewNum1
        WHERE ((@colName IS NULL) OR (@colValue IS NULL) OR ('[' + @colName + ']' = @colValue))

    If null values are passed in for @colName and @colValue, all records are returned; however, if I specify @colName = 'EmployeeID' and @colValue = '007135' (a value that does exist in the database), no records are returned. Next is the problem that I am running an old SQL Server 2000 database that does not allow the stored procedure to access the table column names, and the whole technique looks prone to SQL injection. Finally, I don't see how to bind my GridView control to this and still have the ability to display all records. How would I write such a filtering stored procedure?
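
    One hedged observation plus a sketch (mine, not from the question): the predicate '[' + @colName + ']' = @colValue compares two literal strings to each other, so it can never match a column's data, which is why no rows come back. The usual fix is to whitelist the column name against the catalog and bind the value as a real parameter with sp_executesql; the sketch below sticks to SQL Server 2000-compatible syntax.

        DECLARE @colName sysname, @colValue nvarchar(50), @sql nvarchar(4000)
        SET @colName = N'EmployeeID'
        SET @colValue = N'007135'

        -- whitelist: the column must actually exist on the view
        IF @colName IS NOT NULL AND NOT EXISTS (
            SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
            WHERE TABLE_NAME = N'viewNum1' AND COLUMN_NAME = @colName)
        BEGIN
            RAISERROR('Unknown column name.', 16, 1)
            RETURN
        END

        SET @sql = N'SELECT Column1, Column2, Column3, Column4, Column5, Column6, Column7 FROM viewNum1'
        IF @colName IS NOT NULL AND @colValue IS NOT NULL
            SET @sql = @sql + N' WHERE ' + QUOTENAME(@colName) + N' = @val'

        -- the value travels as a parameter, never as spliced SQL
        EXEC sp_executesql @sql, N'@val nvarchar(50)', @val = @colValue

    Leaving both parameters null skips the WHERE clause entirely, which also covers the "display all records" case for the GridView binding.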

  • SQL Server 2008 Stored Proc suddenly returns -1

    - by aaginor
    I use the following stored procedure from my SQL Server 2008 database to return a value to my C# program:

        ALTER PROCEDURE [dbo].[getArticleBelongsToCatsCount]
            @id int
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @result int;
            SET @result = (SELECT COUNT(*) FROM art_in_cat WHERE child_id = @id);
            RETURN @result;
        END

    I use a SqlCommand object to call this stored procedure:

        public int ExecuteNonQuery()
        {
            try
            {
                return _command.ExecuteNonQuery();
            }
            catch (Exception e)
            {
                Logger.instance.ErrorRoutine(e, "Text: " + _command.CommandText);
                return -1;
            }
        }

    Until recently, everything worked fine. All of a sudden, the stored procedure returned -1. At first I suspected that the ExecuteNonQuery call had caused an exception, but stepping through the function shows that no exception is thrown and the return value comes directly from return _command.ExecuteNonQuery();. I checked the following parameters, and they were as expected: the connection object was set to the correct database with correct access values, and the parameter for the SP was there and contained the right type, direction and value. Then I checked the SP via the SQL manager, using the same parameter value as the one for which my C# program brings back -1 (btw, I checked some more parameter values in my C# program and they ALL returned -1), and in the manager the SP returns the correct value. It looks like the call from my C# program is somehow bugged, but as I don't get any error (it's just the -1 from the SP), I have no idea where to look for a solution.
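
    Worth noting here (my reading, not part of the original post): SqlCommand.ExecuteNonQuery does not surface a procedure's RETURN value at all - it returns the number of rows affected, which is -1 whenever SET NOCOUNT ON is in effect, as it is in this procedure. Reading the RETURN value needs a parameter with ParameterDirection.ReturnValue on the C# side; the equivalent T-SQL check looks like this (the @id value 1 is a placeholder):

        DECLARE @rc int;
        EXEC @rc = dbo.getArticleBelongsToCatsCount @id = 1;
        SELECT @rc AS ReturnValue;  -- the value ExecuteNonQuery never reports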

  • Why is DivMod Limited to Words (<=65535)?

    - by Andreas Rejbrand
    In Delphi, the declaration of the DivMod function is

        procedure DivMod(Dividend: Cardinal; Divisor: Word;
          var Result, Remainder: Word);

    Thus the divisor, result, and remainder cannot be greater than 65535, a rather severe limitation. Why is this? Why couldn't the declaration be

        procedure DivMod(Dividend: Cardinal; Divisor: Cardinal;
          var Result, Remainder: Cardinal);

    The procedure is implemented using assembly, and is therefore probably extremely fast. Would it not be possible for the code

        PUSH    EBX
        MOV     EBX,EDX
        MOV     EDX,EAX
        SHR     EDX,16
        DIV     BX
        MOV     EBX,Remainder
        MOV     [ECX],AX
        MOV     [EBX],DX
        POP     EBX

    to be adapted to cardinals? How much slower is the naïve attempt

        procedure DivModInt(const Dividend: integer; const Divisor: integer;
          out result: integer; out remainder: integer);
        begin
          result := Dividend div Divisor;
          remainder := Dividend mod Divisor;
        end;

    which is not (?) limited to 16-bit integers?

  • Implementing search functionality with multiple optional parameters against database table.

    - by quarkX
    Hello, I would like to check if there is a preferred design pattern for implementing search functionality with multiple optional parameters against a database table, where access to the database must be only via stored procedures. The targeted platform is .NET with a SQL Server 2005/2008 backend, but I think this is a pretty generic problem. For example, we have a Customer table and we want to provide search functionality in the UI for different parameters, like customer type, customer state, customer ZIP, etc. All of them are optional and can be selected in any combination; in other words, the user can search by customer type only, or by customer type and customer ZIP, or any other possible combination. There are several available design approaches, but all of them have some disadvantages, and I would like to ask if there is a preferred design among them, or if there is another approach.

    1. Generate the SQL WHERE clause dynamically in the business tier, based on the search request from the UI, and pass it to a stored procedure as a parameter, something like @Where = 'where CustomerZip = 111111'. Inside the stored procedure, build a dynamic SQL statement and execute it with sp_executesql. Disadvantage: dynamic SQL, SQL injection.
    2. Implement a stored procedure with multiple input parameters representing the search fields from the UI, and use the following construction to select only the records matching the requested fields (see the sketch after this list):

           WHERE (CustomerType = @CustomerType OR @CustomerType IS NULL)
             AND (CustomerZip = @CustomerZip OR @CustomerZip IS NULL)
             AND ...

       Disadvantage: possible performance issues in the SQL.
    3. Implement a separate stored procedure for each combination of search parameters. Disadvantage: the number of stored procedures will increase rapidly with the number of search parameters, and code is repeated.
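
    For approach 2, a commonly used mitigation (my sketch, with assumed column names; note that OPTION (RECOMPILE) only embeds the actual parameter values into the plan reliably on SQL Server 2008 SP1 CU5 and later) is to let the optimizer build a fresh plan per call:

        CREATE PROCEDURE dbo.SearchCustomers
            @CustomerType  int         = NULL,
            @CustomerState char(2)     = NULL,
            @CustomerZip   varchar(10) = NULL
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT CustomerId, CustomerType, CustomerState, CustomerZip
            FROM dbo.Customer
            WHERE (@CustomerType  IS NULL OR CustomerType  = @CustomerType)
              AND (@CustomerState IS NULL OR CustomerState = @CustomerState)
              AND (@CustomerZip   IS NULL OR CustomerZip   = @CustomerZip)
            OPTION (RECOMPILE);  -- plan compiled for the actual parameter values
        END

    This keeps the injection safety of approach 2 while avoiding one generic plan serving every parameter combination, at the cost of a compile on each execution.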

  • '<=' operator is not working in sql server 2000

    - by Lalit
    Hello, the scenario is this: the database is in the maintenance phase and was not developed by our developers - it is an existing database developed by the 'xyz' company in SQL Server 2000, and it is a live database where I am working now. I wanted to write a stored procedure that retrieves the records from date1 to date2. So the query is:

        SELECT * FROM MyTableName
        WHERE colDate >= '3-May-2010' AND colDate <= '5-Oct-2010'
          AND colName = 'xyzName'

    To my understanding, I should get the data including both the upper-bound and the lower-bound dates, but somehow I am getting records from '3-May-2010' (which is fine, but) up to '10-Oct-2010'. As I observed in the table design, the developer used varchar to store ColDate; I know this was the wrong choice on their part. So in my stored procedure I also used varchar parameters @FromDate1 and @ToDate as inputs for the SP, and this gives me the result I explained. I tried declaring the parameters as datetime, but it shows an error while saving/altering the stored procedure: "@FromDate1 has invalid datatype", and the same for @ToDate. The situation is that I cannot change the table design at all. What do I have to do here? I know we can use user-defined table types in SQL Server 2008, but this is SQL Server 2000, which does not support them. Please guide me through this scenario.

    Edited: I am trying to write the SP like this:

        CREATE PROCEDURE USP_Data
            (@Location varchar(100),
             @FromDate DATETIME,
             @ToDate DATETIME)
        AS
        SELECT *
        FROM dbo.TableName
        WHERE CAST(Dt AS DATETIME) >= @FromDate
          AND CAST(Dt AS DATETIME) <= @ToDate
          AND Location = @Location
        GO

    but I am getting the error "Arithmetic overflow error converting expression to data type datetime" in SQL Server 2000. What could that be? Am I wrong somewhere? Also, the "(202 row(s) affected)" message changes every time in a circular manner: first it says (122 row(s) affected), run again and it says (80 row(s) affected), again (202 row(s) affected), again (122 row(s) affected). I cannot understand what is going on.
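
    Two hedged notes on what is likely happening (my reading, not from the thread): with a varchar column, >= and <= compare strings character by character, so ordering follows text rather than the calendar - for example, '10-Oct-2010' <= '5-Oct-2010' holds as a string comparison because '1' < '5', which is consistent with the stray October rows. And the arithmetic overflow on CAST usually means at least one row's Dt value does not convert to datetime at all; guarding the conversion row by row avoids it:

        CREATE PROCEDURE USP_Data
            (@Location varchar(100),
             @FromDate datetime,
             @ToDate   datetime)
        AS
        SELECT *
        FROM dbo.TableName
        WHERE Location = @Location
          -- CASE keeps unconvertible rows from ever reaching CAST
          AND CASE WHEN ISDATE(Dt) = 1 THEN CAST(Dt AS datetime) END >= @FromDate
          AND CASE WHEN ISDATE(Dt) = 1 THEN CAST(Dt AS datetime) END <= @ToDate
        GO

    The CASE wrapper matters because SQL Server does not guarantee predicate evaluation order, so a plain "ISDATE(Dt) = 1 AND CAST(Dt AS datetime) >= @FromDate" filter can still hit the overflow.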

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on SafePeak, which is available for free download.

    Today, I'd like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. SafePeak's software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I'd like to look more closely at some of these options, as some of these capabilities could help you address lagging database performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual "Basic Caching" and SafePeak's "Dynamic Caching".

    Basic Caching

    Basic Caching (the stale, static cache) is the ability to put the results from a query into cache for a certain period of time. It is based on TTL (Time-To-Live) and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data, meaning that Basic Caching is really a static/stale cache. As you can tell, this approach has its limitations.

    Dynamic Caching

    Dynamic Caching (the non-stale cache) is the ability to put the results from a query into cache while maintaining transaction awareness in the cache, looking for possible data modifications. The modifications can come as a result of:

    - DML commands (update/insert/delete),
    - indirect modifications due to triggers on other tables,
    - executions of stored procedures with internal DML commands,
    - complex cases of stored procedures with multiple levels of internal stored-procedure logic.

    When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands. Now that we have a basic understanding of the differences between "basic" and "dynamic" caching, let's dive in deeper.

    SafePeak: A comprehensive and versatile caching platform

    SafePeak comes with a wide range of caching options. Some of SafePeak's caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications.

    - Automated caching of SQL queries: fully/semi-automated caching of all "read" SQL queries, containing any types of data, including Blobs, XMLs and Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries and categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures.
    - Automated caching of stored procedures: fully or semi-automated caching of all "read" stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak's metadata-learning process, and their SQL schemas are parsed - resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures), enabling automated or semi-automated (manually review and activate by a mouse-click) cache activation, with full understanding of the transaction logic for real-time cache invalidation.
    - Transaction-aware cache: automated cache awareness for SQL transactions (SQL and in-procs).
    - Dynamic SQL caching: procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response-time value even in the most complicated use-cases.
    - Fully automated caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as "read and deterministic" are automatically activated for caching.
    - Semi-automated caching: SQL Patterns categorized as "read and non-deterministic" are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation).
    - Fully dynamic caching: automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of "write" commands (a DML or a stored procedure) to one of the relevant tables. A default setting.
    - Semi-dynamic caching: a manual cache configuration option that reduces the sensitivity of specific SQL Patterns to "write" commands on certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load).
    - Scheduled cache eviction: a manual cache configuration option enabling scheduled SQL Pattern cache eviction at certain time(s) during the day. A very useful optimization technique when (for example) certain SQL Patterns can be cached but are time-sensitive. Example: "select customers whose birthday is today", an SQL with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight).
    - Parsing exceptions management: stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as "Dynamic Objects" with the highest transaction-safety settings (such as: full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration step in a deployment.
    - Overriding settings of stored procedures: override the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 has an "insert" into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a "logging or instrumentation" table left by developers. By overriding the settings, a user can allow caching of the problematic stored procedure.
    - Advanced cache warm-up: creating an XML-based list of queries and stored procedures (with lists of parameters) for periodic automated pre-fetching and caching. An advanced tool allowing you to handle rarer but very performance-sensitive queries, pre-fetching them into cache to allow high performance for users' data access.
    - Configuration driven by deep SQL analytics: all SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat-map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the most important settings that bring the highest performance gains. Using SafePeak SQL Analytics allows continuous performance monitoring and analysis, and easy identification of bottlenecks in both real-time and historical data.
    - Cloud ready: available for instant deployment on Amazon Web Services (AWS).

    As you can see, there are many options for configuring SafePeak's SQL Server database and application acceleration caching technology to best fit a lot of situations. If you're not familiar with their technology, they offer free-trial software you can download that comes with a free "help session" to help get you started. You can access the free trial here. Also, SafePeak is available to use on the Amazon cloud.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • SQL SERVER – Weekly Series – Memory Lane – #050

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    Executing Remote Stored Procedure – Calling Stored Procedure on Linked Server
    In this example we see two different methods of how to call stored procedures remotely.

    Connection Property of SQL Server Management Studio SSMS
    A very simple example of how to build connection properties for SQL Server with the help of SSMS.

    Sample Example of RANKING Functions – ROW_NUMBER, RANK, DENSE_RANK, NTILE
    SQL Server has a total of 4 ranking functions. Ranking functions return a ranking value for each row in a partition. All the ranking functions are non-deterministic.

    T-SQL Script to Add Clustered Primary Key
    A Jr. DBA asked me three times in a day how to create a clustered primary key. I gave him the following sample example. That was the last time he asked, "How to create a clustered primary key on a table?"

    2008

    TRIM() Function – User Defined Function
    SQL Server does not have a function which can trim leading and trailing spaces of a string at the same time. SQL does have LTRIM() and RTRIM(), which can trim leading and trailing spaces respectively. SQL Server 2008 also does not have a TRIM() function. Users can easily use LTRIM() and RTRIM() together to simulate TRIM() functionality.

    http://www.youtube.com/watch?v=1-hhApy6MHM

    2009

    Earlier I have written two different articles on the subject Remove Bookmark Lookup. This article is part 3 of the original article. Please read the first two articles before continuing with this one:

    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2
    Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3

    Interesting Observation – Query Hint – FORCE ORDER
    SQL Server never stops amazing me. As regular readers of this blog already know, besides conducting corporate training I work on large-scale query optimization and server tuning projects. In one of the recent projects, I noticed that a junior database developer used the query hint FORCE ORDER; when I asked for details, I found out that he had not properly understood the basic concept.

    Queries Waiting for Memory Allocation to Execute
    In one of the recent projects, I was asked to create a report of queries that are waiting for memory allocation. The reason was that we were doubtful whether the memory was sufficient for the application. The following query can be useful in similar cases. Queries that do not have to wait on a memory grant will not appear in the result set of the following query.

    2010

    Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
    As the title suggests, this is quite a dirty solution; it's not as elegant as you would expect. However, it works totally fine.

    Simple Explanation of Data Type Precedence
    While I was working on creating a question for SQL SERVER – SQL Quiz – The View, The Table and The Clustered Index Confusion, I had actually created yet another question along with it. However, I felt that the one posted on the SQL Quiz is much better than this one, because what makes that question more challenging is that it has multiple answers.

    Encrypted Stored Procedure and Activity Monitor
    I recently received a question: if a stored procedure is encrypted, can we see its definition in Activity Monitor? The answer is no. Let us do a quick test: create the following stored procedure, then launch the Activity Monitor and check the text.

    Indexed View always Use Index on Table
    A single table can have a maximum of 249 non-clustered indexes and 1 clustered index. In SQL Server 2008, a single table can have a maximum of 999 non-clustered indexes and 1 clustered index. It is widely believed that a table can have only 1 clustered index, and this belief is true. I have some questions for all of you. Let us assume that I am creating a view from the table itself and then creating a clustered index on it. In my view, I am selecting the complete table itself.

    2011

    Detecting Database Case Sensitive Property using fn_helpcollations()
    I received a question on how to determine the case sensitivity of a database. The quick answer is to identify the collation of the database and check the properties of the collation. I have previously written about how one can identify a database's collation. Once you have figured out the collation of the database, you can put it in the WHERE condition of the following T-SQL and then check the case sensitivity from the description.

    Server Side Paging in SQL Server CE (Compact Edition)
    SQL Server Denali is coming up with new T-SQL for paging. I have written about the same earlier: SQL SERVER – Server Side Paging in SQL Server Denali – A Better Alternative, SQL SERVER – Server Side Paging in SQL Server Denali Performance Comparison, SQL SERVER – Server Side Paging in SQL Server Denali – Part 2. What is very interesting is that SQL Server CE 4.0 has the same feature introduced. Here is a quick example of the same. To run the script in the example, you will have to install WebMatrix 4.0 and download the sample database; once done, you can run the following script.

    Why I am Going to Attend PASS Summit Unite 2011
    The four-day event will be marked by a lot of learning, sharing, and networking, which will help me increase both my knowledge and contacts. Every year, PASS Summit provides me a golden opportunity to build my network as well as to identify and meet potential customers or employees.

    2012

    Manage Help Settings – CTRL + ALT + F1
    This is a very interesting read, as my daughter once accidentally came across a screen in SQL Server Management Studio. It took me 2-3 minutes to figure out how she had created that screen.

    Recover the Accidentally Renamed Table
    "I accidentally renamed a table in my SSMS. I was scrolling very fast and I made a mistake. It was either because I double-clicked or clicked F2 (the shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this." If you have renamed the table, I think you pretty much are out of luck. Here are a few things you can do which can give you an idea about what your table name might have been, if you are lucky.

    Identify Numbers of Non Clustered Index on Tables for Entire Database
    Here is the script which will give you the number of non-clustered indexes on any table in the entire database.

    Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video
    Here is the complete script which I have used in the SQL in Sixty Seconds video. Thanks, Harsh, for the important tip in the comments.

    http://www.youtube.com/watch?v=3kDHC_Tjrns

    Advanced Data Quality Services with Melissa Data – Azure Data Market
    For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:

    SQLU part 1 - What and why of database testing
    SQLU part 2 - What and why of database refactoring
    SQLU part 3 - Tools of the trade

    This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.

    Why refactor a database

    To know why refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior by refactoring. If you haven't, go back and read my post on the matter - then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye - one more point in favor of having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them, and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black-box scenario to the outside world. If we upgrade the car engine, it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white-box tests will break when we refactor.

    What to refactor in a database

    Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects

    This covers adding, removing or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer, but each of them carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a truncate-table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black-box tests, we can make sure something like that does not happen (hopefully at all).

    Changing physical structures

    Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database, but the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders-of-magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

    Fixing bad code

    We all have some bad code in our systems. We usually refer to such code as a code smell, as it violates good coding practices. Examples of code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc. Each of those is a huge code smell and can result in major code changes. Take the SELECT * example: if we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Tomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it; it's informative. Refactoring this includes replacing the * with column names and, most likely, changes to the application using the database.

    Breaking apart huge stored procedures

    Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's part. Refactoring such behemoths requires writing lots of edge-case tests for the stored procedure's input/output behavior, and then starting to refactor it. First we split the logic inside into smaller parts, like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data-manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code; the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas on how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
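
    As a tiny T-SQL illustration of the SELECT * smell discussed above (my example, not from the original post; the view and column names are placeholders):

        -- before: the view's output silently changes when dbo.Customer changes
        CREATE VIEW dbo.vCustomer AS
        SELECT * FROM dbo.Customer;
        GO

        -- after: the view exposes an explicit, testable contract
        ALTER VIEW dbo.vCustomer AS
        SELECT CustomerId, Name, State, Zip
        FROM dbo.Customer;
        GO

        -- any SELECT * views that must remain can at least be re-bound after
        -- schema changes, which is the view refresh problem mentioned above
        EXEC sp_refreshview N'dbo.vCustomer';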
