Search Results

Search found 9299 results on 372 pages for 'policy and procedure manual'.

  • locked stored procedures in sql

    - by Greg
    Hi, I am not too familiar with SQL Server 2005. I have a schema in SQL Server whose stored procedures show up with a small lock icon on them. As I understand it, they were created from C#: each of these locked procedures has a C# source file containing the procedure's code. The thing is, I can't access them. I need to modify one of these procedures, but SQL Server won't let me edit it. I have the source code (from Visual Studio) for these procedures, but when I change something in the code, it doesn't affect the procedures in SQL Server. How can I change the path to the assembly in SQL Server 2005, or is there any other way I can access these stored procedures? Thanks in advance, Greg
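
    The symptoms described match SQL CLR stored procedures: their bodies live in a compiled .NET assembly, so editing the C# source has no effect until the rebuilt assembly is loaded back into the database. A minimal sketch of that redeployment, assuming the assembly is named MyClrProcs and the rebuilt DLL sits at a hypothetical path (both names are placeholders, not taken from the question):

    -- Find which assembly backs the locked procedures:
    SELECT a.name AS assembly_name, o.name AS procedure_name,
           am.assembly_class, am.assembly_method
    FROM sys.assembly_modules am
    JOIN sys.objects o ON o.object_id = am.object_id
    JOIN sys.assemblies a ON a.assembly_id = am.assembly_id;

    -- After rebuilding the C# project, load the new DLL over the old assembly:
    ALTER ASSEMBLY MyClrProcs
    FROM 'C:\builds\MyClrProcs.dll'        -- hypothetical path to the rebuilt DLL
    WITH PERMISSION_SET = SAFE;

    ALTER ASSEMBLY replaces the assembly in place, so the existing CLR procedures pick up the new code without being dropped and recreated.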

    Read the article

  • What's your release process for your commercial application?

    - by dr. evil
    If you are developing a commercial desktop application, what's your release process? Sample process: Develop it: patch bugs, add features, etc. Feature freeze (do not fix or add anything unless it's absolutely required). Test it. If everything is OK, release it; if it's not, fix it, test it, and release it. I think the most crucial question is: what's your approach to the "feature freeze, test, release" cycle? Or do you test so frequently that you don't need such a cycle and your software is always ready for public release?

    Read the article

  • Searching a column containing CSV data in a MySQL table for existence of input values

    - by Adarsh R
    Hi, I have a table, say ITEM, in MySQL that stores data as follows: ID FEATURES -------------------- 1 AB,CD,EF,XY 2 PQ,AC,A3,B3 3 AB,CDE 4 AB1,BC3 -------------------- As input, I will get a CSV string, something like "AB,PQ". I want to get the records that contain AB or PQ. I realized that we have to write a MySQL function to achieve this. So, if we had this magical function MATCH_ANY defined in MySQL, I would then simply execute SQL as follows: select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 The above query would return records 1, 2 and 3. But I'm running into all sorts of problems while implementing this function, as I realized that MySQL doesn't support arrays and there's no simple way to split strings based on a delimiter. Remodeling the table is the last option for me as it involves a lot of issues. I might also want to execute queries containing multiple MATCH_ANY functions such as: select * from ITEM where MATCH_ANY(FEATURES, "AB,PQ") = 0 and MATCH_ANY(FEATURES, "CDE") In the above case, we would get the intersection of records (1, 2, 3) and (3), which would be just 3. Any help is deeply appreciated. Thanks
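
    MySQL's built-in FIND_IN_SET() already checks whether a single value appears in a comma-separated string, so the hypothetical MATCH_ANY can often be approximated without writing a custom function, as long as the application splits the input CSV into one FIND_IN_SET call per value. A sketch against the table above:

    -- Records whose FEATURES list contains AB or PQ (returns rows 1, 2 and 3):
    SELECT * FROM ITEM
    WHERE FIND_IN_SET('AB', FEATURES) > 0
       OR FIND_IN_SET('PQ', FEATURES) > 0;

    -- Combining conditions, like the second example (returns only row 3):
    SELECT * FROM ITEM
    WHERE (FIND_IN_SET('AB', FEATURES) > 0 OR FIND_IN_SET('PQ', FEATURES) > 0)
      AND FIND_IN_SET('CDE', FEATURES) > 0;

    Note that FIND_IN_SET matches whole list elements (so 'AB' does not match 'AB1'), but like any function applied to the column it cannot use an index, which is the usual argument for normalizing the FEATURES column into a child table.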

    Read the article

  • Teamcity NTLM Authentication change - admin user lost in transition

    - by hawkeye
    I've switched in teamcity from using basic authentication to using NTLM, on an existing installation. This works fine, except that the admin user didn't have a corresponding NT account, and so doesn't work on the NTLM configuration. (It is easy to roll back, so it is not a stress). My question is - what is the command to set a user to admin manually - ie modifying the database? (like this: TeamCity forgotten admin password - where to look?) but changing the role of a user to global system administrator.

    Read the article

  • Oracle manually add an FK constraint

    - by Oxymoron
    Alright, since a client wants to automate a certain process, which includes creating a new key structure in a LIVE database, I need to create relations between table columns. Now, I've found that the views ALL_CONS_COLUMNS and USER_CONSTRAINTS hold information about constraints. If I were to manually create constraints by inserting into these tables, I should be able to recreate the original constraints. My question: are there any more tables I should look into? Do you have any alternative suggestions, as this sounds VERY dirty and error prone to begin with? Current modus operandi: Create a new column in each table for the PK; Generate a guid for this PK; Create a new column in each table for the FKs; Fetch the guid associated with the FK; ....... done so far...... Add new constraint based on the old one; Remove old constraint; Rename new columns; This is kind of dodgy and I'd rather change my method; any ideas would be helpful. To put it differently, the client wants to change the key structure from int to guid on a live database. What's the best way to approach this?
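
    ALL_CONS_COLUMNS and USER_CONSTRAINTS are read-only data dictionary views, so constraints cannot be recreated by inserting rows into them; the supported route is plain DDL against the tables themselves. A minimal sketch of the new-key step, with hypothetical table and column names standing in for the real schema:

    -- A table can have only one primary key, so while the old int key still
    -- exists the new GUID key starts life as a unique constraint, which foreign
    -- keys may reference:
    ALTER TABLE parent_tab
      ADD CONSTRAINT parent_tab_guid_uk UNIQUE (guid_id);

    ALTER TABLE child_tab
      ADD CONSTRAINT child_tab_parent_fk
      FOREIGN KEY (parent_guid_id) REFERENCES parent_tab (guid_id);

    -- Once the new keys are in place, drop the old constraints by name
    -- (child first, then parent):
    ALTER TABLE child_tab DROP CONSTRAINT old_child_parent_fk;
    ALTER TABLE parent_tab DROP CONSTRAINT old_parent_pk;

    The existing constraint names to drop can be read from USER_CONSTRAINTS, which is where those views are genuinely useful: query the dictionary for reading, use ALTER TABLE for writing, and keep the live migration out of the dictionary tables entirely.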

    Read the article

  • MySQL: How do I combine a Stored procedure with another Function?

    - by Laxmidi
    Hi I need some help in combining a stored procedure with another function. I've got a stored procedure that pulls latitudes and longitudes from a database. I've got another function that checks whether a point is inside a polygon. My goal is to combine the two functions, so that I can check whether the latitude and longitude points pulled from the db are inside a specific area. This stored procedure pulls latitude and longitudes from the database based on offense: DROP PROCEDURE IF EXISTS latlongGrabber; DELIMITER $$ CREATE PROCEDURE latlongGrabber(IN offense_in VARCHAR(255)) BEGIN DECLARE latitude_val VARCHAR(255); DECLARE longitude_val VARCHAR(255); DECLARE no_more_rows BOOLEAN; DECLARE latlongGrabber_cur CURSOR FOR SELECT latitude, longitude FROM myTable WHERE offense = offense_in; DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_rows = TRUE; OPEN latlongGrabber_cur; the_loop: LOOP FETCH latlongGrabber_cur INTO latitude_val, longitude_val; IF no_more_rows THEN CLOSE latlongGrabber_cur; LEAVE the_loop; END IF; SELECT latitude_val, longitude_val; END LOOP the_loop; END $$ DELIMITER ; This function checks whether a point is inside a polygon. I'd like the function to test the points produced by the procedure. I can hard-code the polygon for now. (Once, I know how to combine these two functions, I'll use the same pattern to pull the polygons from the database). DROP FUNCTION IF EXISTS myWithin; DELIMITER $$ CREATE FUNCTION myWithin(p POINT, poly POLYGON) RETURNS INT(1) DETERMINISTIC BEGIN DECLARE n INT DEFAULT 0; DECLARE pX DECIMAL(9,6); DECLARE pY DECIMAL(9,6); DECLARE ls LINESTRING; DECLARE poly1 POINT; DECLARE poly1X DECIMAL(9,6); DECLARE poly1Y DECIMAL(9,6); DECLARE poly2 POINT; DECLARE poly2X DECIMAL(9,6); DECLARE poly2Y DECIMAL(9,6); DECLARE i INT DEFAULT 0; DECLARE result INT(1) DEFAULT 0; SET pX = X(p); SET pY = Y(p); SET ls = ExteriorRing(poly); SET poly2 = EndPoint(ls); SET poly2X = X(poly2); SET poly2Y = Y(poly2); SET n = NumPoints(ls); WHILE i<n DO SET poly1 = PointN(ls, (i+1)); SET poly1X = X(poly1); SET poly1Y = Y(poly1); IF ( ( ( ( poly1X <= pX ) && ( pX < poly2X ) ) || ( ( poly2X <= pX ) && ( pX < poly1X ) ) ) && ( pY > ( poly2Y - poly1Y ) * ( pX - poly1X ) / ( poly2X - poly1X ) + poly1Y ) ) THEN SET result = !result; END IF; SET poly2X = poly1X; SET poly2Y = poly1Y; SET i = i + 1; END WHILE; RETURN result; End $$ DELIMITER ; This function is called as follows: SET @point = PointFromText('POINT(5 5)') ; SET @polygon = PolyFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))'); SELECT myWithin(@point, @polygon) AS result I've tested the stored procedure and the function and they work well. I just have to figure out how to combine them. I'd like to call the procedure with the offense parameter and have it test all of the latitudes and longitudes pulled from the database to see whether they are inside or outside of the polygon. Any advice or suggestions? Thank you. -Laxmidi
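
    One way to wire the two together (a sketch, not taken from the original post; the procedure name latlongChecker and the CONCAT/PointFromText construction are additions) is to build a POINT from each fetched row inside the cursor loop and pass it to myWithin, with the polygon hard-coded exactly as in the question. Whether latitude should be the X or the Y coordinate depends on how the polygons are stored:

    DROP PROCEDURE IF EXISTS latlongChecker;
    DELIMITER $$
    CREATE PROCEDURE latlongChecker(IN offense_in VARCHAR(255))
    BEGIN
      DECLARE latitude_val VARCHAR(255);
      DECLARE longitude_val VARCHAR(255);
      DECLARE no_more_rows BOOLEAN;
      DECLARE latlong_cur CURSOR FOR
        SELECT latitude, longitude FROM myTable WHERE offense = offense_in;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_rows = TRUE;
      -- polygon hard-coded for now, as in the question:
      SET @polygon = PolyFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))');
      OPEN latlong_cur;
      the_loop: LOOP
        FETCH latlong_cur INTO latitude_val, longitude_val;
        IF no_more_rows THEN
          CLOSE latlong_cur;
          LEAVE the_loop;
        END IF;
        -- one row per point, flagged inside (1) or outside (0) the polygon:
        SELECT latitude_val, longitude_val,
               myWithin(PointFromText(CONCAT('POINT(', latitude_val, ' ', longitude_val, ')')),
                        @polygon) AS is_inside;
      END LOOP the_loop;
    END $$
    DELIMITER ;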

    Read the article

  • Sending one record from cursor to another function Postgres

    - by PylonsN00b
    FYI: I am completely new to using cursors... So I have one function that is a cursor: CREATE FUNCTION get_all_product_promos(refcursor, cursor_object_id integer) RETURNS refcursor AS ' BEGIN OPEN $1 FOR SELECT * FROM promos prom1 JOIN promo_objects ON (prom1.promo_id = promo_objects.promotion_id) WHERE prom1.active = true AND now() BETWEEN prom1.start_date AND prom1.end_date AND promo_objects.object_id = cursor_object_id UNION SELECT prom2.promo_id FROM promos prom2 JOIN promo_buy_objects ON (prom2.promo_id = promo_buy_objects.promo_id) LEFT JOIN promo_get_objects ON prom2.promo_id = promo_get_objects.promo_id WHERE (prom2.buy_quantity IS NOT NULL OR prom2.buy_quantity > 0) AND prom2.active = true AND now() BETWEEN prom2.start_date AND prom2.end_date AND promo_buy_objects.object_id = cursor_object_id; RETURN $1; END; ' LANGUAGE plpgsql; SO then in another function I call it and need to process it: ... --Get the promotions from the cursor SELECT get_all_product_promos('promo_cursor', this_object_id) updated := FALSE; IF FOUND THEN --Then loop through your results LOOP FETCH promo_cursor into this_promotion --Preform comparison logic -this is necessary as this logic is used in other contexts from other functions SELECT * INTO best_promo_results FROM get_best_product_promos(this_promotion, this_object_id, get_free_promotion, get_free_promotion_value, current_promotion_value, current_promotion); ... SO the idea here is to select from the cursor, loop using fetch (next is assumed correct?) and put the record fetched into this_promotion. Then send the record in this_promotion to another function. I can't figure out what to declare the type of this_promotion in get_best_product_promos. Here is what I have: CREATE OR REPLACE FUNCTION get_best_product_promos(this_promotion record, this_object_id integer, get_free_promotion integer, get_free_promotion_value numeric(10,2), current_promotion_value numeric(10,2), current_promotion integer) RETURNS... It tells me: ERROR: plpgsql functions cannot take type record OK first I tried: CREATE OR REPLACE FUNCTION get_best_product_promos(this_promotion get_all_product_promos, this_object_id integer, get_free_promotion integer, get_free_promotion_value numeric(10,2), current_promotion_value numeric(10,2), current_promotion integer) RETURNS... Because I saw some syntax in the Postgres docs showed a function being created w/ a input parameter that had a type 'tablename' this works, but it has to be a tablename not a function :( I know I am so close, I was told to use cursors to pass records around. So I studied up. Please help.
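
    PL/pgSQL will not accept record as a function argument type, but it does accept any named composite type, including a table's row type. One workaround, sketched here rather than taken from the post, is to declare an explicit composite type that mirrors the columns the cursor actually returns (the column list below is hypothetical, and the two branches of the UNION must return the same columns for this to work), then use that type both for the FETCH target and for the parameter:

    -- Composite type matching the cursor's SELECT list (hypothetical columns):
    CREATE TYPE promo_row AS (
        promo_id     integer,
        buy_quantity integer,
        start_date   timestamp,
        end_date     timestamp,
        object_id    integer
    );

    CREATE OR REPLACE FUNCTION get_best_product_promos(
        this_promotion           promo_row,
        this_object_id           integer,
        get_free_promotion       integer,
        get_free_promotion_value numeric(10,2),
        current_promotion_value  numeric(10,2),
        current_promotion        integer
    ) RETURNS integer AS $$
    BEGIN
        -- comparison logic goes here; fields are reached as this_promotion.promo_id etc.
        RETURN this_promotion.promo_id;
    END;
    $$ LANGUAGE plpgsql;

    In the calling function the loop variable is then declared as promo_row as well (DECLARE this_promotion promo_row;), so FETCH promo_cursor INTO this_promotion and the call to get_best_product_promos(this_promotion, ...) line up on the same type.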

    Read the article

  • Using Table-Valued Parameters in SQL Server

    - by Jesse
    I work with stored procedures in SQL Server pretty frequently and have often found myself with a need to pass in a list of values at run-time. Quite often this list contains a set of ids on which the stored procedure needs to operate the size and contents of which are not known at design time. In the past I’ve taken the collection of ids (which are usually integers), converted them to a string representation where each value is separated by a comma and passed that string into a VARCHAR parameter of a stored procedure. The body of the stored procedure would then need to parse that string into a table variable which could be easily consumed with set-based logic within the rest of the stored procedure. This approach works pretty well but the VARCHAR variable has always felt like an un-wanted “middle man” in this scenario. Of course, I could use a BULK INSERT operation to load the list of ids into a temporary table that the stored procedure could use, but that approach seems heavy-handed in situations where the list of values is usually going to contain only a few dozen values. Fortunately SQL Server 2008 introduced the concept of table-valued parameters which effectively eliminates the need for the clumsy middle man VARCHAR parameter. Example: Customer Transaction Summary Report Let’s say we have a report that can summarize the the transactions that we’ve conducted with customers over a period of time. The report returns a pretty simple dataset containing one row per customer with some key metrics about how much business that customer has conducted over the date range for which the report is being run. Sometimes the report is run for a single customer, sometimes it’s run for all customers, and sometimes it’s run for a handful of customers (i.e. a salesman runs it for the customers that fall into his sales territory). This report can be invoked from a website on-demand, or it can be scheduled for periodic delivery to certain users via SQL Server Reporting Services. Because the report can be created from different places and the query to generate the report is complex it’s been packed into a stored procedure that accepts three parameters: @startDate – The beginning of the date range for which the report should be run. @endDate – The end of the date range for which the report should be run. @customerIds – The customer Ids for which the report should be run. Obviously, the @startDate and @endDate parameters are DATETIME variables. The @customerIds parameter, however, needs to contain a list of the identity values (primary key) from the Customers table representing the customers that were selected for this particular run of the report. In prior versions of SQL Server we might have made this parameter a VARCHAR variable, but with SQL Server 2008 we can make it into a table-valued parameter. Defining And Using The Table Type In order to use a table-valued parameter, we first need to tell SQL Server about what the table will look like. We do this by creating a user defined type. For the purposes of this stored procedure we need a very simple type to model a table variable with a single integer column. We can create a generic type called ‘IntegerListTableType’ like this: CREATE TYPE IntegerListTableType AS TABLE (Value INT NOT NULL) Once defined, we can use this new type to define the @customerIds parameter in the signature of our stored procedure. 
The parameter list for the stored procedure definition might look like:

    CREATE PROCEDURE dbo.rpt_CustomerTransactionSummary
        @startDate datetime,
        @endDate datetime,
        @customerIds IntegerListTableType READONLY

    Note the 'READONLY' keyword following the declaration of the @customerIds parameter. SQL Server requires that any table-valued parameter be marked as 'READONLY', and no DML (INSERT/UPDATE/DELETE) statements can be performed on a table-valued parameter within the routine in which it's used. Aside from the DML restriction, however, you can do pretty much anything with a table-valued parameter that you could with a normal TABLE variable. With the user defined type and stored procedure defined as above, we could invoke it like this:

    DECLARE @customerIdList IntegerListTableType
    INSERT @customerIdList VALUES (1)
    INSERT @customerIdList VALUES (2)
    INSERT @customerIdList VALUES (3)

    EXEC dbo.rpt_CustomerTransactionSummary
        @startDate = '2012-05-01',
        @endDate = '2012-06-01',
        @customerIds = @customerIdList

    Note that we can simply declare a variable of type 'IntegerListTableType' just like any other normal variable and insert values into it just like a TABLE variable. We could also populate the variable with a SELECT … INTO or INSERT … SELECT statement if desired. Using The Table-Valued Parameter With ADO.NET Invoking a stored procedure with a table-valued parameter from ADO.NET is as simple as building a DataTable and passing it in as the Value of a SqlParameter. Here's some example code for how we would construct the SqlParameter for the @customerIds parameter in our stored procedure:

    var customerIdsParameter = new SqlParameter();
    customerIdsParameter.Direction = ParameterDirection.Input;
    customerIdsParameter.TypeName = "IntegerListTableType";
    customerIdsParameter.Value = selectedCustomerIds.ToIntegerListDataTable("Value");

    All we're doing here is new'ing up an instance of SqlParameter, setting the parameter's direction, specifying the name of the User Defined Type that this parameter uses, and setting its value. We're assuming here that we have an IEnumerable<int> variable called 'selectedCustomerIds' containing all of the customer Ids for which the report should be run. The 'ToIntegerListDataTable' method is an extension method of the IEnumerable<int> type that looks like this:

    public static DataTable ToIntegerListDataTable(this IEnumerable<int> intValues, string columnName)
    {
        var integerListDataTable = new DataTable();
        integerListDataTable.Columns.Add(columnName);
        foreach(var intValue in intValues)
        {
            var nextRow = integerListDataTable.NewRow();
            nextRow[columnName] = intValue;
            integerListDataTable.Rows.Add(nextRow);
        }

        return integerListDataTable;
    }

    Since the 'IntegerListTableType' has a single int column called 'Value', we pass that in for the 'columnName' parameter to the extension method. The method creates a new single-columned DataTable using the provided column name, then iterates over the items in the IEnumerable<int> instance, adding one row for each value. We can then use this SqlParameter instance when invoking the stored procedure just like we would use any other parameter. Advanced Functionality Passing a list of integers into a stored procedure is a very simple usage scenario for the table-valued parameters feature, but I've found that it covers the majority of situations where I've needed to pass a collection of data for use in a query at run-time. 
I should note that the BULK INSERT feature still makes sense for passing large amounts of data to SQL Server for processing. MSDN seems to suggest that 1000 rows of data is the tipping point where the overhead of a BULK INSERT operation can pay dividends. I should also note here that table-valued parameters can be used to deal with more complex data structures than single-columned tables of integers. A User Defined Type that backs a table-valued parameter can use things like identities and computed columns. That said, using some of these more advanced features might require the use of the SqlDataRecord and SqlMetaData classes instead of a simple DataTable. Erland Sommarskog has a great article on his website that describes when and how to use these classes for table-valued parameters. What About Reporting Services? Earlier in the post I referenced the fact that our example stored procedure would be called from both a web application and a SQL Server Reporting Services report. Unfortunately, using table-valued parameters from SSRS reports can be a bit tricky and warrants its own blog post, which I'll be putting together and posting sometime in the near future.

    Read the article

  • What would a start-to-finish development procedure look like?

    - by Tom Busby
    I have a problem that my developer friends share. We recently left university and find that we either end up working for a firm which already has good procedures (TDD, automated testing, proper agile development, etc.) or working for a firm which doesn't. I want to learn some of these vital skills and get a grip on what a complete start-to-finish development procedure would look like. What differences would there be between a smaller project and a long-term project with many team members?

    Read the article

  • Windows 7 accused of patent infringement; the suit was brought by the same law firm as in the Word case

    Windows 7 taken to court for patent infringement. The suit was brought by the same law firm as in the Word case. The Dallas law firm McKool Smith seems to have specialized in proceedings against Microsoft. After successfully defending i4i in the Word case, it is now VirnetX's turn to call on its services. This is not VirnetX's first lawsuit against Microsoft. It has in fact just won a case over the infringement of one of its patents used in...

    Read the article

  • Using stored procedure to call multiple packages at the same time from SSIS Catalog (SSISDB.catalog.start_execution) resulted in deadlock

    - by Kevin Shyr
    Refer to my previous post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx) about dynamic package calling and executing multiple packages. I only saw this deadlock twice; the other times the stored procedure was able to call the packages successfully.  After the service pack, I haven't seen it...yet. http://support.microsoft.com/kb/2699720
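
    For reference, the SSISDB pattern the title refers to looks roughly like the sketch below (the folder, project and package names are hypothetical placeholders); the deadlock was observed when one stored procedure created and started several such executions back to back so that the packages ran concurrently:

    DECLARE @execution_id BIGINT;

    -- Register an execution of one package deployed to the SSIS catalog:
    EXEC SSISDB.catalog.create_execution
         @folder_name      = N'ETL',
         @project_name     = N'NightlyLoad',
         @package_name     = N'LoadCustomers.dtsx',
         @reference_id     = NULL,
         @use32bitruntime  = 0,
         @execution_id     = @execution_id OUTPUT;

    -- Optionally make the call synchronous so the caller waits for the package:
    EXEC SSISDB.catalog.set_execution_parameter_value
         @execution_id    = @execution_id,
         @object_type     = 50,            -- 50 = system parameter
         @parameter_name  = N'SYNCHRONIZED',
         @parameter_value = 1;

    -- Kick it off:
    EXEC SSISDB.catalog.start_execution @execution_id;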

    Read the article

  • Google Web Toolkit Deferred Binding Issue

    - by snctln
    I developed a web app using GWT about 2 years ago, since then the application has evolved. In its current state it relies on fetching a single XML file and parsing the information from it. Overall this works great. A requirement of this app is that it needs to be able to be ran from the filesystem (file:///..) as well as the traditional model of running from a webserver (http://...) Fetching this file from a webserver works exactly as expected using a RequestBuilder object. When running the app from the filesystem Firefox, Opera, Safari, and Chrome all behave as expected. When running the app from the filesystem using IE7 or IE8 the RequestBuilder.send() call fails, the information about the error suggests that there is a problem accessing the file due to violating the same origin policy. The app worked as expected in IE6 but not in IE7 or IE8. So I looked at the source code of RequestBuilder.java and saw that the actual request was being executed with an XMLHttpRequest GWT object. So I looked at the source code for XMLHttpRequest.java and found out some information. Here is the code (starts at line 83 in XMLHttpRequest.java) public static native XMLHttpRequest create() /*-{ if ($wnd.XMLHttpRequest) { return new XMLHttpRequest(); } else { try { return new ActiveXObject('MSXML2.XMLHTTP.3.0'); } catch (e) { return new ActiveXObject("Microsoft.XMLHTTP"); } } }-*/; So basically if an XMLHttpRequest cannot be created (like in IE6 because it is not available) an ActiveXObject is used instead. I read up a little bit more on the IE implementation of XMLHttpRequest, and it appears that it is only supported for interacting with files on a webserver. I found a setting in IE8 (Tools-Internet Options-Advanced-Security-Enable native XMLHTTP support), when I uncheck this box my app works. I assume this is because I am more of less telling IE to not use their implementation of XmlHttpRequest, so GWT just uses an ActiveXObject because it doesn't think the native XmlHttpRequest is available. This fixes the problem, but is hardly a long term solution. I can currently catch a failed send request and verify that it was trying to fetch the XML file from the filesystem using normal GWT. What I would like to do in this case is catch the IE7 and IE8 case and have them use a ActiveXObject instead of a native XmlHttpRequest object. There was a posting on the GWT google group that had a supposed solution for this problem (link). Looking at it I can tell that it was created for an older version of GWT. I am using the latest release and think that this is more or less what I would like to do (use GWT deferred binding to detect a specific browser type and run my own implementation of XMLHttpRequest.java in place of the built in GWT implementation). 
Here is the code that I am trying to use package com.mycompany.myapp.client; import com.google.gwt.xhr.client.XMLHttpRequest; public class XMLHttpRequestIE7or8 extends XMLHttpRequest { // commented out the "override" so that eclipse and the ant build script don't throw errors //@Override public static native XMLHttpRequest create() /*-{ try { return new ActiveXObject('MSXML2.XMLHTTP.3.0'); } catch (e) { return new ActiveXObject("Microsoft.XMLHTTP"); } }-*/; // have an empty protected constructor so the ant build script doesn't throw errors // the actual XMLHttpRequest constructor is empty as well so this shouldn't cause any problems protected XMLHttpRequestIE7or8() { } }; And here are the lines that I added to my module xml <replace-with class="com.mycompany.myapp.client.XMLHttpRequestIE7or8"> <when-type-is class="com.google.gwt.xhr.client.XMLHttpRequest"/> <any> <when-property-is name="user.agent" value="ie7" /> <when-property-is name="user.agent" value="ie8" /> </any> </replace-with> From what I can tell this should work, but my code never runs. Does anyone have any idea of what I am doing wrong? Should I not do this via deferred binding and just use native javascript when I catch the fail case instead? Is there a different way of approaching this problem that I have not mentioned? All replies are welcome.

    Read the article

  • how to design this relation in a DB schema

    - by raticulin
    I have a table Car in my db; one of the columns is purchaseDate. I want to be able to tag every car with a number of Policies (limited to 10 policies). Each policy has a time to live (ttl, a duration of time, like '5 years', '10 months' etc.), that is, for how long since the car's purchaseDate the policy can be applied. I need to perform the following actions: when inserting a Car, it will be set with a number of Policies (at least one is set); sometimes a Car will be updated to add/remove a Policy; searches must be done taking into account date/policies, for example: 'select all cars that are not covered by any policy as of today'. My current design is (pol0..pol9 are the policies): CREATE TABLE Car ( id int NOT NULL IDENTITY(1,1), purchaseDate datetime NOT NULL, /* more stuff... */ pol0 smallint default NULL, pol1 smallint default NULL, pol2 smallint default NULL, pol3 smallint default NULL, pol4 smallint default NULL, pol5 smallint default NULL, pol6 smallint default NULL, pol7 smallint default NULL, pol8 smallint default NULL, pol9 smallint default NULL, PRIMARY KEY (id) ) CREATE TABLE Policy ( id smallint NOT NULL, name varchar(50) collate Latin1_General_BIN NOT NULL, ttl varchar(100) collate Latin1_General_BIN NOT NULL, PRIMARY KEY (id) ) The problem I am facing is that the SQL to perform the query above is a nightmare to write. Since I don't know which column a given policy might be in, I have to check all columns for every policy, etc. So I am wondering whether it is worth changing this. My questions are: The smallint as Policy id was chosen instead of an 'int IDENTITY' in order to save some space, as there are going to be millions of Car records. It just adds complexity when creating a Policy as we must handle the id etc. Was it worth doing this? I am thinking that maybe there is a much better design? Obviously we could move the policy/car relation to its own table CarPolicy; the benefits would be: no limit of 10 policies per car; adding/removing etc. much easier; when only the default policy is applied (when no others are applied, one called Default policy is applied), we could signal that by not having any entry in CarPolicy, whereas now this is done by inserting the Default policy id in one of the columns. The cons are that we would need to change the DB, ORM classes etc. What would you recommend? Maybe there is another smart way to implement this that we are not aware of, without using the CarPolicy table?
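
    A normalized sketch of the CarPolicy approach mentioned above (table and column names are illustrative, and the ttl is stored as a number of months rather than free text so the date arithmetic can be done in SQL):

    CREATE TABLE CarPolicy (
        carId    int      NOT NULL REFERENCES Car(id),
        policyId smallint NOT NULL REFERENCES Policy(id),
        PRIMARY KEY (carId, policyId)   -- a car can hold each policy at most once
    );

    -- With a hypothetical Policy.ttlMonths integer column, 'all cars that are not
    -- covered by any policy as of today' becomes a NOT EXISTS over the junction table:
    SELECT c.id
    FROM Car c
    WHERE NOT EXISTS (
        SELECT 1
        FROM CarPolicy cp
        JOIN Policy p ON p.id = cp.policyId
        WHERE cp.carId = c.id
          AND DATEADD(month, p.ttlMonths, c.purchaseDate) >= GETDATE()
    );

    The 'no rows means the Default policy' convention still works, because a car with no CarPolicy entries simply falls out of the NOT EXISTS test; whether such a car should count as covered is a business rule to settle before writing the query.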

    Read the article

  • 10 Steps to access Oracle stored procedures from Crystal Reports

    Requirements to access Oracle stored procedures from CR
    The following requirements must be met in order for CR to access an Oracle stored procedure:
    1. You must create a package that defines the REF CURSOR. This REF CURSOR must be strongly bound to a static pre-defined structure (see Strongly Bound REF CURSORs vs Weakly Bound REF CURSORs). This package must be created separately and before the creation of the stored procedure. NOTE: Crystal Reports 9 native connections will support Oracle stored procedures created within packages as well as Oracle stored procedures referencing weakly bound REF CURSORs. Crystal Reports 8.5 native connections will support Oracle stored procedures referencing weakly bound REF CURSORs.
    2. The procedure must have a parameter that is a REF CURSOR type. This is because CR uses this parameter to access and define the result set that the stored procedure returns.
    3. The REF CURSOR parameter must be defined as IN OUT (read/write mode). After the procedure has opened and assigned a query to the REF CURSOR, CR will perform a FETCH call for every row from the query's result. This is why the parameter must be defined as IN OUT.
    4. Parameters can only be input (IN) parameters. CR is not designed to work with OUT parameters.
    5. The REF CURSOR variable must be opened and assigned its query within the procedure.
    6. The stored procedure can only return one record set. The structure of this record set must not change based on parameters.
    7. The stored procedure cannot call another stored procedure.
    8. If using an ODBC driver, it must be the CR Oracle ODBC driver (installed by CR). Other Oracle ODBC drivers (installed by Microsoft or Oracle) may not function correctly.
    9. If you are using the CR ODBC driver, you must ensure that in the ODBC Driver Configuration setup, under the Advanced tab, the option 'Procedure Return Results' is checked ON.
    10. If you are using the native Oracle driver and using hard-coded date selection within the procedure, the date selection must use either a string representation in the format 'YYYY-MM-DD' (i.e. WHERE DATEFIELD = '1999-01-01') or the TO_DATE function with the same format specified (i.e. WHERE DATEFIELD = TO_DATE('1999-01-01','YYYY-MM-DD')). For more information, refer to kbase article C2008023.
    11. Most importantly, this stored procedure must execute successfully in Oracle's SQL*Plus utility.
    If all of these conditions are met, you must next ensure you are using the appropriate database driver. Please refer to the sections in this white paper for a list of acceptable database drivers.
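
    For illustration, a minimal package and procedure that satisfy requirements 1 through 5 might look like the sketch below (the orders table and every name in it are hypothetical, not taken from the article):

    -- Package that defines a strongly bound REF CURSOR (requirement 1):
    CREATE OR REPLACE PACKAGE report_pkg AS
      TYPE order_rec IS RECORD (
        order_id   orders.order_id%TYPE,
        order_date orders.order_date%TYPE,
        total      orders.total%TYPE
      );
      TYPE order_cur IS REF CURSOR RETURN order_rec;
    END report_pkg;
    /

    -- Stand-alone procedure with the REF CURSOR as an IN OUT parameter and only
    -- IN parameters otherwise (requirements 2-5); the cursor is opened inside:
    CREATE OR REPLACE PROCEDURE get_orders (
      p_cursor IN OUT report_pkg.order_cur,
      p_from   IN DATE
    ) AS
    BEGIN
      OPEN p_cursor FOR
        SELECT order_id, order_date, total
        FROM   orders
        WHERE  order_date >= p_from;
    END get_orders;
    /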

    Read the article

  • Advantages of Hudson and Sonar over manual process or homegrown scripts.

    - by Tom G
    My coworker and I recently got into a debate over a proposed plan at our workplace. We've more or less finished transitioning our Java codebase into one managed and built with Maven. Now, I'd like for us to integrate with Hudson and Sonar or something similar. My reasons for this are that it'll provide a 'zero-click' build step to provide testers with new experimental builds, that it will let us deploy applications to a server more easily, that tools such as Sonar will provide us with well-needed metrics on code coverage, Javadoc, package dependencies and the like. He thinks that the overhead of getting up to speed with two new frameworks is unacceptable, and that we should simply double down on documentation and create our own scripts for deployment. Since we plan on some aggressive rewrites to pay down the technical debt previous developers incurred (gratuitous use of Java's Serializable interface as a file storage mechanism that has predictably bit us in the ass) he argues that we can document as we go, and that we'll end up changing a large swath of code in the process anyways. I contend that having accurate metrics that Sonar (or fill in your favorite similar tool) provide gives us a good place to start for any refactoring efforts, not to mention general maintenance -- after all, knowing which classes are the most poorly documented, even if it's just a starting point, is better than seat-of-the-pants guessing. Am I wrong, and trying to introduce more overhead than we really need? Some more background: an alumni of our company is working at a Navy research lab now and suggested these two tools in particular as one they've had great success with using. My coworker and I have also had our share of friendly disagreements before -- he's more of the "CLI for all, compiles Gentoo in his spare time and uses Git" and I'm more of a "Give me an intuitive GUI, plays with XNA and is fine with SVN" type, so there's definitely some element of culture clash here.

    Read the article

  • IPsec Policy Agent flip-flopping demand start/auto start in Windows Server 2008?

    - by Steve Wortham
    Looking through the event logs on my web server I noticed a strange pattern. The following events have been occurring over and over again, always back to back: The start type of the IPsec Policy Agent service was changed from demand start to auto start. The start type of the IPsec Policy Agent service was changed from auto start to demand start. Each one produces event id 7040 from the Service Control Manager. And sometimes this will happen 20 times in one minute. Any idea what would cause this? I've been trying to pinpoint an intermittent performance problem for the past several days and this is the most peculiar thing I've found so far. I'm running Windows Server 2008, SQL Server 2008, and ASP.NET 3.5 w/ MVC 1.

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence); this is not true if you have fewer than 6 updates to the temporary table. This might lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid this issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching, which is done by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution is to have a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables. 
The script to understand the issue and the workaround is listed below:

    set nocount on
    set statistics time off
    set statistics io off
    drop table tab7
    go
    create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
    go
    create index test on tab7(c2, c1, c3)
    go
    begin tran
    declare @i int
    set @i = 1
    while @i <= 50000
    begin
    insert into tab7 values (@i, 1, 'a')
    set @i = @i + 1
    end
    commit tran
    go
    insert into tab7 values (50001, 1, 'a')
    go
    checkpoint
    go
    drop proc test_slow
    go
    create proc test_slow @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_slow 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    exec test_slow 2
    go
    drop proc test_with_recompile
    go
    create proc test_with_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --high reads that are not expected for parameter '2'
    --low reads on 3rd execution as expected for parameter '2'
    exec test_with_recompile 2
    go
    drop proc test_with_alter_table_recompile
    go
    create proc test_with_alter_table_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create a constraint
    --but this might lead to duplicate constraint name error on concurrent usage
    alter table #temp1 add constraint test123 unique(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    dbcc dropcleanbuffers
    set statistics time on
    set statistics io on
    go
    --high reads as expected for parameter '1'
    exec test_with_alter_table_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_alter_table_recompile 2
    go
    drop proc test_with_index_recompile
    go
    create proc test_with_index_recompile @i int
    as
    begin
    declare @j int
    create table #temp1 (c1 int primary key)
    --to avoid caching of temporary tables one can create an index
    create index test on #temp1(c1)
    insert into #temp1 (c1) select @i
    select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
    option (recompile)
    end
    go
    set statistics time on
    set statistics io on
    dbcc dropcleanbuffers
    go
    --high reads as expected for parameter '1'
    exec test_with_index_recompile 1
    go
    dbcc dropcleanbuffers
    go
    --low reads as expected for parameter '2'
    exec test_with_index_recompile 2
    go

    Read the article

  • How Can I Override the Remote Administrator security policy on Android 2.2 so that I can disable the lock screen?

    - by hagope
    On Android 2.2 Froyo, I added my corporate Exchange email account to the phone; however, the security policy set by the "remote administrator" requires that I enter a 4-digit PIN at the lock screen and allows a maximum of 10 seconds idle. How can I hack my Android, through root access or otherwise, so that I do not need to follow this security policy? I am very annoyed at having to enter the PIN every time I want to use the phone, because I open and close it so often throughout the day. Please help... I'm so surprised at how difficult it is to find the answer!

    Read the article

  • Is there a good reference manual for ruby/rails?

    - by Kevin
    I've found switching from Java to Ruby/Rails to be very difficult. I feel like the Rails books and websites that I've seen teach by example, and I have yet to see anything like a complete reference. In the Java/Spring world there are plenty of examples but also very thorough reference manuals. So even though I can get toy application xyz up and running in an afternoon with Rails, I'm apprehensive about doing anything of significance. I'm willing to admit that maybe this is because I've done Java/Spring for a few years and have near zero experience with Ruby/Rails. Just wondering if anyone else has run into this or if I'm missing something.

    Read the article

  • What is the package name for JVM?

    - by JohnMerlino
    If I want to know if skype is installed, I would type this: viggy@ubuntu:~$ apt-cache policy skype skype:i386: Installed: 4.0.0.8-1 Candidate: 4.0.0.8-1 Version table: *** 4.0.0.8-1 0 100 /var/lib/dpkg/status Or if Eclipse is installed: viggy@ubuntu:~$ apt-cache policy eclipse eclipse: Installed: (none) Candidate: 3.7.2-1 Version table: 3.7.2-1 0 But let's say I want to know if the Java Virtual Machine is installed. How would I know what to pass to apt-cache policy? For example, you might not know what to pass to apt-cache policy for some programs: viggy@ubuntu:~$ apt-cache policy java N: Unable to locate package java viggy@ubuntu:~$ apt-cache policy JVM N: Unable to locate package JVM

    Read the article
