Search Results

Search found 3097 results on 124 pages for 'ben alter'.


  • Can you pass by reference in Java?

    - by dbones
    Hi. Sorry if this sounds like a newbie question, but the other day a Java developer mentioned passing a parameter by reference (by which they meant just passing a reference object). From a C# perspective I can pass a reference type by value or by reference, and the same is true for value types. I have written a simple console application to show what I mean. Can I do this in Java? namespace ByRefByVal { class Program { static void Main(string[] args) { //Create the object Person p1 = new Person(); p1.Name = "Dave"; PrintIfObjectIsNull(p1); //should not be null //A copy of the reference is made and sent to the method PrintUserNameByValue(p1); PrintIfObjectIsNull(p1); //the actual reference is passed to the method PrintUserNameByRef(ref p1); //<-- I know I'm passing the reference PrintIfObjectIsNull(p1); Console.ReadLine(); } private static void PrintIfObjectIsNull(Object o) { if (o == null) { Console.WriteLine("object is null"); } else { Console.WriteLine("object still references something"); } } /// <summary> /// this takes in a reference type of Person, by value /// </summary> /// <param name="person"></param> private static void PrintUserNameByValue(Person person) { Console.WriteLine(person.Name); person = null; //<- this cannot affect the original reference, as it was passed in by value. } /// <summary> /// this takes in a reference type of Person, by reference /// </summary> /// <param name="person"></param> private static void PrintUserNameByRef(ref Person person) { Console.WriteLine(person.Name); person = null; //this has access to the original reference, allowing us to alter it, either making it point to a different object or to nothing. } } class Person { public string Name { get; set; } } } If Java cannot do this, is it fair to say that it just passes a reference type by value? Many thanks, Bones
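
    For what it's worth, a minimal Java sketch of the situation (hypothetical Person class): Java always passes the reference itself by value, so reassigning the parameter inside a method never affects the caller's variable, while mutating the object it points to does.

        class Person {
            String name;
            Person(String name) { this.name = name; }
        }

        public class PassByValueDemo {
            // Reassigning the parameter only changes this method's local copy of the reference.
            static void tryToNullOut(Person person) {
                person = null;
            }

            // Mutating the referenced object is visible to the caller.
            static void rename(Person person) {
                person.name = "Changed";
            }

            public static void main(String[] args) {
                Person p = new Person("Dave");
                tryToNullOut(p);
                System.out.println(p == null);   // false - the caller's reference is untouched
                rename(p);
                System.out.println(p.name);      // "Changed" - the same object was mutated
            }
        }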

    Read the article

  • How to have member variables and public methods in a jQuery plugin?

    - by user169867
    I'm trying to create a jQuery plugin that will create something like an autoCompleteBox but with custom features. How do I store member variables for each matching jQuery element? For example I'll need to store a timerID for each. I'd also like to store references to some of the DOM elements that make up the control. I'd like to be able to make a public method that works something like: $("#myCtrl").autoCompleteEx.addItem("1"); But in the implementation of addItem() how can I access the member variables for that particular object like its timerID or whatever? Below is what I have so far... Thanks for any help or suggestions! (function($) { //Attach this new method to jQuery $.fn.autoCompleteEx = function(options) { //Merge Given Options W/ Defaults, But Don't Alter Either var opts = $.extend({}, $.fn.autoCompleteEx.defaults, options); //Iterate over the current set of matched elements return this.each(function() { var acx = $(this); //Get JQuery Version Of Element (Should Be Div) //Give Div Correct Class & Add <ul> w/ input item to it acx.addClass("autoCompleteEx"); acx.html("<ul><li class=\"input\"><input type=\"text\"/></li></ul>"); //Grab Input As JQ Object var input = $("input", acx); //Wireup Div acx.click(function() { input.focus().val( input.val() ); }); //Wireup Input input.keydown(function(e) { var kc = e.keyCode; if(kc == 13) //Enter { } else if(kc == 27) //Esc { } else { //Resize TextArea To Input var width = 50 + (_txtArea.val().length*10); _txtArea.css("width", width+"px"); } }); }); //End Each JQ Element }; //End autoCompleteEx() //Private Functions function junk() { }; //Public Functions $.fn.autoCompleteEx.addItem = function(id,txt) { var x = this; var y = 0; }; //Default Settings $.fn.autoCompleteEx.defaults = { minChars: 2, delay: 300, maxItems: 1 }; //End Of Closure })(jQuery);
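
    A common way to handle per-element state and public methods (a rough sketch, not the asker's final code) is to keep the "member variables" in jQuery's .data() on each matched element and expose methods through a single plugin entry point that dispatches on a method-name string:

        (function ($) {
            var methods = {
                init: function (options) {
                    return this.each(function () {
                        // per-element "member variables" live in .data() on that element
                        $(this).data('autoCompleteEx', { timerID: null, opts: options });
                    });
                },
                addItem: function (id) {
                    return this.each(function () {
                        var state = $(this).data('autoCompleteEx');
                        if (state) {
                            clearTimeout(state.timerID);
                            // ...use state.opts here, append an <li> for id, etc...
                        }
                    });
                }
            };

            $.fn.autoCompleteEx = function (method) {
                if (methods[method]) {
                    return methods[method].apply(this, Array.prototype.slice.call(arguments, 1));
                }
                return methods.init.apply(this, arguments);
            };
        })(jQuery);

        // usage: $("#myCtrl").autoCompleteEx({ delay: 300 });
        //        $("#myCtrl").autoCompleteEx("addItem", "1");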

    Read the article

  • Execution Plan Optimization when where clause is removed then added back

    - by nmushov
    I have a stored procedure that uses a table valued function which executes in 9 seconds. If I alter the table valued function and remove the where clause, the stored procedure executes in 3 seconds. If I add the where clause back, the query still executes in 3 seconds. I took a look at the execution plans and it appears that after I remove the where clause, the execution plan includes parallelism and the scan count for 2 of my tables drops for 50000 and 65000 down to 5 and 3. After I add the where clause back, the optimized execution plan still runs unless I run DBCC FREEPROCCACHE. Questions 1. Why would SQL Server start using the optimized execution plan for both queries only when I first remove the where clause? Is there a way to force SQL Server to use this execution plan? Also, this is a paramaterized all-in-one query that uses the (Parameter is null or Parameter) in the where clause, which I believe is bad for performance. RETURNS TABLE AS RETURN ( SELECT TOP (@PageNumber * @PageSize) CASE WHEN @SortOrder = 'Expensive' THEN ROW_NUMBER() OVER (ORDER BY SellingPrice DESC) WHEN @SortOrder = 'Inexpensive' THEN ROW_NUMBER() OVER (ORDER BY SellingPrice ASC) WHEN @SortOrder = 'LowMiles' THEN ROW_NUMBER() OVER (ORDER BY Mileage ASC) WHEN @SortOrder = 'HighMiles' THEN ROW_NUMBER() OVER (ORDER BY Mileage DESC) WHEN @SortOrder = 'Closest' THEN ROW_NUMBER() OVER (ORDER BY P1.Distance ASC) WHEN @SortOrder = 'Newest' THEN ROW_NUMBER() OVER (ORDER BY [Year] DESC) WHEN @SortOrder = 'Oldest' THEN ROW_NUMBER() OVER (ORDER BY [Year] ASC) ELSE ROW_NUMBER() OVER (ORDER BY InventoryID ASC) END as rn, P1.InventoryID, P1.SellingPrice, P1.Distance, P1.Mileage, Count(*) OVER () RESULT_COUNT, dimCarStatus.[year] FROM (SELECT InventoryID, SellingPrice, Zip.Distance, Mileage, ColorKey, CarStatusKey, CarKey FROM facInventory JOIN @ZipCodes Zip ON Zip.DealerKey = facInventory.DealerKey) as P1 JOIN dimColor ON dimColor.ColorKey = P1.ColorKey JOIN dimCarStatus ON dimCarStatus.CarStatusKey = P1.CarStatusKey JOIN dimCar ON dimCar.CarKey = P1.CarKey WHERE (@ExteriorColor is NULL OR dimColor.ExteriorColor like @ExteriorColor) AND (@InteriorColor is NULL OR dimColor.InteriorColor like @InteriorColor) AND (@Condition is NULL OR dimCarStatus.Condition like @Condition) AND (@Year is NULL OR dimCarStatus.[Year] like @Year) AND (@Certified is NULL OR dimCarStatus.Certified like @Certified) AND (@Make is NULL OR dimCar.Make like @Make) AND (@ModelCategory is NULL OR dimCar.ModelCategory like @ModelCategory) AND (@Model is NULL OR dimCar.Model like @Model) AND (@Trim is NULL OR dimCar.Trim like @Trim) AND (@BodyType is NULL OR dimCar.BodyType like @BodyType) AND (@VehicleTypeCode is NULL OR dimCar.VehicleTypeCode like @VehicleTypeCode) AND (@MinPrice is NULL OR P1.SellingPrice >= @MinPrice) AND (@MaxPrice is NULL OR P1.SellingPrice < @MaxPrice) AND (@Mileage is NULL OR P1.Mileage < @Mileage) ORDER BY CASE WHEN @SortOrder = 'Expensive' THEN -SellingPrice WHEN @SortOrder = 'Inexpensive' THEN SellingPrice WHEN @SortOrder = 'LowMiles' THEN Mileage WHEN @SortOrder = 'HighMiles' THEN -Mileage WHEN @SortOrder = 'Closest' THEN P1.Distance WHEN @SortOrder = 'Newest' THEN -[YEAR] WHEN @SortOrder = 'Oldest' THEN [YEAR] ELSE InventoryID END )
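
    What is described sounds like parameter sniffing: the plan compiled for the first set of parameter values is reused until the cache is cleared. For catch-all predicates of the (@Parameter IS NULL OR column = @Parameter) kind, one commonly suggested mitigation is to request a fresh plan per execution; a sketch below, with GetInventoryPage standing in for the inline table-valued function above (the name and parameter list are hypothetical):

        -- the hint goes on the statement that calls the inline function,
        -- since a RETURNS TABLE ... RETURN function cannot carry it itself
        SELECT *
        FROM   dbo.GetInventoryPage(@PageNumber, @PageSize, @SortOrder, @MinPrice, @MaxPrice)
        OPTION (RECOMPILE);   -- compile for the actual parameter values on every run

    Moving the query into a stored procedure or dynamic SQL that only includes the predicates actually supplied is another common answer to the all-in-one WHERE clause.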

    Read the article

  • service broker message process order

    - by Blootac
    Everywhere I read says that messages handled by the service broker are processed in the order that they arrive, and yet if you create a table, message type, contract, service etc , and on activation have a stored proc that waits for 2 seconds and inserts the msg into a table, set the max queue readers to 5 or 10, and send 20 odd messages I can see in the table that they are inserted out of order even though when I insert them into the queue and look at the contents of the queue I can see that the messages are all in the right order. Is it due to the delay waitfor waiting for the nearest second and each thread having different subsecond times and then fighting for a lock or something? The reason i've got a delay in there is to simulate delays with joins etc Thanks demo code: --create the table and service broker CREATE TABLE test ( id int identity(1,1), contents varchar(100) ) CREATE MESSAGE TYPE test CREATE CONTRACT mycontract ( test sent by initiator ) GO CREATE PROCEDURE dostuff AS BEGIN DECLARE @msg varchar(100); RECEIVE TOP (1) @msg = message_body FROM myQueue IF @msg IS NOT NULL BEGIN WAITFOR DELAY '00:00:02' INSERT INTO test(contents)values(@msg) END END GO ALTER QUEUE myQueue WITH STATUS = ON, ACTIVATION ( STATUS = ON, PROCEDURE_NAME = dostuff, MAX_QUEUE_READERS = 10, EXECUTE AS SELF ) create service senderService on queue myQueue ( mycontract ) create service receiverService on queue myQueue ( mycontract ) GO --********************************************************** --now insert lots of messages to the queue DECLARE @dialog_handle uniqueidentifier BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>'); BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>3</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>4</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>5</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>6</test>') BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract; SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>7</test>')
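
    Ordered processing in Service Broker is only guaranteed within a single conversation; here every message starts its own dialog and up to ten activation readers run concurrently, so the order the inserts finish in is not the arrival order. Two sketches of ways to restore strict ordering with the objects defined above:

        -- Option 1: let only one activation reader drain the queue
        ALTER QUEUE myQueue WITH ACTIVATION (MAX_QUEUE_READERS = 1);

        -- Option 2: send all messages on one conversation (one BEGIN DIALOG, many SENDs),
        -- so Service Broker itself serialises their processing
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>');
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>');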

    Read the article

  • Can I make runtime-decided method calls in Java?

    - by Catalin Marin
    I know there is an invoke function that does this; what I am really interested in is the "correctness" of using such behavior. My issue is this: I have a service object which contains methods that I consider services. What I want to do is alter the behavior of those services without intruding on the existing code later. For example: class MyService { public ServiceResponse ServeMeDonuts() { do stuff... return new ServiceResponse(); } After 2 months I find out that I need to offer the same service to a new client app, and I also need to do certain extra things like setting a flag, making or updating certain data, or encoding the response differently. What I could do is open it up and throw in some IFs. In my opinion this is not good, as it means touching tested code and may result in unwanted behaviour for the previous service clients. So instead I add something to my registry telling the system that the "NewClient" has a different behavior, and do something like this: public interface Behavior { public void preExecute(); public void postExecute(); } public class BehaviorOfMyService implements Behavior{ String method; String clientType; public void BehaviorOfMyService(String method,String clientType) { this.method = method; this.clientType = clientType; } public void preExecute() { Method preCall = this.getClass().getMethod("pre" + this.method + this.clientType); if(preCall != null) { return preCall.invoke(); } return false; } ...same for postExecute(); public void preServeMeDonutsNewClient() { do the stuff... } } Then the system will do something like this: if(registrySaysThereIs different behavior set for this ServiceObject) { Class toBeCalled = Class.forName("BehaviorOf" + usedServiceObjectName); Object instance = toBeCalled.getConstructor().newInstance(method,client); instance.preExecute(); ....call the service... instance.postExecute(); .... } I am not so much interested in the correctness of the code as in the correctness of the thinking and approach. Actually I have to do this in PHP, which I see as a kind of pop music of programming that I have to "sing" for commercial reasons; even though I play pop I really want to sing by the book. So, putting aside my more or less inspired analogy, I really want to know your opinion on this matter for its practical necessity and technical approach. Thanks
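
    On the mechanics (leaving the design question aside), the reflective dispatch itself is simple in Java; a minimal hedged sketch, with a hypothetical behavior method and the error handling kept short:

        import java.lang.reflect.Method;

        public class BehaviorDispatcher {
            public boolean preServeMeDonutsNewClient() {
                System.out.println("extra pre-processing for the new client");
                return true;
            }

            // Looks up "pre" + method + clientType at runtime and invokes it if present.
            public boolean preExecute(String method, String clientType) throws Exception {
                try {
                    Method preCall = getClass().getMethod("pre" + method + clientType);
                    return (Boolean) preCall.invoke(this);
                } catch (NoSuchMethodException e) {
                    return false;   // no client-specific behavior registered
                }
            }

            public static void main(String[] args) throws Exception {
                boolean ran = new BehaviorDispatcher().preExecute("ServeMeDonuts", "NewClient");
                System.out.println(ran);   // true
            }
        }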

    Read the article

  • Need help iterating over an array, retrieving two-card combinations with no repeats, for Poker AI

    - by elguapo-85
    I can't really think of a good way to word this question, nor a good title, and maybe the answer is so ridiculously simple that I am missing it. I am working on a poker AI, and I want to calculate the number of hands that exist which are better than mine. I understand how to do that, but what I can't figure out is the best way to iterate over a group of cards. Say I am at the flop: I know what my two cards are and there are 3 cards on the board. So there are 47 unknown cards, and I want to iterate over every possible combination of two of those 47 cards. You can't have two cards of the same rank and suit, and if I have previously calculated a pair I don't want to do it over again, because that would waste time and this will be called many times. If you don't understand what I am asking, please tell me and I will clarify. So I can set something up like this, where an element equal to one means the card is not in my hand and not on the board, 4 for each suit and 13 for each rank: setOfCards[4][13] If I do a simple set of for loops like this: (pseudocode) //remove cards I know are in play from setOfCards by setting values to zero for(int i = 0; i < 4; i++) for(int j = 0; j < 13; j++) for(int k = 0; k < 4; k++) for(int l = 0; l < 4; l++) //skip if values equal zero card1 = setOfCards[i][j] card2 = setOfCards[k][l] //now compare card1, card2 and the set of board cards This is going to repeat many values; for example, card1 = AceOfHearts, card2 = KingOfHearts is the same as card1 = KingOfHearts, card2 = AceOfHearts. It will also alter my calculations. How should I go about avoiding this? Also, is there a name for this technique? Thank you.
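
    The usual fix (the technique is generating combinations rather than permutations) is to flatten the deck to a single index 0-51 and start the inner loop one past the outer loop, so each unordered pair is visited exactly once. A sketch in Java, assuming the known cards are flagged separately:

        public class TwoCardCombos {
            public static void main(String[] args) {
                // 52 flags: true = card still unseen (not in my hand and not on the board)
                boolean[] unseen = new boolean[52];
                java.util.Arrays.fill(unseen, true);
                // ...set unseen[i] = false here for the 5 known cards...

                int pairs = 0;
                for (int a = 0; a < 52; a++) {
                    if (!unseen[a]) continue;
                    for (int b = a + 1; b < 52; b++) {     // b starts after a: each pair visited once
                        if (!unseen[b]) continue;
                        int suit1 = a / 13, rank1 = a % 13;
                        int suit2 = b / 13, rank2 = b % 13;
                        // ...compare the hand (suit1,rank1)+(suit2,rank2) against the board here...
                        pairs++;
                    }
                }
                System.out.println(pairs);   // 1326 here; 1081 once the 5 known cards are flagged
            }
        }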

    Read the article

  • jQuery: modify href attribute for first level list only

    - by bloggerious
    I'm a noob in jQuery and have stuck at this. I have the following HTML code output from a PHP page: <ul class="cats"> <li><span><a href="cant_post_link_yet1">Lifestyle</a></span></li> <li><span><a href="cant_post_link_yet2">Entertainment</a></span></li> <li class="has_child"> <span><a href="cant_post_link_yet3">Technology</a></span> <ul class="subcats"> <li><span><a href="cant_post_link_yet4">Gadgets</a></span></li> <li><span><a href="cant_post_link_yet5">Hardware</a></span></li> </ul> </li> <li><span><a href="cant_post_link_yetsports">Sports</a></span></li> <li class="has_child"> <span><a href="cant_post_link_yet6">Design</a></span> <ul class="subcats"> <li class="has_child"> <span><a href="cant_post_link_yet7">Web Design</a></span> <ul class="subcat"> <li><span><a href="cant_post_link_yet8">Adobe Photoshop</a></span></li> </ul> </li> <li><span><a href="cant_post_link_yet9">Graphics and Print</a></span></li> </ul> </li> What's the correct jQuery code so that I can modify the href attribute for the first-level list only? Basically, I want to change the href of Technology and Design to be "#" but will not change the href of Web Design which is already on second-level list. More Info: In the code above, if list has subcategories, then it has the class has_child, whether it's on first-level or not. So I want only the first-level list which has class has_child to be modified the href to "#" I can't alter output anymore because it's in the PHP code. Any help is greatly appreciated.
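
    A one-line sketch of a selector that does this: the child combinator (>) keeps the match on the first level of ul.cats, so the nested Web Design link is left untouched.

        // only anchors inside direct li.has_child children of the top-level list
        $('ul.cats > li.has_child > span > a').attr('href', '#');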

    Read the article

  • Postgres Stored procedure using iBatis

    - by Om Yadav
    --- The error occurred while applying a parameter map. --- Check the newSubs-InlineParameterMap. --- Check the statement (query failed). --- Cause: org.postgresql.util.PSQLException: ERROR: wrong record type supplied in RETURN NEXT Where: PL/pgSQL function "getnewsubs" line 34 at return next the function detail is as below.... CREATE OR REPLACE FUNCTION getnewsubs(timestamp without time zone, timestamp without time zone, integer) RETURNS SETOF record AS $BODY$declare v_fromdt alias for $1; v_todt alias for $2; v_domno alias for $3; v_cursor refcursor; v_rec record; v_cpno bigint; v_actno int; v_actname varchar(50); v_actid varchar(100); v_cpntypeid varchar(100); v_mrp double precision; v_domname varchar(100); v_usedt timestamp without time zone; v_expirydt timestamp without time zone; v_createdt timestamp without time zone; v_ctno int; v_phone varchar; begin open v_cursor for select cpno,c.actno,usedt from cpnusage c inner join account s on s.actno=c.actno where usedt = $1 and usedt < $2 and validdomstat(s.domno,v_domno) order by c.usedt; fetch v_cursor into v_cpno,v_actno,v_usedt; while found loop if isactivation(v_cpno,v_actno,v_usedt) IS TRUE then select into v_actno,v_actname,v_actid,v_cpntypeid,v_mrp,v_domname,v_ctno,v_cpno,v_usedt,v_expirydt,v_createdt,v_phone a.actno,a.actname as name,a.actid as actid,c.descr as cpntypeid,l.mrp as mrp,s.domname as domname,c.ctno as ctno,b.cpno,b.usedt,b.expirydt,d.createdt,a.phone from account a inner join cpnusage b on a.actno=b.actno inner join cpn d on b.cpno=d.cpno inner join cpntype c on d.ctno=c.ctno inner join ssgdom s on a.domno=s.domno left join price_class l ON l.price_class_id=b.price_class_id where validdomstat(a.domno,v_domno) and b.cpno=v_cpno and b.actno=v_actno; select into v_rec v_actno,v_actname,v_actid,v_cpntypeid,v_mrp,v_domname,v_ctno,v_cpno,v_usedt,v_expirydt,v_createdt,v_phone; return next v_rec; end if; fetch v_cursor into v_cpno,v_actno,v_usedt; end loop; return ; end;$BODY$ LANGUAGE 'plpgsql' VOLATILE; ALTER FUNCTION getnewsubs(timestamp without time zone, timestamp without time zone, integer) OWNER TO radius If i am running the function from the console it is running fine and giving me the correct response. But when using through java causing the above error. Can ay body help in it..Its very urgent. Please response as soon as possible. Thanks in advance.
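
    That error usually means the record handed to RETURN NEXT does not match the column definition list the caller supplied for the SETOF record function, which is easy to get wrong from a driver. One common way around it, sketched here with the output columns trimmed down and their types assumed (PostgreSQL 8.4+ syntax), is to declare the columns on the function itself so no column definition list is needed:

        CREATE OR REPLACE FUNCTION getnewsubs(timestamp without time zone,
                                              timestamp without time zone,
                                              integer)
        RETURNS TABLE (actno integer, actname varchar, cpno bigint, usedt timestamp without time zone)
        AS $BODY$
        BEGIN
            RETURN QUERY
                SELECT a.actno, a.actname, c.cpno, c.usedt
                FROM   account a
                JOIN   cpnusage c ON c.actno = a.actno
                WHERE  c.usedt >= $1 AND c.usedt < $2;
        END;
        $BODY$ LANGUAGE plpgsql;

        -- the function can then be queried like an ordinary table, no column definition list:
        -- SELECT * FROM getnewsubs('2010-01-01', '2010-02-01', 1);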

    Read the article

  • What is a proper way to store site-level global variables in a SharePoint site?

    - by ccomet
    One thing that has driven me nuts about SharePoint2007 is the apparent inability to have defineable settings that apply specifically to a site or site collection itself, and not the content. I mean, you have some pre-defined settings like the Site Logo, the Site Name, and various other things, but there doesn't appear to be anywhere to add new kinds of settings. The application I am working on needs to be able to create multiple kinds of "project site collections" that all follow a basic template, but have certain additional settings that apply specifically to that site collection and that one alone. In addition to the standard site name we also need to define the Project Number, the Project Name, and the Client Name. And given the requests of some of our clients, we also reach a point where we have to have configurable settings that alter how some of the workflows work, like whether files are marked with Letters or Numbers. Our current solution, which I'm hesitant about, has been to store an XML file on the SharePoint server. This file contains one node for each site collection, identified by the URL of the root site. Inside the node are all of the elements that need to be defined for that site collection. When we need them, we have to access the XML file (which will always require SPSecurity.RunWithElevatedPrivileges to access files right on the server) every time to load it and retrieve the data. There are a lot of automated processes which will have to do this, and I'm hesitant about the stability of this method when we reach hundreds of sites with thousands of files running tens of thousands of workflows, all wanting to access this file. Maybe they're unfounded worries, but I'd rather worry than risk everything breaking in a couple years. I was looking into the SPWeb object and found the AllProperties hashtable. It looks like just the kind of thing which might work, but I don't know how safe it is to be modifying this. I read through both MSDN and the WSS SDK but found nothing that clarified on adding completely new properties into AllProperties. Is it safe to use AllProperties for this kind of thing? Or is there yet another feature that I am missing, which could handle the concept of global variables at the site collection or site scope?
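
    For what it's worth, SPWeb.AllProperties is commonly used exactly this way, and changes to it are persisted by SPWeb.Update(); a hedged sketch against the WSS 3.0 object model (the URL and key name are hypothetical):

        using Microsoft.SharePoint;

        static class ProjectSettings
        {
            public static void SaveProjectNumber(string webUrl, string projectNumber)
            {
                using (SPSite site = new SPSite(webUrl))
                using (SPWeb web = site.OpenWeb())
                {
                    web.AllProperties["ProjectNumber"] = projectNumber;   // site-scoped setting
                    web.Update();                                         // persists the property bag
                }
            }

            public static string LoadProjectNumber(string webUrl)
            {
                using (SPSite site = new SPSite(webUrl))
                using (SPWeb web = site.OpenWeb())
                {
                    return web.AllProperties.ContainsKey("ProjectNumber")
                        ? (string)web.AllProperties["ProjectNumber"]
                        : null;
                }
            }
        }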

    Read the article

  • Objective-c design advice for use of different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data. While I was thinking about how to setup this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work is depending on other people to complete their tasks, and a test server, this slows work down at my end. So the question is: What's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper api calls. Contemplate the following scenario: GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc]init]; getDataCommand.delegate = self; [getDataCommand getData]; Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData] There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are, is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In objective-c a factory pattern could be implemented, but is that the best route to go for this? GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand]; getDataCommand.delegate = self; [getDataCommand getData]; What is the best practices to achieve this result? Since this would be most useful, even if you have the actual API available, to run tests, or work off-line, the ApiCommands would not necessarily have to be replaced permanently, but the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between: All commands are test and All command use the live API, rather than selecting them one by one, however that would also be useful to do for testing one or two actual API commands, mixing them with test data. EDIT The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows: @implementation ApiCommandFactory + (ApiCommand *)newApiCommand { // return [[ApiCommand alloc]init]; return [[ApiCommandMock alloc]init]; } @end And anywhere I want to use the ApiCommand class: GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand]; When the actual API call is required, the comments can be removed and the mock can be commented out. Using new in the message name implies that who ever uses the factory to get an object, is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration i.e: [ApiCommandFactory newSecondApiCommand:@"param1"]; This will work quite well with repositories as well.

    Read the article

  • Stored procedure error when using a computed column for ID

    - by Hari
    I got the error: Procedure or function usp_User_Info3 has too many arguments specified When I run the program. I don't know the error in SP or in C# code. I have to display the Vendor_ID after the user clicks the submit button. Where the thing going wrong here ? Table structure : CREATE TABLE User_Info3 ( SNo int Identity (2000,1) , Vendor_ID AS 'VEN' + CAST(SNo as varchar(16)) PERSISTED PRIMARY KEY, UserName VARCHAR(16) NOT NULL, User_Password VARCHAR(12) NOT NULL, User_ConPassword VARCHAR(12) NOT NULL, User_FirstName VARCHAR(25) NOT NULL, User_LastName VARCHAR(25) SPARSE NULL, User_Title VARCHAR(35) NOT NULL, User_EMail VARCHAR(35) NOT NULL, User_PhoneNo VARCHAR(14) NOT NULL, User_MobileNo VARCHAR(14)NOT NULL, User_FaxNo VARCHAR(14)NOT NULL, UserReg_Date DATE DEFAULT GETDATE() ) Stored Procedure : ALTER PROCEDURE [dbo].[usp_User_Info3] @SNo INT OUTPUT, @Vendor_ID VARCHAR(10) OUTPUT, @UserName VARCHAR(30), @User_Password VARCHAR(12), @User_ConPassword VARCHAR(12), @User_FirstName VARCHAR(25), @User_LastName VARCHAR(25), @User_Title VARCHAR(35), @User_OtherEmail VARCHAR(30), @User_PhoneNo VARCHAR(14), @User_MobileNo VARCHAR(14), @User_FaxNo VARCHAR(14) AS BEGIN SET NOCOUNT ON; INSERT INTO User_Info3 (UserName,User_Password,User_ConPassword,User_FirstName, User_LastName,User_Title,User_OtherEmail,User_PhoneNo,User_MobileNo,User_FaxNo) VALUES (@UserName,@User_Password,@User_ConPassword,@User_FirstName,@User_LastName, @User_Title,@User_OtherEmail,@User_PhoneNo,@User_MobileNo,@User_FaxNo) SET @SNo = Scope_Identity() SELECT Vendor_ID From User_Info3 WHERE SNo = @SNo END C# Code : protected void BtnUserNext_Click(object sender, EventArgs e) { cmd.CommandType = CommandType.StoredProcedure; cmd.CommandText = "usp_User_Info3"; System.Data.SqlClient.SqlParameter SNo=cmd.Parameters.Add("@SNo",System.Data.SqlDbType.Int); System.Data.SqlClient.SqlParameter Vendor_ID=cmd.Parameters.Add("@Vendor_ID", System.Data.SqlDbType.VarChar,10); cmd.Parameters.Add("@UserName", SqlDbType.VarChar).Value = txtUserName.Text; cmd.Parameters.Add("@User_Password", SqlDbType.VarChar).Value = txtRegPassword.Text; cmd.Parameters.Add("@User_ConPassword", SqlDbType.VarChar).Value = txtRegConPassword.Text; cmd.Parameters.Add("@User_FirstName", SqlDbType.VarChar).Value = txtRegFName.Text; cmd.Parameters.Add("@User_LastName", SqlDbType.VarChar).Value = txtRegLName.Text; cmd.Parameters.Add("@User_Title", SqlDbType.VarChar).Value = txtRegTitle.Text; cmd.Parameters.Add("@User_OtherEmail", SqlDbType.VarChar).Value = txtOtherEmail.Text; cmd.Parameters.Add("@User_PhoneNo", SqlDbType.VarChar).Value =txtRegTelephone.Text; cmd.Parameters.Add("@User_MobileNo", SqlDbType.VarChar).Value =txtRegMobile.Text; cmd.Parameters.Add("@User_FaxNo", SqlDbType.VarChar).Value =txtRegFax.Text; cmd.Connection = SqlCon; try { Vendor_ID.Direction = System.Data.ParameterDirection.Output; SqlCon.Open(); cmd.ExecuteNonQuery(); string VendorID = cmd.ExecuteScalar() as string; } catch (Exception ex) { throw new Exception(ex.Message); } finally { string url = "../CompanyBasicInfo.aspx?Parameter=" + Server.UrlEncode(" + VendorID + "); SqlCon.Close(); } }
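
    Hard to say definitively from the snippet, but two things stand out: the command object appears to be a reused field, so parameters accumulate on every click (which produces exactly this "too many arguments" error), and the procedure is executed twice (ExecuteNonQuery, then ExecuteScalar). A hedged sketch of the calling side, reading the vendor id back from the output parameter after a single execution; note the procedure would also need SELECT @Vendor_ID = Vendor_ID FROM User_Info3 WHERE SNo = @SNo to populate it:

        cmd.Parameters.Clear();                        // don't accumulate parameters across clicks
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = "usp_User_Info3";

        SqlParameter sNo = cmd.Parameters.Add("@SNo", SqlDbType.Int);
        sNo.Direction = ParameterDirection.Output;
        SqlParameter vendorId = cmd.Parameters.Add("@Vendor_ID", SqlDbType.VarChar, 10);
        vendorId.Direction = ParameterDirection.Output;
        cmd.Parameters.Add("@UserName", SqlDbType.VarChar).Value = txtUserName.Text;
        // ...add the remaining input parameters exactly as before...

        SqlCon.Open();
        cmd.ExecuteNonQuery();                         // run the procedure once
        string vendorIdValue = vendorId.Value as string;   // e.g. "VEN2000"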

    Read the article

  • Django: Grouping by Dates and Servers

    - by TheLizardKing
    So I am trying to emulate google app's status page: http://www.google.com/appsstatus#hl=en but for backups for our own servers. Instead of service names on the left it'll be server names but the dates and hopefully the pagination will be there too. My models look incredibly similar to this: from django.db import models STATUS_CHOICES = ( ('UN', 'Unknown'), ('NI', 'No Issue'), ('IS', 'Issue'), ('NR', 'Not Running'), ) class Server(models.Model): name = models.CharField(max_length=32) def __unicode__(self): return self.name class Backup(models.Model): server = models.ForeignKey(Server) created = models.DateField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) status = models.CharField(max_length=2, choices=STATUS_CHOICES, default='UN') issue = models.TextField(blank=True) def __unicode__(self): return u'%s: %s' % (self.server, self.get_status_display()) My issue is that I am having a hell of a time displaying the information I need. Everyday a little after midnight a cron job will run and add a row for each server for that day, defaulting on status unknown (UN). My backups.html: {% extends "base.html" %} {% block content %} <table> <tr> <th>Name</th> {% for server in servers %} <th>{{ created }}</th> </tr> <tr> <td>{{ server.name }}</td> {% for backup in server.backup_set.all %} <td>{{ backup.get_status_display }}</td> {% endfor %} </tr> {% endfor %} </table> {% endblock content %} This actually works but I do not know how to get the dates to show. Obviously {{ created }} doesn't do anything but the servers don't have create dates. Backups do and because it's a cron job there should only be X number of rows with any particular date (depending on how many servers we are following for that day). Summary I want to create a table, X being server names, Y being dates starting at today while all the cells being the status of a backup. The above model and template should hopefully give you an idea what my thought process but I am willing to alter anything. Basically I am create a fancy excel spreadsheet.
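
    One way to get the date headers, and to keep each server row aligned with them, is to build the grid in the view rather than leaning on backup_set in the template; a rough sketch under the models above (hypothetical app name, render shortcut from Django 1.3+):

        import datetime

        from django.shortcuts import render
        from myapp.models import Server

        def backup_status(request):
            # last 7 days, newest first, shared by the header row and every server row
            days = [datetime.date.today() - datetime.timedelta(days=i) for i in range(7)]
            rows = []
            for server in Server.objects.all():
                by_day = dict((b.created, b) for b in server.backup_set.filter(created__in=days))
                cells = [by_day[d].get_status_display() if d in by_day else 'Unknown' for d in days]
                rows.append({'server': server, 'cells': cells})
            return render(request, 'backups.html', {'days': days, 'rows': rows})

        # in the template: loop over days for the <th> cells,
        # then over rows, and over row.cells for each server's status cells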

    Read the article

  • Stored procedure optimization

    - by George Zacharia
    Hi, i have a stored procedure which takes lot of time to execure .Can any one suggest a better approch so that the same result set is achived. ALTER PROCEDURE [dbo].[spFavoriteRecipesGET] @USERID INT, @PAGENUMBER INT, @PAGESIZE INT, @SORTDIRECTION VARCHAR(4), @SORTORDER VARCHAR(4),@FILTERBY INT AS BEGIN DECLARE @ROW_START INT DECLARE @ROW_END INT SET @ROW_START = (@PageNumber-1)* @PageSize+1 SET @ROW_END = @PageNumber*@PageSize DECLARE @RecipeCount INT DECLARE @RESULT_SET_TABLE TABLE ( Id INT NOT NULL IDENTITY(1,1), FavoriteRecipeId INT, RecipeId INT, DateAdded DATETIME, Title NVARCHAR(255), UrlFriendlyTitle NVARCHAR(250), [Description] NVARCHAR(MAX), AverageRatingId FLOAT, SubmittedById INT, SubmittedBy VARCHAR(250), RecipeStateId INT, RecipeRatingId INT, ReviewCount INT, TweaksCount INT, PhotoCount INT, ImageName NVARCHAR(50) ) INSERT INTO @RESULT_SET_TABLE SELECT FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded, Recipes.Title, Recipes.UrlFriendlyTitle, Recipes.[Description], Recipes.AverageRatingId, Recipes.SubmittedById, COALESCE(users.DisplayName,users.UserName,Recipes.SubmittedBy) As SubmittedBy, Recipes.RecipeStateId, RecipeReviews.RecipeRatingId, COUNT(RecipeReviews.Review), COUNT(RecipeTweaks.Tweak), COUNT(Photos.PhotoId), dbo.udfGetRecipePhoto(Recipes.RecipeId) AS ImageName FROM FavoriteRecipes INNER JOIN Recipes ON FavoriteRecipes.RecipeId=Recipes.RecipeId AND Recipes.RecipeStateId <> 3 LEFT OUTER JOIN RecipeReviews ON RecipeReviews.RecipeId=Recipes.RecipeId AND RecipeReviews.ReviewedById=@UserId AND RecipeReviews.RecipeRatingId= ( SELECT MAX(RecipeReviews.RecipeRatingId) FROM RecipeReviews WHERE RecipeReviews.ReviewedById=@UserId AND RecipeReviews.RecipeId=FavoriteRecipes.RecipeId ) OR RecipeReviews.RecipeRatingId IS NULL LEFT OUTER JOIN RecipeTweaks ON RecipeTweaks.RecipeId = Recipes.RecipeId AND RecipeTweaks.TweakedById= @UserId LEFT OUTER JOIN Photos ON Photos.RecipeId = Recipes.RecipeId AND Photos.UploadedById = @UserId AND Photos.RecipeId = FavoriteRecipes.RecipeId AND Photos.PhotoTypeId = 1 LEFT OUTER JOIN users ON Recipes.SubmittedById = users.UserId WHERE FavoriteRecipes.UserId=@UserId GROUP BY FavoriteRecipes.FavoriteRecipeId, Recipes.RecipeId, FavoriteRecipes.DateAdded, Recipes.Title, Recipes.UrlFriendlyTitle, Recipes.[Description], Recipes.AverageRatingId, Recipes.SubmittedById, Recipes.SubmittedBy, Recipes.RecipeStateId, RecipeReviews.RecipeRatingId, users.DisplayName, users.UserName, Recipes.SubmittedBy; WITH SortResults AS ( SELECT ROW_NUMBER() OVER ( ORDER BY CASE WHEN @SORTDIRECTION = 't' AND @SORTORDER='a' THEN TITLE END ASC, CASE WHEN @SORTDIRECTION = 't' AND @SORTORDER='d' THEN TITLE END DESC, CASE WHEN @SORTDIRECTION = 'r' AND @SORTORDER='a' THEN AverageRatingId END ASC, CASE WHEN @SORTDIRECTION = 'r' AND @SORTORDER='d' THEN AverageRatingId END DESC, CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER='a' THEN RecipeRatingId END ASC, CASE WHEN @SORTDIRECTION = 'mr' AND @SORTORDER='d' THEN RecipeRatingId END DESC, CASE WHEN @SORTDIRECTION = 'd' AND @SORTORDER='a' THEN DateAdded END ASC, CASE WHEN @SORTDIRECTION = 'd' AND @SORTORDER='d' THEN DateAdded END DESC ) RowNumber, FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description], AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId, ReviewCount, TweaksCount, PhotoCount, ImageName FROM @RESULT_SET_TABLE WHERE ((@FILTERBY = 1 AND SubmittedById= @USERID) OR ( @FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL)) OR ( @FILTERBY 
<> 1 AND @FILTERBY <> 2)) ) SELECT RowNumber, FavoriteRecipeId, RecipeId, DateAdded, Title, UrlFriendlyTitle, [Description], AverageRatingId, SubmittedById, SubmittedBy, RecipeStateId, RecipeRatingId, ReviewCount, TweaksCount, PhotoCount, ImageName FROM SortResults WHERE RowNumber BETWEEN @ROW_START AND @ROW_END print @ROW_START print @ROW_END SELECT @RecipeCount=dbo.udfGetFavRecipesCount(@UserId) SELECT @RecipeCount AS RecipeCount SELECT COUNT(Id) as FilterCount FROM @RESULT_SET_TABLE WHERE ((@FILTERBY = 1 AND SubmittedById= @USERID) OR (@FILTERBY = 2 AND (SubmittedById <> @USERID OR SubmittedById IS NULL)) OR (@FILTERBY <> 1 AND @FILTERBY <> 2)) END

    Read the article

  • Java replacement for C macros

    - by thkala
    Recently I refactored the code of a 3rd party hash function from C++ to C. The process was relatively painless, with only a few changes of note. Now I want to write the same function in Java and I came upon a slight issue. In the C/C++ code there is a C preprocessor macro that takes a few integer variables names as arguments and performs a bunch of bitwise operations with their contents and a few constants. That macro is used in several different places, therefore its presence avoids a fair bit of code duplication. In Java, however, there is no equivalent for the C preprocessor. There is also no way to affect any basic type passed as an argument to a method - even autoboxing produces immutable objects. Coupled with the fact that Java methods return a single value, I can't seem to find a simple way to rewrite the macro. Avenues that I considered: Expand the macro by hand everywhere: It would work, but the code duplication could make things interesting in the long run. Write a method that returns an array: This would also work, but it would repeatedly result into code like this: long tmp[] = bitops(k, l, m, x, y, z); k = tmp[0]; l = tmp[1]; m = tmp[2]; x = tmp[3]; y = tmp[4]; z = tmp[5]; Write a method that takes an array as an argument: This would mean that all variable names would be reduced to array element references - it would be rather hard to keep track of which index corresponds to which variable. Create a separate class e.g. State with public fields of the appropriate type and use that as an argument to a method: This is my current solution. It allows the method to alter the variables, while still keeping their names. It has the disadvantage, however, that the State class will get more and more complex, as more macros and variables are added, in order to avoid copying values back and forth among different State objects. How would you rewrite such a C macro in Java? Is there a more appropriate way to deal with this, using the facilities provided by the standard Java 6 Development Kit (i.e. without 3rd party libraries or a separate preprocessor)?
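
    Of the options listed, the small mutable state class (option 4) is usually the cleanest Java stand-in for this kind of macro; a compact sketch of what that looks like, with a made-up mixing step purely for illustration:

        final class HashState {
            long a, b, c;   // the variables the C macro used to touch in place
        }

        final class BitOps {
            // Equivalent of the C macro: mutates the state object passed in.
            static void mix(HashState s) {
                s.a -= s.c;  s.a ^= Long.rotateLeft(s.c, 4);   s.c += s.b;
                s.b -= s.a;  s.b ^= Long.rotateLeft(s.a, 6);   s.a += s.c;
                s.c -= s.b;  s.c ^= Long.rotateLeft(s.b, 8);   s.b += s.a;
            }

            public static void main(String[] args) {
                HashState s = new HashState();
                s.a = 0xdeadbeefL; s.b = 0xcafebabeL; s.c = 42L;
                mix(s);   // variable names stay readable, no array indices to keep track of
                System.out.printf("%x %x %x%n", s.a, s.b, s.c);
            }
        }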

    Read the article

  • How to wait for multiple functions to finish processing

    - by user351412
    I have a problem about multiple function processing , listed as below code, the main function is btnEvalClick, I have try to use alter native 1and 2 to wait the function not move to next record before theprocessed function finish, but it does not work //private function btnEvalClick(event:Event):void { // var i:int; // for(i= 0; i < (dataArr1.length); i++) { // dispatchEvent( new FlexEvent('test') ); // callfunc1('cydatGMX'); //call function 1 // callfun2('cydatGMO'); //call function 1 // editSave(); //save record (HTTP) //## Alternative 1 //if (String(event) == 'SAVEOK') { // RecMov('next'); //move record if save = OK //} //## Alternative 2 //while (waitfc == '') // if waitfc not 'OK' continue looping //{ // z = z + 1; //} // RecMov('next'); //Move to next record to process //} //private function callfunc1(tasal:String):void { // var mySO :SharedObject; // var myDP: Array; // var i:int; // var prm:Array; // try // { // mySO = SharedObject.getLocal(tasal,'/'); // prm = mySO.data.txt.split('?'); // for(i=0; i < (prm.length - 1); i++) { // myDP = prm[i].toString().split('^'); // if ( myDP[0].toString() == String(dataArr1[dg].MatrixCDCol)){ // myDPX = myDP; // break; // } // } // } // catch (err:Error) { // Alert.show('Limit object creation fail (' + tasal + '), please retry ); // } //} //private function editSave():void //{ // var parameters:* = // { // 'CertID': CertIDCol.text, 'AssetID': AssetIDCol.text, 'CertDate': cdt, //'Ccatat': CcatatCol.text, 'CertBy': CertByCol.text, 'StatusID': StatusIDCol.text, //'UpdDate': lele, 'UpdUsr': ApplicationState.instance.luNm }; // doRequest('Update', parameters, saveItemHandler); //} //private function doRequest(method_name:String, parameters:Object, callback:Function):void // { // add the method to the parameters list // parameters['method'] = (method_name + 'ASC'); // gateway.request = parameters; // var call:AsyncToken = gateway.send(); // call.request_params = gateway.request; // call.handler = callback; // } //private function saveItemHandler(e:Object):void // { // if (e.isError) // { // Alert.show('Error: ' + e.data.error); // } // else // { // Alert.show('Record Saved..'); // waitfc = 'OK'; // dispatchEvent( new FlexEvent('SAVEOK') ); // } // }

    Read the article

  • how to deal with the position in a c# stream

    - by CapsicumDreams
    The (entire) documentation for the position property on a stream says: When overridden in a derived class, gets or sets the position within the current stream. The Position property does not keep track of the number of bytes from the stream that have been consumed, skipped, or both. That's it. OK, so we're fairly clear on what it doesn't tell us, but I'd really like to know what it in fact does stand for. What is 'the position' for? Why would we want to alter or read it? If we change it - what happens? In a pratical example, I have a a stream that periodically gets written to, and I have a thread that attempts to read from it (ideally ASAP). From reading many SO issues, I reset the position field to zero to start my reading. Once this is done: Does this affect where the writer to this stream is going to attempt to put the data? Do I need to keep track of the last write position myself? (ie if I set the position to zero to read, does the writer begin to overwrite everything from the first byte?) If so, do I need a semaphore/lock around this 'position' field (subclassing, perhaps?) due to my two threads accessing it? If I don't handle this property, does the writer just overflow the buffer? Perhaps I don't understand the Stream itself - I'm regarding it as a FIFO pipe: shove data in at one end, and suck it out at the other. If it's not like this, then do I have to keep copying the data past my last read (ie from position 0x84 on) back to the start of my buffer? I've seriously tried to research all of this for quite some time - but I'm new to .NET. Perhaps the Streams have a long, proud (undocumented) history that everyone else implicitly understands. But for a newcomer, it's like reading the manual to your car, and finding out: The accelerator pedal affects the volume of fuel and air sent to the fuel injectors. It does not affect the volume of the entertainment system, or the air pressure in any of the tires, if fitted. Technically true, but seriously, what we want to know is that if we mash it to the floor you go faster..

    Read the article

  • To Interface or Not?: Creating a polymorphic model relationship in Ruby on Rails dynamically..

    - by Globalkeith
    Please bear with me for a moment as I try to explain exactly what I would like to achieve. In my Ruby on Rails application I have a model called Page. It represents a web page. I would like to enable the user to arbitrarily attach components to the page. Some examples of "components" would be Picture, PictureCollection, Video, VideoCollection, Background, Audio, Form, Comments. Currently I have a direct relationship between Page and Picture like this: class Page < ActiveRecord::Base has_many :pictures, :as => :imageable, :dependent => :destroy end class Picture < ActiveRecord::Base belongs_to :imageable, :polymorphic => true end This relationship enables the user to associate an arbitrary number of Pictures to the page. Now if I want to provide multiple collections i would need an additional model: class PictureCollection < ActiveRecord::Base belongs_to :collectionable, :polymorphic => true has_many :pictures, :as => :imageable, :dependent => :destroy end And alter Page to reference the new model: class Page < ActiveRecord::Base has_many :picture_collections, :as => :collectionable, :dependent => :destroy end Now it would be possible for the user to add any number of image collections to the page. However this is still very static in term of the :picture_collections reference in the Page model. If I add another "component", for example :video_collections, I would need to declare another reference in page for that component type. So my question is this: Do I need to add a new reference for each component type, or is there some other way? In Actionscript/Java I would declare an interface Component and make all components implement that interface, then I could just have a single attribute :components which contains all of the dynamically associated model objects. This is Rails, and I'm sure there is a great way to achieve this, but its a tricky one to Google. Perhaps you good people have some wise suggestions. Thanks in advance for taking the time to read and answer this.

    Read the article

  • Using Selenium-IDE with a rich Javascript application?

    - by Darien
    Problem At my workplace, we're trying to find the best way to create automated-tests for an almost wholly javascript-driven intranet application. Right now we're stuck trying to find a good tradeoff between: Application code in reusable and nest-able GUI components. Tests which are easily created by the testing team Tests which can be recorded once and then automated Tests which do not break after small cosmetic changes to the site XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM-element on the page... well, that is its own headache, complicated by re-usable GUI components and IDs needing to be consistent when the test is re-run. What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface? Limitations We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x. The test-making folks are mostly trained to use Selenium IDE to directly record things. The test leads would prefer a page-unique HTML ID on each clickable element on the page... Training the testers to write or alter special expressions (such as telling them which HTML class-names are important branching points) is a no-go. We try to make re-usable javascript components, but this means very few GUI components can treat themselves (or what they contain) as unique. Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing. It may be possible to add custom facilities (like a locator-builder or new locator method) to the Selenium-IDE installation testers use. Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved Current thoughts I'm considering a system where a custom locator-builder (javascript code) for Selenium-IDE will talk with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly-flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.

    Read the article

  • Lithium: Run filter after find() to format output

    - by Housni
    I wanted to specify the output of a field from within my model so I added a date key to my $_schema: models/Tags.php <?php protected $_schema = array( 'id' => array('type' => 'integer', 'key' => 'primary'), 'title' => array('type' => 'string'), 'created' => array('type' => 'integer', 'date' => 'F jS, Y - g:i a'), 'modified' => array('type' => 'integer') ); ?> I store my time as an unsigned integer in the db (output of time()). I want my base model to format any field that has the date key for output. I thought the best place to do that would be right after a find: extensions/data/Model.php <?php static::applyFilter('find', function($self, $params, $chain) { $schema = $self::schema(); $entity = $chain->next($self, $params, $chain); foreach ($schema as $field => $options) { if (array_key_exists('date', $options)) { //format as a date $params['data'][$field] = $entity->formatDate($field, $options['date']); } } return $entity; }); public function formatdate($entity, $field, $format, $timezone = 'Asia/Colombo') { $dt = new \DateTime(); $tz = new \DateTimeZone($timezone); $dt->setTimestamp($entity->$field); $dt->setTimezone($tz); return $dt->format($format); } ?> This doesn't seem to be working. When I execute a find all, this filter seems to get hit twice. The first time, $entity contains a count() of the results and only on the second hit does it contain the Records object. What am I doing wrong? How do I alter this so that simply doing <?= $tag->created; ?> in my view will format the date the way I want? This, essentially, needs to be an 'after filter', of sorts. EDIT If I can find a way to access the current model entity object (not the full namespaced path, $self contains that), I can probably solve my problem.

    Read the article

  • SQL version control methodology

    - by Tom H.
    There are several questions on SO about version control for SQL and lots of resources on the web, but I can't find something that quite covers what I'm trying to do. First off, I'm talking about a methodology here. I'm familiar with the various source control applications out there and I'm familiar with tools like Red Gate's SQL Compare, etc. and I know how to write an application to check things in and out of my source control system automatically. If there is a tool which would be particularly helpful in providing a whole new methodology or which have a useful and uncommon functionality then great, but for the tasks mentioned above I'm already set. The requirements that I'm trying to meet are: The database schema and look-up table data are versioned DML scripts for data fixes to larger tables are versioned A server can be promoted from version N to version N + X where X may not always be 1 Code isn't duplicated within the version control system - for example, if I add a column to a table I don't want to have to make sure that the change is in both a create script and an alter script The system needs to support multiple clients who are at various versions for the application (trying to get them all up to within 1 or 2 releases, but not there yet) Some organizations keep incremental change scripts in their version control and to get from version N to N + 3 you would have to run scripts for N-N+1 then N+1-N+2 then N+2-N+3. Some of these scripts can be repetitive (for example, a column is added but then later it is altered to change the data type). We're trying to avoid that repetitiveness since some of the client DBs can be very large, so these changes might take longer than necessary. Some organizations will simply keep a full database build script at each version level then use a tool like SQL Compare to bring a database up to one of those versions. The problem here is that intermixing DML scripts can be a problem. Imagine a scenario where I add a column, use a DML script to fill said column, then in a later version that column name is changed. Perhaps there is some hybrid solution? Maybe I'm just asking for too much? Any ideas or suggestions would be greatly appreciated though. If the moderators think that this would be more appropriate as a community wiki, please let me know. Thanks!

    Read the article

  • Disregard a particular TD element

    - by RussellDias
    Below I have the code that allows me to edit a table row inline. However it edits ALL of the TDs within that row. My problem, along with the code, are stated below. Any help is appreciated. <tbody> <tr> <th scope="row">Test</th> <td class="amount">$124</td> <td class="amount" id="" >$154</td> <td class="diff">- 754</td> </tr> </tbody> The above table is just a sample. What I have been trying to accomplish is, to simply edit the TDs within that particular row, but I need it to disregard the diff TD. I'm fairly new to jQuery and have got the following code via the help of a jQuery book. $(document).ready(function() { TABLE.formwork('#current-expenses'); }); var TABLE = {}; TABLE.formwork = function(table){ var $tables = $(table); $tables.each(function () { var _table = $(this); _table.find('thead tr').append($('<th class="edit">&nbsp;</th>')); _table.find('tbody tr').append($('<td class="edit"><input type="button" value="Edit"/></td>')) }); $tables.find('.edit :button').live('click', function(e) { TABLE.editable(this); e.preventDefault(); }); } TABLE.editable = function(button) { var $button = $(button); var $row = $button.parents('tbody tr'); var $cells = $row.children('td').not('.edit'); if($row.data('flag')) { // in edit mode, move back to table // cell methods $cells.each(function () { var _cell = $(this); _cell.html(_cell.find('input').val()); }) $row.data('flag',false); $button.val('Edit'); } else { // in table mode, move to edit mode // cell methods $cells.each(function() { var _cell = $(this); _cell.data('text', _cell.html()).html(''); if($('td.diff')){ var $input = $('<input type="text" />') .val(_cell.data('text')) .width(_cell.width() - 16); _cell.append($input); } }) $row.data('flag', true); $button.val('Save'); } } I have attempted to alter the code so that it would disregard the diff class TD, but have had no luck so far.
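
    If the intent is simply to leave the diff cell alone, the filter can be widened where the cells are selected; a one-line sketch against the code above:

        // skip both the edit button cell and the computed diff cell
        var $cells = $row.children('td').not('.edit, .diff');

    The if ($('td.diff')) check inside the edit branch then becomes unnecessary; as written it is always truthy anyway, because even an empty jQuery object is an object.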

    Read the article

  • Database users in the Oracle Utilities Application Framework

    - by Anthony Shorten
    I mentioned the product database users fleetingly in the last blog post and they deserve a better mention. This applies to all versions of the Oracle Utilities Application Framework. The Oracle Utilities Application Framework uses up to three users initially as part of the base operations of the product. The type of database supported (the framework supports Oracle, IBM DB2 and Microsoft SQL Server) dictates the number of users used and their permissions. For publishing brevity I will outline what is available for the Oracle database and, in summary, mention where it differs for the other database supported. For Oracle database customers we ship three distinct database users: Administration User (SPLADM or CISADM by default) - This is the database user that actually owns the schema. This user is not used by the product to do any DML (Data Manipulation Language) SQL other than that is necessary for maintenance of the database. This database user performs all the DCL (Data Control Language) and DDL (Data Definition Language) against the database. It is typically reserved for Database Administration use only. Product Read Write User (SPLUSER or CISUSER by default) - This is the database user used by the product itself to execute DML (Data Manipulation Language) statements against the schema owned by the Administration user. This user has the appropriate read and write permission to objects within the schema owned by the Administration user. For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user. Product Read User (SPLREAD or CISREAD by default) - This is the database that has read only permission to the schema owned by the Administration user. It is used for reporting or any part of the product or interface that requires read permissions to the database (for example, products that have ConfigLab and Archiving use this user for remote access). For databases such as DB2 and SQL Server we may not create this user but use other DCL (Data Control Language) statements and facilities to simulate this user. You may notice the words by default in the list above. The values supplied with the installer are the default and can be changed to what the site standard or implementation wants to use (as long as they conform to the standards supported by the underlying database). You can even create multiples of each within the same database and pointing to same schema. To manage the permissions for the users, there is a utility provided with the installation (oragensec (Oracle), db2gensec (DB2) or msqlgensec (SQL Server)) that generates the security definitions for the above users. That can be executed a number of times for each schema to give users appropriate permissions. For example, it is possible to define more than one read/write User to access the database. This is a common technique used by implementations to have a different user per access mode (to separate online and batch). In fact you can also allocate additional security (such as resource profiles in Oracle) to limit the impact of specific users at the database. To facilitate users and permissions, in Oracle for example, we create a CISREAD role (read only role) and a CISUSER role (read write role) that can be allocated to the appropriate database user. When the security permissions utility, oragensec in this case, is executed it uses the role to determine the permissions. 
To give you a case study, my underpowered laptop has multiple installations on it of multiple products but I have one database. I create a different schema for each product and each version (with my own naming convention to help me manage the databases). I create individual users on each schema and run oragensec to maintain the permissions for each appropriately. It works fine as long I have setup the userids appropriately. This means: Creating the users with the appropriate roles. I use the common CISUSER and CISREAD role across versions and across Oracle Utilities Application Framework products. Just remember to associate the CISUSER role with the database user you want to use for read/write operations and the CISREAD role with the user you wish to use for the read only operations. The role is treated as a tag to indicate the oragensec utility which appropriate permissions to assign to the user. The utilities for the other database types essentially do the same, obviously using the technology available within those databases. Run oragensec against the read write user and read only user against the appropriate administration user (I will abbreviate the user to ADM user). This ensures the right permissions are allocated to the right users for the right products. To help me there, I use the same prefix on the user name for the same product. For example, my Oracle Utilities Application Framework V4 environment has the administration user set to FW4ADM and the associated FW4USER and FW4READ as the users for the product to use. For my MWM environment I used MWMADM for the administration user and MWMUSER and MWMREAD for my associated users. You get the picture. When I run oragensec (once for each ADM user), I know what other users to associate with it. Remember to rerun oragensec against the users if I run upgrades, service packs or database based single fixes. This assures that the users are in synchronization with the ADM user. As a side note, for those who do not understand the difference between DML, DCL and DDL: DDL (Data Definition Language) - These are SQL statements that define the database schema and the structures within. SQL Statements such as CREATE and DROP are examples of DDL SQL statements. DCL (Data Control Language) - These are the SQL statements that define the database level permissions to DDL maintained objects within the database. SQL Statements such as GRANT and REVOKE are examples of DCL SQL statements. DML (Database Manipulation Language) - These are SQL statements that alter the data within the tables. SQL Statements such as SELECT, INSERT, UPDATE and DELETE are examples of DML SQL statements. Hope this has clarified the database user support. Remember in Oracle Utilities Application Framework V4 we enhanced this by also supporting CLIENT_IDENTIFIER to allow the database to still use the administration user for the main processing but make the database session more traceable.

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass from DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:

public class DynamicFoo : DynamicObject
{
    Dictionary<string, object> properties = new Dictionary<string, object>();

    public string Bar { get; set; }
    public DateTime Entered { get; set; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = null;
        if (!properties.ContainsKey(binder.Name))
            return false;
        result = properties[binder.Name];
        return true;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        properties[binder.Name] = value;
        return true;
    }
}

This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.

Strong Typing and Dynamic Casting

I can now instantiate and use DynamicFoo in a couple of different ways:

Strong Typing

DynamicFoo fooExplicit = new DynamicFoo();
var fooVar = new DynamicFoo();

These two statements are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above as it's never used. Using either of these I can access the native properties:

DynamicFoo fooExplicit = new DynamicFoo();
var fooVar = new DynamicFoo();

// static typing assignments
fooVar.Bar = "Barred!";
fooExplicit.Entered = DateTime.Now;

// echo back static values
Console.WriteLine(fooVar.Bar);
Console.WriteLine(fooExplicit.Entered);

but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me the property doesn't exist. Again, it's clearly and utterly non-dynamic.

Dynamic

dynamic fooDynamic = new DynamicFoo();

fooDynamic, on the other hand, is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:

DynamicFoo fooExplicit = new DynamicFoo();
dynamic fooDynamic = fooExplicit;

Note that dynamic typically doesn't require an explicit cast as the compiler automatically performs the cast, so there's no need to use as dynamic. Dynamic functionality works at runtime and allows the dynamic wrapper to look up and call members dynamically. 
A dynamic type will look for members to access or call in two places:
- Using the strongly typed members of the object
- Using the IDynamicMetaObjectProvider interface methods to access members
So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows a runtime determination of what action to take when a member is accessed *if* the member you are accessing does not exist on the object. Class members are checked first before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type:

dynamic fooDynamic = new DynamicFoo();

// dynamic typing assignments
fooDynamic.NewProperty = "Something new!";
fooDynamic.LastAccess = DateTime.Now;

// dynamic assigning static properties
fooDynamic.Bar = "dynamic barred";
fooDynamic.Entered = DateTime.Now;

// echo back dynamic values
Console.WriteLine(fooDynamic.NewProperty);
Console.WriteLine(fooDynamic.LastAccess);
Console.WriteLine(fooDynamic.Bar);
Console.WriteLine(fooDynamic.Entered);

The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can add members at runtime.

The Alter Ego of IDynamicObject

The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong-typing, compile-time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic you lose IntelliSense, and there's no compiler type checking on any members you apply to this instance. If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior, which accesses native members plus uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:

dynamic fooDynamic = new DynamicFoo();
fooDynamic.NewProperty = "New Property Value";

DynamicFoo foo = fooDynamic;
foo.Bar = "Barred";

Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify as dynamic or as DynamicFoo.

Moral of the Story

This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores and expose them as an object interface. 
You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance (to access the well-known, strongly typed properties) and the dynamic reference (for the dynamic properties added at runtime). Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code. The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores to an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject.

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, .NET
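As an editor's illustration of the recommendation above (not from the original post), here is a minimal sketch that assumes the DynamicFoo class defined earlier is compiled into the same project; the property name RuntimeNote is purely illustrative. It keeps a strongly typed reference for the compile-time members and a dynamic reference to the same instance for the members added at runtime:

using System;

class Program
{
    static void Main()
    {
        // Strongly typed reference: compile-time checked, no dynamic dispatch overhead.
        DynamicFoo foo = new DynamicFoo();
        foo.Bar = "Barred";
        foo.Entered = DateTime.Now;

        // Dynamic reference to the *same* instance: members that don't exist on the
        // class are routed through TrySetMember/TryGetMember into the dictionary.
        dynamic dynFoo = foo;
        dynFoo.RuntimeNote = "added at runtime";   // hypothetical member name

        Console.WriteLine(foo.Bar);                // static binding
        Console.WriteLine(dynFoo.RuntimeNote);     // dynamic binding
    }
}

Only the members that genuinely have to be late-bound go through the dynamic reference, which keeps the hot path on cheap static binding.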

    Read the article

  • How can I get penetration depth from Minkowski Portal Refinement / Xenocollide?

    - by Raven Dreamer
    I recently got an implementation of Minkowski Portal Refinement (MPR) successfully detecting collision. Even better, my implementation returns a good estimate (local minimum) direction for the minimum penetration depth. So I took a stab at adjusting the algorithm to return the penetration depth in an arbitrary direction, and was modestly successful - my altered method works splendidly for face-edge collision resolution! What it doesn't currently do, is correctly provide the minimum penetration depth for edge-edge scenarios, such as the case on the right: What I perceive to be happening, is that my current method returns the minimum penetration depth to the nearest vertex - which works fine when the collision is actually occurring on the plane of that vertex, but not when the collision happens along an edge. Is there a way I can alter my method to return the penetration depth to the point of collision, rather than the nearest vertex? Here's the method that's supposed to return the minimum penetration distance along a specific direction: public static Vector3 CalcMinDistance(List<Vector3> shape1, List<Vector3> shape2, Vector3 dir) { //holding variables Vector3 n = Vector3.zero; Vector3 swap = Vector3.zero; // v0 = center of Minkowski sum v0 = Vector3.zero; // Avoid case where centers overlap -- any direction is fine in this case //if (v0 == Vector3.zero) return Vector3.zero; //always pass in a valid direction. // v1 = support in direction of origin n = -dir; //get the differnce of the minkowski sum Vector3 v11 = GetSupport(shape1, -n); Vector3 v12 = GetSupport(shape2, n); v1 = v12 - v11; //if the support point is not in the direction of the origin if (v1.Dot(n) <= 0) { //Debug.Log("Could find no points this direction"); return Vector3.zero; } // v2 - support perpendicular to v1,v0 n = v1.Cross(v0); if (n == Vector3.zero) { //v1 and v0 are parallel, which means //the direction leads directly to an endpoint n = v1 - v0; //shortest distance is just n //Debug.Log("2 point return"); return n; } //get the new support point Vector3 v21 = GetSupport(shape1, -n); Vector3 v22 = GetSupport(shape2, n); v2 = v22 - v21; if (v2.Dot(n) <= 0) { //can't reach the origin in this direction, ergo, no collision //Debug.Log("Could not reach edge?"); return Vector2.zero; } // Determine whether origin is on + or - side of plane (v1,v0,v2) //tests linesegments v0v1 and v0v2 n = (v1 - v0).Cross(v2 - v0); float dist = n.Dot(v0); // If the origin is on the - side of the plane, reverse the direction of the plane if (dist > 0) { //swap the winding order of v1 and v2 swap = v1; v1 = v2; v2 = swap; //swap the winding order of v11 and v12 swap = v12; v12 = v11; v11 = swap; //swap the winding order of v11 and v12 swap = v22; v22 = v21; v21 = swap; //and swap the plane normal n = -n; } /// // Phase One: Identify a portal while (true) { // Obtain the support point in a direction perpendicular to the existing plane // Note: This point is guaranteed to lie off the plane Vector3 v31 = GetSupport(shape1, -n); Vector3 v32 = GetSupport(shape2, n); v3 = v32 - v31; if (v3.Dot(n) <= 0) { //can't enclose the origin within our tetrahedron //Debug.Log("Could not reach edge after portal?"); return Vector3.zero; } // If origin is outside (v1,v0,v3), then eliminate v2 and loop if (v1.Cross(v3).Dot(v0) < 0) { //failed to enclose the origin, adjust points; v2 = v3; v21 = v31; v22 = v32; n = (v1 - v0).Cross(v3 - v0); continue; } // If origin is outside (v3,v0,v2), then eliminate v1 and loop if (v3.Cross(v2).Dot(v0) < 0) { //failed to enclose 
the origin, adjust points; v1 = v3; v11 = v31; v12 = v32; n = (v3 - v0).Cross(v2 - v0); continue; } bool hit = false; /// // Phase Two: Refine the portal int phase2 = 0; // We are now inside of a wedge... while (phase2 < 20) { phase2++; // Compute normal of the wedge face n = (v2 - v1).Cross(v3 - v1); n.Normalize(); // Compute distance from origin to wedge face float d = n.Dot(v1); // If the origin is inside the wedge, we have a hit if (d > 0 ) { //Debug.Log("Do plane test here"); float T = n.Dot(v2) / n.Dot(dir); Vector3 pointInPlane = (dir * T); return pointInPlane; } // Find the support point in the direction of the wedge face Vector3 v41 = GetSupport(shape1, -n); Vector3 v42 = GetSupport(shape2, n); v4 = v42 - v41; float delta = (v4 - v3).Dot(n); float separation = -(v4.Dot(n)); if (delta <= kCollideEpsilon || separation >= 0) { //Debug.Log("Non-convergance detected"); //Debug.Log("Do plane test here"); return Vector3.zero; } // Compute the tetrahedron dividing face (v4,v0,v1) float d1 = v4.Cross(v1).Dot(v0); // Compute the tetrahedron dividing face (v4,v0,v2) float d2 = v4.Cross(v2).Dot(v0); // Compute the tetrahedron dividing face (v4,v0,v3) float d3 = v4.Cross(v3).Dot(v0); if (d1 < 0) { if (d2 < 0) { // Inside d1 & inside d2 ==> eliminate v1 v1 = v4; v11 = v41; v12 = v42; } else { // Inside d1 & outside d2 ==> eliminate v3 v3 = v4; v31 = v41; v32 = v42; } } else { if (d3 < 0) { // Outside d1 & inside d3 ==> eliminate v2 v2 = v4; v21 = v41; v22 = v42; } else { // Outside d1 & outside d3 ==> eliminate v1 v1 = v4; v11 = v41; v12 = v42; } } } return Vector3.zero; } }

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

max memory / # of buffers = buffer size

If it was that simple I wouldn’t be writing this post.

I’ll take “boundary” for 64K, Alex

For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs. Note: this test was run on a 2 core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)

input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
637              5                      130867               654335
638              5                      130867               654335
639              5                      130867               654335
640              5                      196403               982015
641              5                      196403               982015
642              5                      196403               982015

This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

196403 – 130867 = 65536 bytes
65536 / 1024 = 64 KB

The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example. 
start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
1                      191                  NULL                   NULL                 NULL
192                    383                  3                      130867               392601
384                    575                  3                      196403               589209
576                    767                  3                      261939               785817
768                    959                  3                      327475               982425
960                    1151                 3                      393011               1179033
1152                   1343                 3                      458547               1375641
1344                   1535                 3                      524083               1572249
1536                   1727                 3                      589619               1768857
1728                   1919                 3                      655155               1965465
1920                   2111                 3                      720691               2162073
2112                   2303                 3                      786227               2358681
2304                   2495                 3                      851763               2555289
2496                   2687                 3                      917299               2751897
2688                   2879                 3                      982835               2948505
2880                   3071                 3                      1048371              3145113
3072                   3263                 3                      1113907              3341721
3264                   3455                 3                      1179443              3538329
3456                   3647                 3                      1244979              3734937
3648                   3839                 3                      1310515              3931545
3840                   4031                 3                      1376051              4128153
4032                   4096                 3                      1441587              4324761

As you can see, there are 21 “steps” within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

Max approximates True as memory approaches 64K

The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really only refers to the event session buffer memory), but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (the 576–767 KB row in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn’t likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size, you’ll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64 KB times the number of buffers) larger than the max_memory value you might think you’re getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode. 
DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint) DECLARE @buf_size int, @part_mode varchar(8) SET @buf_size = 1 -- Set to the begining of your max_memory range (KB) SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate WHILE @buf_size <= 4096 -- Set to the end of your max_memory range (KB) BEGIN     BEGIN TRY         IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')             DROP EVENT SESSION buffer_size_test ON SERVER         DECLARE @session nvarchar(max)         SET @session = 'create event session buffer_size_test on server                         add event sql_statement_completed                         add target ring_buffer                         with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'         EXEC sp_executesql @session         SET @session = 'alter event session buffer_size_test on server                         state = start'         EXEC sp_executesql @session         INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)             SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'     END TRY     BEGIN CATCH         INSERT @buf_size_output (input_memory_kb)             SELECT @buf_size     END CATCH     SET @buf_size = @buf_size + 1 END DROP EVENT SESSION buffer_size_test ON SERVER SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb, total_regular_buffers, regular_buffer_size, total_buffer_size from @buf_size_output group by total_regular_buffers, regular_buffer_size, total_buffer_size Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike
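As an editor's illustration (not part of the original post), the stair-step behavior described above can be approximated in a few lines of C#. The rounding rule here is inferred from the sample DMV output: each buffer steps up to the next 64 KB boundary (values landing exactly on a boundary appear to step up as well), and the size reported by dm_xe_sessions is a couple of hundred bytes below that boundary due to internal overhead, so treat this as an upper-bound sketch rather than the engine's exact allocation logic.

using System;

class BufferSizeEstimator
{
    // Rough estimate of the per-buffer size (in bytes) for a given MAX_MEMORY (KB)
    // and buffer count, based on the 64 KB stepping described in the post.
    static long EstimateRegularBufferSize(long maxMemoryKb, int bufferCount)
    {
        const long boundary = 64 * 1024;
        long perBufferBytes = (maxMemoryKb * 1024) / bufferCount;

        if (perBufferBytes < boundary)
            throw new ArgumentException("Below the 64 KB per-buffer minimum; CREATE EVENT SESSION rejects this value.");

        // Step up to the next 64 KB boundary; exact multiples also step up,
        // judging from the 640 KB row in the sample output above.
        return ((perBufferBytes / boundary) + 1) * boundary;
    }

    static void Main()
    {
        // 2-core machine with per_cpu partitioning => 5 buffers, as in the post's sample.
        foreach (long kb in new long[] { 637, 639, 640, 642 })
        {
            long size = EstimateRegularBufferSize(kb, 5);
            Console.WriteLine($"max_memory = {kb} KB -> ~{size} bytes per buffer ({size / 1024} KB boundary)");
        }
    }
}

Running this reproduces the jump shown in the first table (roughly 128 KB per buffer at 637–639 KB, roughly 192 KB per buffer from 640 KB), with the DMV-reported regular_buffer_size sitting slightly below each boundary.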

    Read the article

< Previous Page | 115 116 117 118 119 120 121 122 123 124  | Next Page >