Search Results

Search found 12215 results on 489 pages for 'identity column'.


  • Upcoming Directory Services Live Webcast - Improve Time-to-Market and Reduce Cost with Oracle Directory Services

    - by mark.wilcox
    We're doing another live webcast on May 27 - here are the details: Live Webcast: Improve Time-to-Market and Reduce Cost with Oracle Directory Services. Event Date: Thursday, May 27, 2010. Event Time: 10:00 AM Pacific Standard Time / 1:00 PM Eastern Standard Time. Organizations can spend up to 60% of their IT budgets on operational activities.
    • Are you being asked to do more, with fewer resources?
    • Have you had to lead a cost-cutting exercise in your IT department?
    • Do you have licenses for software and wonder whether you are getting the most out of those resources?
    • Do you want to be an Identity Hero inside your organization?
    Oracle brings leadership in Directory Services to help organizations identify ways to leverage Oracle Virtual Directory to reduce costs in their enterprise. This presentation will explore ways to use Oracle Virtual Directory to federate faster and to create architectures that meet aggressive time constraints for identity projects or mergers and acquisitions in a cost-conscious environment. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • Big level objects collision system for 2d game

    - by Aristarhys
    I read through many variants today and picked up some general knowledge, so here are the steps of my thinking, in pictures (horrible paint.net ones). We need a grid system so that we only check things nearby, performing a cheap test to rule out the expensive one, and only at the end doing a deep check such as per-pixel collision. Step 1 - Let p1 and p2 be sprites. First we just do a circle collision test - because of the large distance between p1 and p2 it fails, so of course we don't need to test more deeply. But if we have not 2 but 20 objects, why circle-test things that are far outside our view at all? Step 2 - Add a basic column system: now we don't bother with p2 if it's in a column far from p1's column, so we skip even the circle test. But p3 is in the same column, so we do the circle test, which of course fails. Step 3 - Improve the column system into a grid system with a cell size matching the p1, p2, p3 collision boxes, so we also cut out things far above or below p1. And this all works great until the BIG objects arrive - platforms of some kind that are much bigger than a grid cell. The circle test succeeds, but the deep check against the whole big object fails, and that's the part I can't get. How do I store the grid position of a big object? As four grid coordinates, one per corner? And if one of them is close to p1, do a circle check against the centre of the big object, then a deep check if that succeeds? Am I doing it wrong? My possible solution (see the sketch below):
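    A common way to handle oversized objects (not necessarily what the poster went on to propose) is to register each object in every grid cell its bounding box overlaps, and deduplicate while querying. A minimal C# sketch, with hypothetical GridObject and CellSize names:

    using System.Collections.Generic;

    // Hypothetical type for illustration; the post doesn't define one.
    class GridObject
    {
        public float Left, Top, Width, Height; // axis-aligned bounds
    }

    class SpatialGrid
    {
        const float CellSize = 64f;
        readonly Dictionary<(int, int), List<GridObject>> cells =
            new Dictionary<(int, int), List<GridObject>>();

        // A big platform simply occupies several cells; a small sprite lands in one.
        public void Insert(GridObject o)
        {
            foreach (var key in CellsFor(o))
            {
                if (!cells.TryGetValue(key, out var list))
                    cells[key] = list = new List<GridObject>();
                list.Add(o);
            }
        }

        // Candidates to circle-test against 'o': everything sharing a cell with it,
        // deduplicated because a big object appears in many cells.
        public IEnumerable<GridObject> Query(GridObject o)
        {
            var seen = new HashSet<GridObject>();
            foreach (var key in CellsFor(o))
                if (cells.TryGetValue(key, out var list))
                    foreach (var other in list)
                        if (other != o && seen.Add(other))
                            yield return other;
        }

        IEnumerable<(int, int)> CellsFor(GridObject o)
        {
            int x0 = (int)(o.Left / CellSize), x1 = (int)((o.Left + o.Width) / CellSize);
            int y0 = (int)(o.Top / CellSize), y1 = (int)((o.Top + o.Height) / CellSize);
            for (int x = x0; x <= x1; x++)
                for (int y = y0; y <= y1; y++)
                    yield return (x, y);
        }
    }

    The circle test then runs once per returned candidate, so a big object is tested a single time no matter how many cells it spans.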

    Read the article

  • Layout Columns - Equal Height

    - by Kyle
    I remember first starting out using tables for layouts and learning that I should not be doing that. I am working on a new site and cannot seem to do equal-height columns without using tables. Here is an example of the attempt with div tags: <div class="row"> <div class="column">column1</div> <div class="column">column2</div> <div class="column">column3</div> <div style="clear:both"></div> </div> What I tried there was making the columns float left and setting their widths to 33%, which works fine; I use the clear:both div so that the row would be the size of the biggest column, but the columns end up different heights based on how much content they have. I have found many fixes, which mostly involve CSS hacks that just make it look right, but that's not what I want. I thought of just doing it in JavaScript, but then it would look different for those who choose to disable their JavaScript. The only true way of doing it that I can think of is using tables, since the cells all have equal heights in the same row - but I know it's bad to use tables. After searching forever I then came across this: http://intangiblestyle.com/lab/equal-height-columns-with-css/ What it does seems to be exactly the same as tables, since it's just setting the display exactly like tables. Would using that be just as bad as using tables? I honestly can't find anything else that I could do. Edit @Su: I have looked into "faux columns" and do not think that is what I want. I think I would be able to implement better designs for my site using the display:table method (see the sketch below). I posted this question because I just wasn't sure if I should, since I have always heard it's bad to use tables in website layouts.
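    For reference, a minimal version of the display:table technique the linked article describes, applied to the markup above (the float and the clearing div are dropped, since cells in one table row always share the height of the tallest cell):

    <style>
      .row    { display: table; width: 100%; table-layout: fixed; }
      .column { display: table-cell; width: 33%; }
    </style>
    <div class="row">
      <div class="column">column1</div>
      <div class="column">column2 with much more content, forcing the row taller</div>
      <div class="column">column3</div>
    </div>

    Semantically this is still divs, so it avoids the main objection to layout tables; the practical cost is browser support, since display:table-cell needs IE8 or later.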

    Read the article

  • How to Create a Tree-Type CVL in Content Server (UCM)

    - by rajeev.y.ranjan-oracle
    Steps to create a tree choice list:
    1) Create a table "tblStates" with columns "stateID" and "stateName" (click on "Add Recommended").
    2) Create another table "tblCities" with columns "cityID", "stateID" and "cityName".
    3) Create two views on these tables, namely "tblstateview" and "tblcityview".
    4) In "tblstateview" add two rows, with values JH and MH in the stateID column and Jharkhand and Maharashtra in the stateName column.
    5) Similarly, in "tblcityview" add two rows, with values BO and RA in the cityID column, JH and MH in the stateID column, and Bokaro and Mumbai in the cityName column.
    6) Create a relationship with parent info "tblStates"/stateID and child info "tblCities"/stateID.
    7) Create metadata by the name "Newtest": enable the option list, go to Configure, select "Use tree", and click Go to edit the definition.
    8) Tree definition at level 1: a) choose "tblstateview", b) choose the relation "newstatecity". At level 2: a) choose "tblcityview".
    Log out of the Native UI and Content UI and test the tree created by the name "Newtest".
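    As an illustration only, the two backing tables from steps 1-5 would look roughly like the following SQL; in UCM they are actually created through the admin screens, and the sample rows mirror the values above, so treat this DDL as a sketch:

    CREATE TABLE tblStates (
      stateID   VARCHAR2(10) PRIMARY KEY,
      stateName VARCHAR2(100)
    );
    CREATE TABLE tblCities (
      cityID    VARCHAR2(10) PRIMARY KEY,
      stateID   VARCHAR2(10) REFERENCES tblStates (stateID),  -- the parent/child relation from step 6
      cityName  VARCHAR2(100)
    );
    INSERT INTO tblStates VALUES ('JH', 'Jharkhand');
    INSERT INTO tblStates VALUES ('MH', 'Maharashtra');
    INSERT INTO tblCities VALUES ('BO', 'JH', 'Bokaro');
    INSERT INTO tblCities VALUES ('RA', 'MH', 'Mumbai');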

    Read the article

  • Is it reasonable to require passwords when users sign into my application through social media accounts?

    - by BrMcMullin
    I've built an application that requires users to authenticate with one or more social media accounts from either Facebook, Twitter, or LinkedIn. Edit Once the user has signed in, an 'identity' for them is maintained in the system, to which all content they create is associated. A user can associate one account from each of the supported providers with this identity. I'm concerned about how to protect potential users from connecting the wrong account to their identity in our application. /Edit There are two main scenarios that could happen: User has multiple accounts on one of the three providers, and is not logged into the one s/he desires. User comes to a public or shared computer, in which the previous user left themselves logged into one of the three providers. While I haven't encountered many examples of this myself, I'm considering requiring users to password authenticate with Facebook, Twitter, and LinkedIn whenever they are signing into our application. Is that a reasonable approach, or are there reasons why many other sites and applications don't challenge users to provide a user name and password when authorizing applications to access their social media accounts? Thanks in advance! Edit A clarification, I'm not intending to store anyone's user name and password. Rather, when a user clicks the button to sign in, with Facebook as an example, I'm considering showing an "Is this you?" type window. The idea is that a user would respond to the challenge by either signing into Facebook on the account fetched from the oauth hash, or would sign into the correct account and the oauth callback would run with the new oauth hash data.

    Read the article

  • Can someone explain the (reasons for the) implications of column- vs row-major order in multiplication/concatenation?

    - by sebf
    I am trying to learn how to construct view and projection matrices, and keep running into difficulties in my implementation owing to my confusion about the two standards for matrices. I know how to multiply a matrix, and I can see that transposing before multiplication would completely change the result, hence the need to multiply in a different order. What I don't understand, though, is what's meant by it being only a 'notational convention' - the authors of the articles here and here appear to assert that it makes no difference to how the matrix is stored or transferred to the GPU, but on the second page that matrix is clearly not equivalent to how it would be laid out in memory for row-major; and if I look at a populated matrix in my program I see the translation components occupying the 4th, 8th and 12th elements. Given that "post-multiplying with column-major matrices produces the same result as pre-multiplying with row-major matrices", why, in the following snippet of code: Matrix4 r = t3 * t2 * t1; Matrix4 r2 = t1.Transpose() * t2.Transpose() * t3.Transpose(); does r != r2, and why does pos3 != pos for: Vector4 pos = wvpM * new Vector4(0f, 15f, 15f, 1); Vector4 pos3 = wvpM.Transpose() * new Vector4(0f, 15f, 15f, 1); Does the multiplication process change depending on whether the matrices are row- or column-major, or is it just the order (for an equivalent effect)? One thing that isn't helping this become any clearer is that when provided to DirectX, my column-major WVP matrix is used successfully to transform vertices with the HLSL call mul(vector, matrix), which should result in the vector being treated as row-major - so how can the column-major matrix provided by my math library work?
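    Both observations follow from the transpose identity for matrix products; the multiplication algorithm itself never changes, only the order and the interpretation of the vectors. As a worked check:

    \[(t_3\,t_2\,t_1)^{\mathsf T} = t_1^{\mathsf T}\,t_2^{\mathsf T}\,t_3^{\mathsf T}\]

    so r2 is the transpose of r rather than r itself; the two only coincide when the product happens to be symmetric. Likewise

    \[(M\,\vec v)^{\mathsf T} = \vec v^{\,\mathsf T} M^{\mathsf T}\]

    so pre-multiplying a column vector by the transposed matrix corresponds to post-multiplying a row vector by the original matrix - exactly the column-major/row-major convention swap, and why pos and pos3 differ. HLSL's mul(vector, matrix) treats the vector as a row vector, which is why a matrix stored in the "other" convention can still transform vertices correctly there.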

    Read the article

  • What is the best way to slide a panel in WPF?

    - by Kris Erickson
    I have a fairly simple UserControl that I have made (pardon my XAML, I am just learning WPF) and I want to slide it off the screen. To do so I am animating a TranslateTransform (I also tried making the panel the child of a Canvas and animating the X position, with the same results), but the panel moves very jerkily, even on a fairly fast new computer. What is the best way to slide it in and out (preferably with KeySplines so that it moves with inertia) without the jerkiness? I only have 8 buttons on the panel, so I didn't think it would be too much of a problem. Here is the XAML I am using; it runs fine in Kaxaml, but it is very jerky and slow (as well as being jerky and slow when run compiled in a WPF app).
    <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                 Width="1002" Height="578">
      <UserControl.Resources>
        <Style TargetType="Button">
          <Setter Property="Control.Padding" Value="4"/>
          <Setter Property="Control.Margin" Value="10"/>
          <Setter Property="Control.Template">
            <Setter.Value>
              <ControlTemplate TargetType="Button">
                <Grid Name="backgroundGrid" Width="210" Height="210" Background="#00FFFFFF">
                  <Grid.BitmapEffect>
                    <BitmapEffectGroup>
                      <DropShadowBitmapEffect x:Name="buttonDropShadow" ShadowDepth="2"/>
                      <OuterGlowBitmapEffect x:Name="buttonGlow" GlowColor="#A0FEDF00" GlowSize="0"/>
                    </BitmapEffectGroup>
                  </Grid.BitmapEffect>
                  <Border x:Name="background" Margin="1,1,1,1" CornerRadius="15">
                    <Border.Background>
                      <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                        <LinearGradientBrush.GradientStops>
                          <GradientStop Offset="0" Color="#FF0062B6"/>
                          <GradientStop Offset="1" Color="#FF0089FE"/>
                        </LinearGradientBrush.GradientStops>
                      </LinearGradientBrush>
                    </Border.Background>
                  </Border>
                  <Border Margin="1,1,1,0" BorderBrush="#FF000000" BorderThickness="1.5" CornerRadius="15"/>
                  <ContentPresenter HorizontalAlignment="Center" Margin="{TemplateBinding Control.Padding}"
                                    VerticalAlignment="Center" Content="{TemplateBinding ContentControl.Content}"
                                    ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}"/>
                </Grid>
              </ControlTemplate>
            </Setter.Value>
          </Setter>
        </Style>
      </UserControl.Resources>
      <Canvas>
        <Grid x:Name="Panel1" Height="578" Canvas.Left="0" Canvas.Top="0">
          <Grid.RenderTransform>
            <TransformGroup>
              <TranslateTransform x:Name="panelTranslate" X="0" Y="0"/>
            </TransformGroup>
          </Grid.RenderTransform>
          <Grid.RowDefinitions>
            <RowDefinition Height="287"/>
            <RowDefinition Height="287"/>
          </Grid.RowDefinitions>
          <Grid.ColumnDefinitions>
            <ColumnDefinition x:Name="Panel1Col1"/>
            <ColumnDefinition x:Name="Panel1Col2"/>
            <ColumnDefinition x:Name="Panel1Col3"/>
            <ColumnDefinition x:Name="Panel1Col4"/>
            <!-- Set width to 0 to hide a column -->
          </Grid.ColumnDefinitions>
          <Button x:Name="Panel1Product1" Grid.Column="0" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center">
            <Button.Triggers>
              <EventTrigger RoutedEvent="Button.Click" SourceName="Panel1Product1">
                <EventTrigger.Actions>
                  <BeginStoryboard>
                    <Storyboard>
                      <DoubleAnimation BeginTime="00:00:00.6" Duration="0:0:3" From="0"
                                       Storyboard.TargetName="panelTranslate"
                                       Storyboard.TargetProperty="X" To="-1000"/>
                    </Storyboard>
                  </BeginStoryboard>
                </EventTrigger.Actions>
              </EventTrigger>
            </Button.Triggers>
          </Button>
          <Button x:Name="Panel1Product2" Grid.Column="0" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/>
          <Button x:Name="Panel1Product3" Grid.Column="1" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/>
          <Button x:Name="Panel1Product4" Grid.Column="1" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/>
          <Button x:Name="Panel1Product5" Grid.Column="2" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/>
          <Button x:Name="Panel1Product6" Grid.Column="2" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/>
          <Button x:Name="Panel1Product7" Grid.Column="3" Grid.Row="0" HorizontalAlignment="Center" VerticalAlignment="Center"/>
          <Button x:Name="Panel1Product8" Grid.Column="3" Grid.Row="1" HorizontalAlignment="Center" VerticalAlignment="Center"/>
        </Grid>
      </Canvas>
    </UserControl>

    Read the article

  • CSS issue with elements spanning columns

    - by bigFoot
    Hi folks. Overview: I'm trying to create a relatively simple page layout, detailed below, and running into problems no matter how I try to approach it.
    Concept (unit widths quoted):
    - A standard-size-block layout: each content block is 240px square with 5px of margin around it.
    - A left column of fixed width of 1 unit (245px - 1 block plus margin to the left). No problems here.
    - A right column of variable width to fill the remaining space. No problems here either.
    - In the left column, a number of 1unit x 1unit blocks fixed down the column, plus some blank space at the top - again, not a problem.
    - In the right column, a number of free-floating blocks of standard unit sizes which float around and fill the space given to them by the browser window. No problems here.
    - Lastly, a single element, 2 units wide, which sits half in the left column and half in the right column, and which the blocks in the right column still float around. Here be dragons.
    Please see here for a diagram: http://is.gd/bPUGI
    Problem: No matter how I approach this, it goes wrong. Below is the code for my existing attempt at a solution. My current problem is that the 1x1 blocks on the right do not respect the 2x1 block, and as a result half of the 2x1 block is overwritten by a 1x1 block in the right-hand column. I'm aware that this is almost certainly an issue with position: absolute taking things out of flow; however, I can't find a way around that which doesn't just throw up another problem instead.
    Code:
    <html>
    <head>
      <title>wat</title>
      <style type="text/css">
        body { background: #ccc; color: #000; padding: 0px 5px 5px 0px; margin: 0px; }
        #leftcol { width: 245px; margin-top: 490px; position: absolute; }
        #rightcol { left: 245px; position: absolute; }
        #bigblock { float: left; position: relative; margin-top: -240px; background: red; }
        .cblock { margin: 5px 0px 0px 5px; float: left; overflow: hidden; display: block; background: #fff; }
        .w1 { width: 240px; }
        .w2 { width: 485px; }
        .l1 { height: 240px; }
      </style>
    </head>
    <body>
      <div class="cblock w2 l1" id="bigblock">
        <h1>DRAGONS</h1>
        <p>Here be they</p>
      </div>
      <div id="leftcol">
        <div class="cblock w1 l1"><h1>Left 1</h1><p>1x1 block</p></div>
      </div>
      <div id="rightcol">
        <div class="cblock w1 l1"><h1>Right 1</h1><p>1x1 block</p></div>
        <div class="cblock w1 l1"><h1>Right 2</h1><p>1x1 block</p></div>
        <div class="cblock w1 l1"><h1>Right 3</h1><p>1x1 block</p></div>
        <div class="cblock w1 l1"><h1>Right 4</h1><p>1x1 block</p></div>
        <div class="cblock w1 l1"><h1>Right 5</h1><p>1x1 block</p></div>
        <div class="cblock w1 l1"><h1>Right 6</h1><p>1x1 block</p></div>
        <div class="cblock w1 l1"><h1>Right 7</h1><p>1x1 block</p></div>
      </div>
    </body>
    </html>
    Constraints: One final note - I need cross-browser compatibility, though I'm more than happy to enforce this with JS if necessary. That said, if a CSS-only solution exists, I'd be extremely happy. Thanks in advance!

    Read the article

  • text-align:left in the Google Android browser squeezes the text into a column about 20% of the page wide - how can this be fixed?

    - by user981220
    Hello, I have a problem with text-align: left (or align: left): no matter what I put, the text gets stuck on the left side of the page in a big narrow column. How can this be fixed? Example of the code:
    <div class="maintext02"><span><p>text</p>
    .maintext02 {
      font-family: Georgia, "Times New Roman", Times, serif;
      color: #545454;
      line-height: 25px;
      font-size: 12px;
      width: 943px;
      padding-top: 15px;
      padding-bottom: 15px;
      text-align: left;
    }
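    One likely culprit (an assumption, since the full page isn't shown) is a missing viewport declaration: without one, the Android browser lays the page out at a default virtual width and the fixed 943px block gets scaled down into a narrow strip. A minimal sketch of the usual fix, placed in the page head:

    <meta name="viewport" content="width=device-width, initial-scale=1">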

    Read the article

  • channel factory null on debug?

    - by Garrith
    When I try to invoke a GetData contract using wcf rest in wcf test client mode I get this message: The Address property on ChannelFactory.Endpoint was null. The ChannelFactory's Endpoint must have a valid Address specified. at System.ServiceModel.ChannelFactory.CreateEndpointAddress(ServiceEndpoint endpoint) at System.ServiceModel.ChannelFactory`1.CreateChannel() at System.ServiceModel.ClientBase`1.CreateChannel() at System.ServiceModel.ClientBase`1.CreateChannelInternal() at System.ServiceModel.ClientBase`1.get_Channel() at Service1Client.GetData(String value) This is the config file for the host: <system.serviceModel> <services> <service name="WcfService1.Service1" behaviorConfiguration="WcfService1.Service1Behavior"> <!-- Service Endpoints --> <endpoint address="http://localhost:26535/Service1.svc" binding="webHttpBinding" contract="WcfService1.IService1" behaviorConfiguration="webHttp" > <!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. If removed, WCF will infer an appropriate identity automatically. --> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="WcfService1.Service1Behavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> <endpointBehaviors> <behavior name="webHttp"> <webHttp/> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> </configuration> Code: [ServiceContract(Namespace = "")] public interface IService1 { //[WebInvoke(Method = "POST", UriTemplate = "Data?value={value}")] [OperationContract] [WebGet(UriTemplate = "/{value}")] string GetData(string value); [OperationContract] CompositeType GetDataUsingDataContract(CompositeType composite); // TODO: Add your service operations here } public class Service1 : IService1 { public string GetData(string value) { return string.Format("You entered: {0}", value); }
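    The exception is raised on the client side when the ChannelFactory is built without an endpoint address (for example, from a config file the test host isn't actually reading). A hedged C# sketch of constructing the REST channel explicitly, reusing the address from the host config above:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class Client
    {
        static void Main()
        {
            // Give the factory an explicit address instead of relying on config.
            var factory = new ChannelFactory<IService1>(
                new WebHttpBinding(),
                new EndpointAddress("http://localhost:26535/Service1.svc"));
            // WebGet/WebInvoke dispatch needs the web behavior on the endpoint.
            factory.Endpoint.Behaviors.Add(new WebHttpBehavior());
            IService1 client = factory.CreateChannel();
            Console.WriteLine(client.GetData("42"));
        }
    }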

    Read the article

  • NameClaimType in ClaimsIdentity from SAML

    - by object88
    I am attempting to understand the world of WIF in context of a WCF Data Service / REST / OData server. I have a hacked up version of SelfSTS that is running inside a unit test project. When the unit tests start, it kicks off a WCF service, which generates my SAML token. This is the SAML token being generated: <saml:Assertion MajorVersion="1" MinorVersion="1" ... > <saml:Conditions>...</saml:Conditions> <saml:AttributeStatement> <saml:Subject> <saml:NameIdentifier Format="EMAIL">4bd406bf-0cf0-4dc4-8e49-57336a479ad2</saml:NameIdentifier> <saml:SubjectConfirmation>...</saml:SubjectConfirmation> </saml:Subject> <saml:Attribute AttributeName="emailaddress" AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims"> <saml:AttributeValue>[email protected]</saml:AttributeValue> </saml:Attribute> <saml:Attribute AttributeName="name" AttributeNamespace="http://schemas.xmlsoap.org/ws/2005/05/identity/claims"> <saml:AttributeValue>bob</saml:AttributeValue> </saml:Attribute> </saml:AttributeStatement> <ds:Signature>...</ds:Signature> </saml:Assertion> (I know the Format of my NameIdentifier isn't really EMAIL, this is something I haven't gotten to cleaning up yet.) Inside my actual server, I put some code borrowed from Pablo Cabraro / Cibrax. This code seems to run A-OK, although I confess that I don't understand what's happening. I note that later in my code, when I need to check my identity, Thread.CurrentPrincipal.Identity is an instance of Microsoft.IdentityModel.Claims.ClaimsIdentity, which has a claim for all the attributes, plus a nameidentifier claim with the value in my NameIdentifier element in saml:Subject. It also has a property NameClaimType, which points to "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name". It would make more sense if NameClaimType mapped to nameidentifier, wouldn't it? How do I make that happen? Or am I expecting the wrong thing of the name claim? Thanks!
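    If the intent is for Identity.Name to come from the SAML NameIdentifier, WIF lets you point the token handler's name-claim requirement at that claim type rather than the default name claim. A sketch against the WIF 1.0 (Microsoft.IdentityModel) config schema - the element and attribute names here are from memory and should be verified against your WIF version:

    <microsoft.identityModel>
      <service>
        <securityTokenHandlers>
          <securityTokenHandlerConfiguration>
            <!-- Assumption: resolve Name from the nameidentifier claim -->
            <samlSecurityTokenRequirement
                nameClaimType="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" />
          </securityTokenHandlerConfiguration>
        </securityTokenHandlers>
      </service>
    </microsoft.identityModel>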

    Read the article

  • Unable to add SSH pub key for GitHub

    - by Kaushik
    I am trying to set up a new GitHub account as part of learning Ruby on Rails. My OS is Windows. I am having the following problem while trying to add my public key to the GitHub SSH public keys. When I put the key in the text area, supply a name, and click 'Add Key', I am taken to the job profile page without any confirmation that the key has been added. (I am using an SSH client GUI to generate RSA keys.) When I come back to the SSH public keys page, I see that the key is not there. I tried this multiple times... without any result. Just as a test, I tried to ssh to the GitHub account using 'ssh -v git@github.com' and here is the verbose output:
    OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
    debug1: Connecting to github.com [207.97.227.239] port 22.
    debug1: Connection established.
    debug1: identity file /c/Users/Administrator/.ssh/identity type -1
    debug1: identity file /c/Users/Administrator/.ssh/id_rsa type -1
    debug1: identity file /c/Users/Administrator/.ssh/id_dsa type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2
    debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_4.6
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-cbc hmac-md5 none
    debug1: kex: client->server aes128-cbc hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Host 'github.com' is known and matches the RSA host key.
    debug1: Found key in /c/Users/Administrator/.ssh/known_hosts:1
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey
    debug1: Next authentication method: publickey
    debug1: Trying private key: /c/Users/Administrator/.ssh/identity
    debug1: Trying private key: /c/Users/Administrator/.ssh/id_rsa
    debug1: Trying private key: /c/Users/Administrator/.ssh/id_dsa
    debug1: No more authentication methods to try.
    Permission denied (publickey).
    Also, why is it looking for the keys in /c/Users/Administrator/.ssh/? Is there a possibility of changing this default location?
    Edit: With Mozilla, I get the error message: Oops! The key has already been taken. The key is invalid. It must begin with 'ssh-rsa' or 'ssh-dss'. My key looks like:
    ---- BEGIN SSH2 PUBLIC KEY ----
    Comment: "[2048-bit rsa, Administrator@Kaushik-THINK, Sun Jan 02 2011 \
    02:40:03]"
    AAAAB3NzaC1yc2EAAAADAQABAAABAQDoA5xqJozKmAHTGh9hgmagsFOl2hdPzS8ZPV9Ta1
    xU0JiUnHef38Rvz/5oqL1wW7SjmZbc/+tCPOfU1lg3UisFXajJhek2bjJ2qsKd4Sjax2Nj
    ZMYD7djPb8rokUYSKW3bdYyJHtJH+murz04UGdCcZ8HqkMTzqlh3zAIK7SIlCy+mtAi5A/
    sm0JbqlNGHB6YrL1aWIcOSolIx2HWt8cWhlK77guT9dPgd0HT59Gn0uhO7UWGLrNdJut0x
    ian3HdvNYMUXoO/SkNlxvWRgZ1UOhaB+qf4hw5RCGcBbqP3fM4LKpurHZx4wEpgmM0e4EM
    +2PYdf46uxChNdBl7J5sZF
    ---- END SSH2 PUBLIC KEY ----
    I still can't see the key... so I'm not sure how it says it is already taken.
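    The key block shown is in the RFC 4716 (SSH2) format that GUI clients export, while GitHub only accepts the single-line OpenSSH format beginning with ssh-rsa - hence the "must begin with 'ssh-rsa' or 'ssh-dss'" error. OpenSSH can convert it, and an IdentityFile entry answers the default-location question (file names here are illustrative):

    # Convert the SSH2/RFC 4716 public key into OpenSSH's one-line format:
    ssh-keygen -i -f mykey_ssh2.pub > ~/.ssh/id_rsa.pub

    # Use a non-default private key location via ~/.ssh/config:
    Host github.com
        IdentityFile /c/keys/github_rsa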

    Read the article

  • WCF facility : Metadata publishing for this service is currently disabled

    - by cvista
    I asked this before and got nowhere, so I'm asking again as I'm now desperate! If I create a new WCF project I can browse the metadata instantly. If I try when using the WCF Facility, I get the following: "Metadata publishing for this service is currently disabled." I followed the instructions there and in a million other places and got nowhere. If I copy the contents of my facility service into the newly created project, it complains that aspNetCompatibilityEnabled isn't enabled. So I enable it, and then mex is disabled again and I get "Metadata publishing for this service is currently disabled" again! This is driving me crazy - I have tried to follow every example on the web! Here is my current configuration - there is no client yet:
    <system.serviceModel>
      <services>
        <service name="IbzStar.WebServices.UserServices" behaviorConfiguration="ServiceBehavior">
          <!-- Service Endpoints -->
          <endpoint address="" binding="wsHttpBinding" contract="IbzStar.WebServices.IUserServices">
            <!-- Upon deployment, the following identity element should be removed or replaced to reflect
                 the identity under which the deployed service runs. If removed, WCF will infer an
                 appropriate identity automatically. -->
            <identity>
              <dns value="localhost"/>
            </identity>
          </endpoint>
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
        </service>
      </services>
      <behaviors>
        <serviceBehaviors>
          <behavior name="ServiceBehavior">
            <!-- To avoid disclosing metadata information, set the value below to false and remove the
                 metadata endpoint above before deployment -->
            <serviceMetadata httpGetEnabled="true"/>
            <!-- To receive exception details in faults for debugging purposes, set the value below to
                 true. Set to false before deployment to avoid disclosing exception information -->
            <serviceDebug includeExceptionDetailInFaults="false"/>
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>
    </configuration>
    Please, someone help me out before my laptop gets launched into orbit! w://

    Read the article

  • Many-to-many delete in NHibernate: two parents with a common association

    - by Joshua Grippo
    I have 3 top level entities in my app: Circuit, Issue, Document Circuits can contain Documents and Issues can contain Documents. When I delete a Circuit, I want it to delete the documents associated with it, unless it is used by something else. I would like this same behavior with Issues. I have it working when the only association is in the same table in the db, but if it is in another table, then it fails due to foreign key constraints. ex 1(This will cascade properly, because there is only a foreign constraint from Circuit to Document) Document1 exists. Circuit1 exists and contains a reference to Document1. If I delete Circuit1 then it deletes Document1 with it. ex 2(This will cascade properly, because there is only a foreign constraint from Circuit to Document.) Document1 exists. Circuit1 exists and contains a reference to Document1. Circuit2 exists and contains a reference to Document1. If I delete Circuit1 then it is deleted, but Document1 is not deleted because Circuit2 exists. If I then delete Circuit2, then Document1 is deleted. ex 3(This will throw an error, because when it deletes the Circuit it sees that there are no other circuits that reference the document so it tries to delete the document. However it should not, because there is an Issue that has a foreign constraint to the document.) Document 1 exists. Circuit1 exists and contains a reference to Document1. Issue1 exists and contains a reference to Document1. If I delete Circuit1, then it fails, because it tries to delete Document1, but Issues1 still has a reference. DB: This think won't let upload an image, so here is the ERD to the DB: http://lh3.ggpht.com/_jZWhe7NXay8/TROJhOd7qlI/AAAAAAAAAGU/rkni3oEANvc/CircuitIssues.gif Model: public class Circuit { public virtual int CircuitID { get; set; } public virtual string CJON { get; set; } public virtual IList<Document> Documents { get; set; } } public class Issue { public virtual int IssueID { get; set; } public virtual string Summary { get; set; } public virtual IList<Model.Document> Documents { get; set; } } public class Document { public virtual int DocumentID { get; set; } public virtual string Data { get; set; } } Mapping Files: <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Circuit" table="Circuit"> <id name="CircuitID"> <column name="CircuitID" not-null="true"/> <generator class="identity" /> </id> <property name="CJON" column="CJON" type="string" not-null="true"/> <bag name="Documents" table="CircuitDocument" cascade="save-update,delete-orphan"> <key column="CircuitID"/> <many-to-many class="Document"> <column name="DocumentID" not-null="true"/> </many-to-many> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Issue" table="Issue"> <id name="IssueID"> <column name="IssueID" not-null="true"/> <generator class="identity" /> </id> <property name="Summary" column="Summary" type="string" not-null="true"/> <bag name="Documents" table="IssueDocument" cascade="save-update,delete-orphan"> <key column="IssueID"/> <many-to-many class="Document"> <column name="DocumentID" not-null="true"/> </many-to-many> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8"?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" namespace="Model" assembly="Model"> <class name="Document" table="Document"> <id name="DocumentID"> <column name="DocumentID" 
not-null="true"/> <generator class="identity" /> </id> <property name="Data" column="Data" type="string" not-null="true"/> </class> </hibernate-mapping> Code: using (ISession session = sessionFactory.OpenSession()) { var doc = new Model.Document() { Data = "Doc" }; var circuit = new Model.Circuit() { CJON = "circ" }; circuit.Documents = new List<Model.Document>(new Model.Document[] { doc }); var issue = new Model.Issue() { Summary = "iss" }; issue.Documents = new List<Model.Document>(new Model.Document[] { doc }); session.Save(circuit); session.Save(issue); session.Flush(); } using (ISession session = sessionFactory.OpenSession()) { foreach (var item in session.CreateCriteria<Model.Circuit>().List<Model.Circuit>()) { session.Delete(item); } //this flush fails, because there is a reference to a child document from issue session.Flush(); foreach (var item in session.CreateCriteria<Model.Issue>().List<Model.Issue>()) { session.Delete(item); } session.Flush(); }

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the desired changes and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again (assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (As far as I can tell, BULK INSERT can only pull data from a file.) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kind of slow. SSIS/BULK INSERT dumps the data into the table without regard to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO ... SELECT that I tried in my first attempt.) Edit 2: The schema changes include (but are not limited to) the following:
    1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for both new tables. Once the migration is over, one of the tables will have a one-to-many relationship to the other.
    2. Moving columns from one table to another.
    3. Deleting some cross-reference tables that only cross-referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables.
    4. Some new columns will be created with default values.
    5. Some tables aren't changing at all, but I have to copy them over due to the "put it all in a new DB" request.
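    For a purely server-side copy that keeps identity values, a plain INSERT ... SELECT with IDENTITY_INSERT works, and with the right preconditions (TABLOCK hint, SIMPLE or BULK_LOGGED recovery, empty heap target on SQL Server 2008+) it can be minimally logged much like a bulk load. A sketch with placeholder names:

    SET IDENTITY_INSERT dbo.TargetTable ON;

    -- TABLOCK is what makes minimal logging possible here.
    INSERT INTO dbo.TargetTable WITH (TABLOCK) (Id, Col1, NewCol)
    SELECT Id, Col1, 0          -- 0 = default value for the new column
    FROM dbo.SourceTable;

    SET IDENTITY_INSERT dbo.TargetTable OFF;

    Building the clustered index after the load, as the question already plans, keeps the insert itself fast.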

    Read the article

  • NHibernate does not update entity when repository is passed by constructor

    - by Alex
    Hi everybody, I am developing with NHibernate for the first time in conjunction with ASP.NET MVC and StructureMap. The CodeCampServer serves as a great example for me. I really like the different concepts which were implemented there and I can learn a lot from it. In my controllers I use Constructur Dependency Injection to get an instance of the specific repository needed. My problem is: If I change an attribute of the customer the customer's data is not updated in the database, although Commit() is called on the transaction object (by a HttpModule). public class AccountsController : Controller { private readonly ICustomerRepository repository; public AccountsController(ICustomerRepository repository) { this.repository = repository; } public ActionResult Save(Customer customer) { Customer customerToUpdate = repository .GetById(customer.Id); customerToUpdate.GivenName = "test"; //<-- customer does not get updated in database return View(); } } On the other hand this is working: public class AccountsController : Controller { [LoadCurrentCustomer] public ActionResult Save(Customer customer) { customer.GivenName = "test"; //<-- Customer gets updated return View(); } } public class LoadCurrentCustomer : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext filterContext) { const string parameterName = "Customer"; if (filterContext.ActionParameters.ContainsKey(parameterName)) { if (filterContext.HttpContext.User.Identity.IsAuthenticated) { Customer CurrentCustomer = DependencyResolverFactory .GetDefault() .Resolve<IUserSession>() .GetCurrentUser(); filterContext.ActionParameters[parameterName] = CurrentCustomer; } } base.OnActionExecuting(filterContext); } } public class UserSession : IUserSession { private readonly ICustomerRepository repository; public UserSession(ICustomerRepository customerRepository) { repository = customerRepository; } public Customer GetCurrentUser() { var identity = HttpContext.Current.User.Identity; if (!identity.IsAuthenticated) { return null; } Customer customer = repository.GetByEmailAddress(identity.Name); return customer; } } I also tried to call update on the repository like the following code shows. But this leads to an NHibernateException which says "Illegal attempt to associate a collection with two open sessions". Actually there is only one. public ActionResult Save(Customer customer) { Customer customerToUpdate = repository .GetById(customer.Id); customer.GivenName = "test"; repository.Update(customerToUpdate); return View(); } Does somebody have an idea why the customer is not updated in the first example but is updated in the second example? Why does NHibernate say that there are two open sessions?

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn’t smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code. When I created a database I’ve been using to experiment around with various different OR/Ms recently I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there’s a good reason why it fails, can you spot it below? (read on :-}) Here is the original database layout: There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model the *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix): The relationship looks perfectly fine, both in the designer as well as in the XML document: <Table Name="dbo.wws_Item_Categories" Member="ItemCategories"> <Type Name="ItemCategory"> <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" /> <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" /> </Type> </Table> <Table Name="dbo.wws_Categories" Member="Categories"> <Type Name="Category"> <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" /> <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" /> <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" /> <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" /> <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" /> <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" /> </Type> </Table> However when looking at the code generated these navigation properties (also on Item) are completely missing: [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class ItemCategory : Westwind.BusinessFramework.EntityBase { private System.Guid _ItemId; private System.Guid _CategoryId; public ItemCategory() { } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public System.Guid ItemId { get { return this._ItemId; } set { if ((this._ItemId != value)) { 
this._ItemId = value; } } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)] public System.Guid CategoryId { get { return this._CategoryId; } set { if ((this._CategoryId != value)) { this._CategoryId = value; } } } } Notice that the Item and Category association properties which should be EntityRef properties are completely missing. They’re there in the model, but the generated code – not so much. So what’s the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain’t it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a Pk to the database and then importing results in this model layout: which properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in a table in order to recognize a many to many relation. EF actually handles the M->M relation directly without the intermediate link entity unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId in the database which shows up in LINQ to SQL like this: This also work in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach as it also guarantees uniqueness of the keys and so helps in database integrity. It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be when they were not. It’s actually a well documented feature of L2S that each entity in the model requires a Pk but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issue with L2S of course – you have to play by its rules and once you hit one of those rules there’s no way around them – you’re stuck with what it requires which in this case meant changing the database.© Rick Strahl, West Wind Technologies, 2005-2010Posted in ADO.NET  LINQ  
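    The composite-key variant mentioned in the update looks like this in T-SQL (table names from the post):

    -- Making both foreign keys the primary key of the link table guarantees
    -- uniqueness and satisfies LINQ to SQL's requirement that every tracked
    -- entity have a primary key.
    ALTER TABLE dbo.wws_Item_Categories
      ADD CONSTRAINT PK_wws_Item_Categories
      PRIMARY KEY CLUSTERED (ItemId, CategoryId);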

    Read the article

  • SQL University: Database testing and refactoring tools and examples

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. This week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover:
    SQLU part 1 - What and why of database testing
    SQLU part 2 - What and why of database refactoring
    SQLU part 3 - Database testing and refactoring tools and examples
    This is the third and last part of the series, and in it we'll take a look at tools we can test and refactor with, plus an example of both. Tools of the trade: first, a few thoughts about how to go about testing a database. I'm firmly against any testing tools that go into the database itself or need an extra database. Unit tests for the database and for the applications using the database should all be in one place, using the same technology. By using database-specific frameworks we fragment our tests into many places and increase test-system complexity. Let's take a look at some testing tools.
    1. NUnit, xUnit, MbUnit - All three are .NET testing frameworks meant to unit test .NET applications, but we can test databases with them just fine. I use NUnit because I've always used it for work and personal projects. One day this might change, so the thing to remember is to be flexible if something better comes along. All three are quite similar and you should be able to switch between them without much problem.
    2. TSQLUnit - As much as this framework is helpful for the non-C#-savvy folks, I don't like it, for the reason I stated above: it lives in the database and thus fragments the testing infrastructure. It also appears that it's not being actively developed anymore.
    3. DbFit - I haven't had the pleasure of trying this tool just yet, but it's on my to-do list. From what I've read and heard, Gojko Adzic (@gojkoadzic on Twitter) has done a remarkable job with it.
    4. Redgate SQL Refactor and Apex SQL Refactor - Neither of these refactoring tools is free; however, if you have hardcore refactoring planned, they are worth looking into. I've only used Red Gate's Refactor and was quite impressed with it.
    5. Reverting the database state - I've talked before about ways to revert a database to its pre-test state after unit testing. This still holds and I haven't changed my mind. Also make sure to read the comments, as they are quite informative. I especially like the idea of setting up and tearing down the schema for each test group with NHibernate.
    Testing and refactoring example: we'll take a look at a simple schema and data test for a view, and at refactoring the SELECT * in that view. We'll use a single table PhoneNumbers with ID and Phone columns. Then we'll refactor the Phone column into 3 columns: Prefix, Number and Suffix. Lastly we'll remove the original Phone column. Then we'll check how the view behaves with tests in NUnit. The comments in the code explain the problem, so be sure to read them. I'm assuming you know NUnit and C#.
T-SQL Code C# test code USE tempdbGOCREATE TABLE PhoneNumbers( ID INT IDENTITY(1,1), Phone VARCHAR(20))GOINSERT INTO PhoneNumbers(Phone)SELECT '111 222333 444' UNION ALLSELECT '555 666777 888'GO-- notice we don't have WITH SCHEMABINDINGCREATE VIEW vPhoneNumbersAS SELECT * FROM PhoneNumbersGO-- Let's take a look at what the view returns -- If we add a new columns and rows both tests will failSELECT *FROM vPhoneNumbers GO -- DoesViewReturnCorrectColumns test will SUCCEED -- DoesViewReturnCorrectData test will SUCCEED -- refactor to split Phone column into 3 partsALTER TABLE PhoneNumbers ADD Prefix VARCHAR(3)ALTER TABLE PhoneNumbers ADD Number VARCHAR(6)ALTER TABLE PhoneNumbers ADD Suffix VARCHAR(3)GO-- update the new columnsUPDATE PhoneNumbers SET Prefix = LEFT(Phone, 3), Number = SUBSTRING(Phone, 5, 6), Suffix = RIGHT(Phone, 3)GO-- remove the old columnALTER TABLE PhoneNumbers DROP COLUMN PhoneGO-- This returns unexpected results!-- it returns 2 columns ID and Phone even though -- we don't have a Phone column anymore.-- Notice that the data is from the Prefix column-- This is a danger of SELECT *SELECT *FROM vPhoneNumbers -- DoesViewReturnCorrectColumns test will SUCCEED -- DoesViewReturnCorrectData test will FAIL -- for a fix we have to call sp_refreshview -- to refresh the view definitionEXEC sp_refreshview 'vPhoneNumbers'-- after the refresh the view returns 4 columns-- this breaks the input/output behavior of the database-- which refactoring MUST NOT doSELECT *FROM vPhoneNumbers -- DoesViewReturnCorrectColumns test will FAIL -- DoesViewReturnCorrectData test will FAIL -- to fix the input/output behavior change problem -- we have to concat the 3 columns into one named PhoneALTER VIEW vPhoneNumbersASSELECT ID, Prefix + ' ' + Number + ' ' + Suffix AS PhoneFROM PhoneNumbersGO-- now it works as expectedSELECT *FROM vPhoneNumbers -- DoesViewReturnCorrectColumns test will SUCCEED -- DoesViewReturnCorrectData test will SUCCEED -- clean upDROP VIEW vPhoneNumbersDROP TABLE PhoneNumbers [Test]public void DoesViewReturnCoorectColumns(){ // conn is a valid SqlConnection to the server's tempdb // note the SET FMTONLY ON with which we return only schema and no data using (SqlCommand cmd = new SqlCommand("SET FMTONLY ON; SELECT * FROM vPhoneNumbers", conn)) { DataTable dt = new DataTable(); dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection)); // test returned schema: number of columns, column names and data types Assert.AreEqual(dt.Columns.Count, 2); Assert.AreEqual(dt.Columns[0].Caption, "ID"); Assert.AreEqual(dt.Columns[0].DataType, typeof(int)); Assert.AreEqual(dt.Columns[1].Caption, "Phone"); Assert.AreEqual(dt.Columns[1].DataType, typeof(string)); }} [Test]public void DoesViewReturnCorrectData(){ // conn is a valid SqlConnection to the server's tempdb using (SqlCommand cmd = new SqlCommand("SELECT * FROM vPhoneNumbers", conn)) { DataTable dt = new DataTable(); dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection)); // test returned data: number of rows and their values Assert.AreEqual(dt.Rows.Count, 2); Assert.AreEqual(dt.Rows[0]["ID"], 1); Assert.AreEqual(dt.Rows[0]["Phone"], "111 222333 444"); Assert.AreEqual(dt.Rows[1]["ID"], 2); Assert.AreEqual(dt.Rows[1]["Phone"], "555 666777 888"); }}   With this simple example we’ve seen how a very simple schema can cause a lot of problems in the whole application/database system if it doesn’t have tests. Imagine what would happen if some outside process would depend on that view. 
It would get wrong data and propagate it silently throughout the system. And that is not good. So have tests at least for the crucial parts of your systems. And with that we conclude the Database Testing and Refactoring week at SQL University. Hope you learned something new and enjoy the learning weeks to come. Have fun!

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Complete Series of Database Coding Standards and Guidelines SQL SERVER Database Coding Standards and Guidelines – Introduction SQL SERVER – Database Coding Standards and Guidelines – Part 1 SQL SERVER – Database Coding Standards and Guidelines – Part 2 SQL SERVER Database Coding Standards and Guidelines Complete List Download Explanation and Example – SELF JOIN When all of the data you require is contained within a single table, but data needed to extract is related to each other in the table itself. Examples of this type of data relate to Employee information, where the table may have both an Employee’s ID number for each record and also a field that displays the ID number of an Employee’s supervisor or manager. To retrieve the data tables are required to relate/join to itself. Insert Multiple Records Using One Insert Statement – Use of UNION ALL This is very interesting question I have received from new developer. How can I insert multiple values in table using only one insert? Now this is interesting question. When there are multiple records are to be inserted in the table following is the common way using T-SQL. Function to Display Current Week Date and Day – Weekly Calendar Straight blog post with script to find current week date and day based on the parameters passed in the function.  2008 In my beginning years, I have almost same confusion as many of the developer had in their earlier years. Here are two of the interesting question which I have attempted to answer in my early year. Even if you are experienced developer may be you will still like to read following two questions: Order Of Column In Index Order of Conditions in WHERE Clauses Example of DISTINCT in Aggregate Functions Have you ever used DISTINCT with the Aggregation Function? Here is a simple example about how users can do it. Create a Comma Delimited List Using SELECT Clause From Table Column Straight to script example where I explained how to do something easy and quickly. Compound Assignment Operators SQL SERVER 2008 has introduced new concept of Compound Assignment Operators. Compound Assignment Operators are available in many other programming languages for quite some time. Compound Assignment Operators is operator where variables are operated upon and assigned on the same line. PIVOT and UNPIVOT Table Examples Here is a very interesting question – the answer to the question can be YES or NO both. “If we PIVOT any table and UNPIVOT that table do we get our original table?” Read the blog post to get the explanation of the question above. 2009 What is Interim Table – Simple Definition of Interim Table The interim table is a table that is generated by joining two tables and not the final result table. In other words, when two tables are joined they create an interim table as resultset but the resultset is not final yet. It may be possible that more tables are about to join on the interim table, and more operations are still to be applied on that table (e.g. Order By, Having etc). Besides, it may be possible that there is no interim table; sometimes final table is what is generated when the query is run. 
2010 Stored Procedure and Transactions If Stored Procedure is transactional then, it should roll back complete transactions when it encounters any errors. Well, that does not happen in this case, which proves that Stored Procedure does not only provide just the transactional feature to a batch of T-SQL. Generate Database Script for SQL Azure When talking about SQL Azure the most common complaint I hear is that the script generated from stand-along SQL Server database is not compatible with SQL Azure. This was true for some time for sure but not any more. If you have SQL Server 2008 R2 installed you can follow the guideline below to generate a script which is compatible with SQL Azure. Convert IN to EXISTS – Performance Talk It is NOT necessary that every time when IN is replaced by EXISTS it gives better performance. However, in our case listed above it does for sure give better performance. You can read about this subject in the associated blog post. Subquery or Join – Various Options – SQL Server Engine Knows the Best Every single time whenever there is a performance tuning exercise, I hear the conversation from developer where some prefer subquery and some prefer join. In this two part blog post, I explain the same in the detail with examples. Part 1 | Part 2 Merge Operations – Insert, Update, Delete in Single Execution MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just update it, and similarly, when the data is unmatched, it is inserted. 2011 Puzzle – Statistics are not updated but are Created Once Here is the quick scenario about my setup. Create Table Insert 1000 Records Check the Statistics Now insert 10 times more 10,000 indexes Check the Statistics – it will be NOT updated – WHY? Question to You – When to use Function and When to use Stored Procedure Personally, I believe that they are both different things - they cannot be compared. I can say, it will be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably at many times and in real life (i.e., production environment). I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. 2012 In year 2012 I had two interesting series ran on the blog. If there is no fun in learning, the learning becomes a burden. For the same reason, I had decided to build a three part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Guess the Next Value – Puzzle 4 Simple Example to Configure Resource Governor – Introduction to Resource Governor Resource Governor is a feature which can manage SQL Server Workload and System Resource Consumption. We can limit the amount of CPU and memory consumption by limiting /governing /throttling on the SQL Server. If there are different workloads running on SQL Server and each of the workload needs different resources or when workloads are competing for resources with each other and affecting the performance of the whole server resource governor is a very important task. 
2011

Puzzle – Statistics are not Updated but are Created Once
Here is the quick scenario from my setup: create a table, insert 1,000 records, and check the statistics. Now insert ten times more records (10,000) and check the statistics again: they are NOT updated. Why?

Question to You – When to use Function and When to use Stored Procedure
Personally, I believe they are different things and cannot be directly compared; it would be like comparing apples and oranges. Each has its own unique use. However, they can often be used interchangeably, and in real life (i.e., production environments) I have personally seen both used in each other's place many times. This is the precise reason for asking this question.

2012

In 2012 I ran two interesting series on the blog. If there is no fun in learning, the learning becomes a burden. For that reason I decided to build a quiz around SEQUENCE, where the challenge was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz.
Guess the Next Value – Puzzle 1
Guess the Next Value – Puzzle 2
Guess the Next Value – Puzzle 3
Guess the Next Value – Puzzle 4

Simple Example to Configure Resource Governor – Introduction to Resource Governor
Resource Governor is a feature that manages SQL Server workload and system resource consumption. It can limit (govern or throttle) the amount of CPU and memory consumed on the SQL Server. When different workloads run on a SQL Server and each needs different resources, or when workloads compete for resources and affect the performance of the whole server, Resource Governor becomes very important.

Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video
Why SELECT * is a bad habit: it retrieves unnecessary columns and increases network traffic; when new columns are added, views need to be refreshed manually; it leads to sub-optimal execution plans; it uses the clustered index in most cases instead of a more optimal index; and it makes debugging difficult.

SQL SERVER – Load Generator – Free Tool From CodePlex
The best part of this SQL Server load generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well.

A Puzzle – Swap Value of Column Without Case Statement
Assume a table with a single column called Gender. The challenge is to write a single UPDATE statement that flips or swaps the value in the column: if the value in the Gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male' (one possible answer is sketched at the end of this post).

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
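P.S. For readers who want to check their answer to the swap puzzle above, here is one possible single-statement solution, sketched under the assumption that the column contains exactly the values 'male' and 'female'; the table name People is made up for the example:

-- 'malefemale' is 10 characters long; taking the rightmost
-- (10 - LEN(current value)) characters yields the opposite value:
-- 'male' (4) -> RIGHT(..., 6) = 'female'; 'female' (6) -> RIGHT(..., 4) = 'male'.
UPDATE People
SET Gender = RIGHT('malefemale', 10 - LEN(Gender));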

    Read the article

  • Control-Break Style ADF Table - Comparing Values with Previous Row

    - by Steven Davelaar
Sometimes you need to display data in an ADF Faces table in a control-break layout style, where rows should be "indented" when the break column has the same value as in the previous row. In the screen shot below, you see how the table breaks on both the RegionId column and the CountryId column. To implement this I didn't use fancy SQL statements. The table is based on a straightforward Locations ViewObject that is built on the Locations entity object and the Countries reference entity object; the join query was created automatically by adding the reference EO. To get the indentation in the ADF Faces table, we simply use two rendered properties on the RegionId and CountryId outputText items:

<af:column sortProperty="RegionId" sortable="false"
           headerText="#{bindings.LocationsView1.hints.RegionId.label}"
           id="c5">
  <af:outputText value="#{row.RegionId}" id="ot2"
                 rendered="#{!CompareWithPreviousRowBean['RegionId']}">
    <af:convertNumber groupingUsed="false"
                      pattern="#{bindings.LocationsView1.hints.RegionId.format}"/>
  </af:outputText>
</af:column>
<af:column sortProperty="CountryId" sortable="false"
           headerText="#{bindings.LocationsView1.hints.CountryId.label}"
           id="c1">
  <af:outputText value="#{row.CountryId}" id="ot5"
                 rendered="#{!CompareWithPreviousRowBean['CountryId']}"/>
</af:column>

The CompareWithPreviousRowBean managed bean is defined in request scope and is a generic bean that can be used for all the tables in your application that need this layout style. As you can see, the bean is a Map-style bean into which we pass the name of the attribute that should be compared with the previous row. The get method returns boolean true when the attribute has the same value as in the previous row, so the negated expression in the rendered property hides the repeated value. Here is the code of the get method:

public Object get(Object key)
{
  String attrName = (String) key;
  boolean isSame = false;
  // get the currently processed row, using row expression #{row}
  JUCtrlHierNodeBinding row = (JUCtrlHierNodeBinding) resolveExpression(getRowExpression());
  JUCtrlHierBinding tableBinding = row.getHierBinding();
  int rowRangeIndex = row.getViewObject().getRangeIndexOf(row.getRow());
  Object currentAttrValue = row.getRow().getAttribute(attrName);
  if (rowRangeIndex > 0)
  {
    // previous row is in the same range, compare directly
    Object previousAttrValue = tableBinding.getAttributeFromRow(rowRangeIndex - 1, attrName);
    isSame = currentAttrValue != null && currentAttrValue.equals(previousAttrValue);
  }
  else if (tableBinding.getRangeStart() > 0)
  {
    // previous row is in the previous range; we create a separate rowset iterator
    // so we can change the range start without messing up the table rendering,
    // which uses the default rowset iterator
    int absoluteIndexPreviousRow = tableBinding.getRangeStart() - 1;
    RowSetIterator rsi = null;
    try
    {
      rsi = tableBinding.getViewObject().getRowSet().createRowSetIterator(null);
      rsi.setRangeStart(absoluteIndexPreviousRow);
      Row previousRow = rsi.getRowAtRangeIndex(0);
      Object previousAttrValue = previousRow.getAttribute(attrName);
      isSame = currentAttrValue != null && currentAttrValue.equals(previousAttrValue);
    }
    finally
    {
      if (rsi != null) // guard against a failure before the iterator was created
      {
        rsi.closeRowSetIterator();
      }
    }
  }
  return isSame;
}

The row expression defaults to #{row}, but this can be changed through the rowExpression managed property of the bean. You can download the sample application here.
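For reference, the automatically created join query behind such a Locations ViewObject might resemble the following sketch against the standard HR schema; the exact SQL that ADF Business Components generates will differ, so treat the statement below as an illustrative assumption only:

SELECT loc.LOCATION_ID,
       loc.CITY,
       ctr.COUNTRY_ID,
       ctr.REGION_ID
  FROM LOCATIONS loc
  JOIN COUNTRIES ctr
    ON loc.COUNTRY_ID = ctr.COUNTRY_ID
 ORDER BY ctr.REGION_ID, ctr.COUNTRY_ID; -- break columns sorted so equal values sit in adjacent rows

The ORDER BY on the break columns matters: the comparison bean only hides a value when it equals the one in the row directly above, so the rows must arrive sorted on those columns.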

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

MySQL before the InnoDB Plugin

Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

CREATE TABLE t(a INT);
INSERT INTO t VALUES (1),(2),(3);
CREATE INDEX a ON t(a);
DROP TABLE t;

The CREATE INDEX statement would be executed roughly as follows:

CREATE TABLE temp(a INT, INDEX(a));
INSERT INTO temp SELECT * FROM t;
RENAME TABLE t TO temp2;
RENAME TABLE temp TO t;
DROP TABLE temp2;

You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

Fast Index Creation in the InnoDB Plugin of MySQL 5.1

MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. The new interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

The 5.1 ALTER TABLE interface was not perfect, though. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

The ALTER TABLE interface in MySQL 5.6

The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE; LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY, which will require at least LOCK=SHARED. (A short example of this syntax appears at the end of this post.) From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

handler::check_if_supported_inplace_alter
Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

handler::prepare_inplace_alter_table
InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
handler::inplace_alter_table
In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the 'main' phase that can be executed online (with concurrent writes to the table).

handler::commit_inplace_alter_table
This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables; part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But there is a single commit phase inside the storage engine.

Online Secondary Index Creation

It may occur that an index needs to be created on a new column to speed up queries, but it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

1. Set up a stub for the index, for logging changes.
2. Scan the table for index records.
3. Sort the index records.
4. Bulk load the index records.
5. Apply the logged changes.
6. Replace the stub with the actual index.

Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

Native ALTER TABLE

Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table with the ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.

Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require the data dictionary to keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. To facilitate online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write the changes to the log.
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
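As promised above, here is a minimal sketch of how these options are requested in SQL; the table and index names are invented for the example:

-- Build a secondary index online: concurrent reads and writes stay allowed.
ALTER TABLE orders ADD INDEX idx_customer (customer_id),
  ALGORITHM=INPLACE, LOCK=NONE;

-- Force the old table-copying behaviour instead.
ALTER TABLE orders ADD INDEX idx_customer (customer_id),
  ALGORITHM=COPY, LOCK=SHARED;

If the storage engine cannot satisfy the requested ALGORITHM/LOCK combination, the ALTER TABLE statement fails with an error instead of silently falling back to a stronger lock.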

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
This blog explained some of the key features of JPA 2.1 earlier. Since then, Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) provides more details about this new feature.

Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema, depending upon the credentials and authorization of the user. This helps in prototyping your application, where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. It is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature allows your JPA domain object model to be generated directly in a database. The generated schema may need to be tuned for the actual production environment; this use case is supported by allowing the schema generation to occur into DDL scripts, which can then be further tuned by a DBA.

The following set of properties in persistence.xml, or specified during EntityManagerFactory creation, controls the behaviour of schema generation:

javax.persistence.schema-generation-action
Purpose: Controls the action to be taken by the persistence provider.
Values: "none", "create", "drop-and-create", "drop"

javax.persistence.schema-generation-target
Purpose: Controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both.
Values: "database", "scripts", "database-and-scripts"

javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target
Purpose: Controls target locations for the writing of scripts. Writers are pre-configured for the persistence provider. Need to be specified only if scripts are to be generated.
Values: java.io.Writer (e.g. MyWriter.class) or URL strings

javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source
Purpose: Specifies locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider.
Values: java.io.Reader (e.g. MyReader.class) or URL strings

javax.persistence.sql-load-script-source
Purpose: Specifies the location of a SQL bulk load script.
Values: java.io.Reader (e.g. MyReader.class) or URL string

javax.persistence.schema-generation-connection
Purpose: JDBC connection to be used for schema generation.

javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version
Purpose: Needed if scripts are to be generated and there is no connection to the target database.
Values: Those obtained from JDBC DatabaseMetaData.

javax.persistence.create-database-schemas
Purpose: Whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc.
Values: "true", "false"

Section 11.2 of the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, and @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved: for example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However, annotations may be used to override or customize the values. Consider the following entity class:

@Entity
public class Employee {
  @Id private int id;
  private String name;
  . . .
  @ManyToOne
  private Department dept;
}

It is generated in the database with the following attributes:

Maps to the EMPLOYEE table in the default schema.
The "id" field is mapped to the ID column as the primary key.
"name" is mapped to the NAME column with a default VARCHAR(255). The length of this field can be easily tuned using @Column.
@ManyToOne is mapped to the DEPT_ID foreign key column. It can be customized using @JoinColumn.

In addition to these properties, a couple of new annotations have been added to JPA 2.1:

@Index – An index for the primary key is generated by default in a database. This new annotation allows you to define additional indexes, over a single column or multiple columns, for better performance. It is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example:

@Table(indexes = {@Index(columnList="NAME"), @Index(columnList="DEPT_ID DESC")})
@Entity
public class Employee {
  . . .
}

The generated table will have a default index on the primary key. In addition, two new indexes are defined: one on the NAME column (ascending by default) and one on the foreign key that maps to the department, in descending order.

@ForeignKey – Used to define a foreign key constraint, or to otherwise override or disable the persistence provider's default foreign key definition. It can be specified as part of @JoinColumn(s), @MapKeyJoinColumn(s), and @PrimaryKeyJoinColumn(s). For example:

@Entity
public class Employee {
  @Id private int id;
  private String name;
  @ManyToOne
  @JoinColumn(foreignKey=@ForeignKey(foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))
  private Manager manager;
  . . .
}

In this entity, the employee's manager is mapped by the MANAGER_ID column in the MANAGER table. The value of foreignKeyDefinition would be a database-specific string.

A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. The JPA 2.1 Expert Group has released Early Draft 2 of the specification; Sections 9.4 and 11.2 provide all the details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.
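To visualize the end result, the DDL emitted for the Employee entity above might resemble the following sketch; the exact output is provider- and database-specific, so treat the names and types below as illustrative assumptions:

-- Defaults: entity name -> EMPLOYEE table, String -> VARCHAR(255).
CREATE TABLE EMPLOYEE (
  ID INT NOT NULL,
  NAME VARCHAR(255),
  DEPT_ID INT,  -- foreign key column from the @ManyToOne mapping
  PRIMARY KEY (ID),
  FOREIGN KEY (DEPT_ID) REFERENCES DEPARTMENT (ID)
);

-- Additional indexes requested through the @Index annotation.
CREATE INDEX IDX_EMPLOYEE_NAME ON EMPLOYEE (NAME);
CREATE INDEX IDX_EMPLOYEE_DEPT ON EMPLOYEE (DEPT_ID DESC);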

    Read the article

  • Framework 4 Features: Login Id Support

    - by Anthony Shorten
Given that Oracle Utilities Application Framework 4 is available as part of Mobile Work Force Management and, progressively, other products, I am preparing a number of short but sweet blog entries highlighting some of the new functionality that has been implemented. This is the first entry, and it covers a new security feature called Login Id.

In past releases of the Oracle Utilities Application Framework, the userid used for authentication and authorization was limited to eight (8) characters in length. This mirrored what the market required in the past, with LAN userids and even legacy userids being that length. The technology market has since progressed to longer userid lengths; it is very common to hear that email addresses are being used as credentials for production systems. To achieve this in past versions of the Oracle Utilities Application Framework, sites had to introduce a short userid (8 characters in length) as an alias in their preferred security store and then configure the J2EE Web Application Server to use the alias as credentials. This was sometimes a standard feature of the security store and/or the J2EE Web Application Server, if you were lucky; if not, some Java code had to be written to implement the solution.

In Oracle Utilities Application Framework 4 we introduced a new attribute on the user object called Login Id. The Login Id can be up to 256 characters in length and is an alternative to the existing userid stored on the user object. This means the Oracle Utilities Application Framework can support both long and short userids. For backward compatibility, we use the Login Id for authentication but the short userid for authorization and auditing; the user object within the Oracle Utilities Application Framework holds the translation. Backward compatibility is always a consideration in any of our designs for new or changed functionality, and you will see reference to this fact in the blog entries I will be composing over the next few months.

We have also thought about flexibility in implementing this feature. The Login Id can be the same value as the Userid (the default, for backward compatibility) or can be different. Both the Login Id and the Userid have to be unique; this avoids sharing of credentials and is also backward compatible. You can enter the Login Id manually or provision it from Oracle Identity Manager (or another tool). If you use the Login Id only, we will not autogenerate a short userid, as the rules for this can vary from site to site. You have a number of options there: most identity provisioning tools can generate a short userid at user creation time, and this can be used. If you do not use provisioning tools, you can write a class extension using the SDK to autogenerate the userid based upon your site's preference. When we designed the feature there were many styles of generating userids (random, initial and surname, numbers, etc.). We could not really see a clear winner in that respect, so we simply allowed an extension to be inserted if necessary. Most customers indicated to us that identity provisioning was the preferred way, which is why we released an Oracle Identity Manager integration with the framework.

The Login Id is case sensitive, which was not supported for the userid. The introduction of the Login Id allows the product to offer flexible options when configuring security whilst maintaining backward compatibility.

    Read the article

  • October 2013 Fusion Middleware (FMW) Proactive Patches released

    - by Irina
We are glad to announce that the following Fusion Middleware (FMW) Proactive patches were released on October 15, 2013.

Bundle Patches

Bundle patches are collections of controlled, well-tested critical bug fixes for a specific product, which may include security content and occasionally minor enhancements. They are cumulative in nature, meaning the latest bundle patch in a particular series includes the contents of the previous bundle patches released. A suite bundle patch is an aggregation of multiple product bundle patches that are part of a product suite.

Oracle Identity Management Suite Bundle Patch 11.1.1.5.5, consisting of:
- Oracle Identity Manager (OIM) 11.1.1.5.9 bundle patch
- Oracle Access Manager (OAM) 11.1.1.5.6 bundle patch
- Oracle Adaptive Access Manager (OAAM) 11.1.1.5.2 bundle patch
- Oracle Entitlement Server (OES) 11.1.1.5.4 bundle patch

Oracle Identity Management Suite Bundle Patch 11.1.2.0.4, consisting of:
- Oracle Access Manager (OAM) 11.1.2.0.4 bundle patch
- Oracle Adaptive Access Manager (OAAM) 11.1.2.0.2 bundle patch
- Oracle Entitlement Server (OES) 11.1.2.0.2 bundle patch

Oracle Identity Analytics (OIA) 11.1.1.5.6 bundle patch
Oracle GlassFish Server (OGFS) 2.1.1.22, 3.0.1.8 and 3.1.2.7 bundle patches
Oracle iPlanet Web Server (OiWS) 7.0.18 bundle patch
Oracle SOA Suite (SOA) 11.1.1.7.1 bundle patch
Oracle WebCenter Portal (WCP) 11.1.1.8.1 bundle patch
Sun Role Manager (SRM) 4.1.7 and 5.0.3.2 bundle patches

Patch Set Updates (PSU)

Patch Set Updates (PSUs) are collections of well-controlled, well-tested critical bug fixes for a specific product that have been proven in customer environments. PSUs may include security content, but no enhancements are included. They are cumulative in nature, meaning the latest PSU in a particular series includes the contents of the previous PSUs released.

Oracle Exalogic 2.0.3.0.4 Physical Linux x86-64 and 2.0.4.0.4 Physical Solaris x86-64 PSUs
Oracle WebLogic Server 10.3.6.0.6 and 12.1.1.0.6 PSUs

Critical Patch Update (CPU)

The Critical Patch Update program is Oracle's quarterly release of security fixes. The following additional patches were released as part of Oracle's Critical Patch Update program:

Oracle JDeveloper 11.1.2.3.0, 11.1.2.4.0 and 12.1.2.0.0
Oracle Outside In Technology 8.4.0 and 8.4.1
Oracle Portal 11.1.1.6.0
Oracle Security Service 11.1.1.6.0, 11.1.1.7.0 and 12.1.2.0.0
Oracle WebCache 11.1.1.6.0 and 11.1.1.7.0
Oracle WebCenter Content 10.1.3.5.1, 11.1.1.6.0, 11.1.1.7.0 and 11.1.1.8.0
Oracle WebServices 10.1.3.5.0 and 11.1.1.6.0

For more information:
Master Notes on Fusion Middleware Proactive Patching
PSU and CPU October 2013 Availability Document
Critical Patch Update Advisory – October 2013

    Read the article

< Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >