Search Results

Search found 12215 results on 489 pages for 'identity column'.

  • How do I drop a foreign key in mssql?

    - by mmattax
    I have created a foreign key (in mssql / tsql) by: alter table company add CountryID varchar(3); alter table company add constraint Company_CountryID_FK foreign key(CountryID) references Country; I then run this query: alter table company drop column CountryID; and I get this error: Msg 5074, Level 16, State 4, Line 2 The object 'Company_CountryID_FK' is dependent on column 'CountryID'. Msg 4922, Level 16, State 9, Line 2 ALTER TABLE DROP COLUMN CountryID failed because one or more objects access this column. I have tried this, yet it does not seem to work: alter table company drop foreign key Company_CountryID_FK; alter table company drop column CountryID; What do I need to do to drop the CountryID column? Thanks.
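    For reference, in T-SQL a foreign key is removed with DROP CONSTRAINT rather than DROP FOREIGN KEY; a minimal sketch using the names from the question:

        -- Drop the constraint by name first, then the column it depends on
        ALTER TABLE company DROP CONSTRAINT Company_CountryID_FK;
        ALTER TABLE company DROP COLUMN CountryID;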

  • Hibernate not Loading a class

    - by Noor
    Hi, I have an Auction class that contains an Item and a Users object, but when I fetch an Auction, the Item and Users are not being loaded. Auction class mapping file:

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
                "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
                "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <!-- Generated Dec 28, 2010 9:14:12 PM by Hibernate Tools 3.4.0.Beta1 -->
        <hibernate-mapping>
            <class name="com.BiddingSystem.Models.Auction" table="AUCTION">
                <id name="AuctionId" type="long">
                    <column name="AUCTIONID" />
                    <generator class="native" />
                </id>
                <property name="StartTime" type="java.util.Date">
                    <column name="STARTTIME" />
                </property>
                <property name="EndTime" type="java.util.Date">
                    <column name="ENDTIME" />
                </property>
                <property name="StartingBid" type="long">
                    <column name="STARTINGBID" />
                </property>
                <property name="MinIncrement" type="long">
                    <column name="MININCREMENT" />
                </property>
                <many-to-one name="CurrentItem" class="com.BiddingSystem.Models.Item" fetch="join" cascade="all">
                    <column name="ItemId" />
                </many-to-one>
                <property name="AuctionStatus" type="java.lang.String">
                    <column name="AUCTIONSTATUS" />
                </property>
                <property name="BestBid" type="long">
                    <column name="BESTBID" />
                </property>
                <many-to-one name="User" class="com.BiddingSystem.Models.Users" fetch="join">
                    <column name="UserId" />
                </many-to-one>
            </class>
        </hibernate-mapping>

    When I am doing this:

        Query query = session.createQuery("from Auction where UserId=" + UserId);
        List<Auction> AllAuctions = new LinkedList<Auction>(query.list());

    the Users and Item are null.
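    A hedged sketch of the query side: HQL filters on mapped property names rather than column names, so the condition should navigate the User association (the UserId property name on Users is an assumption not shown in the mapping), and explicit fetch joins force both associations to load:

        // Navigate the association and fetch it eagerly (property names assumed)
        Query query = session.createQuery(
            "select a from Auction a " +
            "left join fetch a.CurrentItem " +
            "left join fetch a.User u " +
            "where u.UserId = :userId");
        query.setParameter("userId", UserId);
        List<Auction> allAuctions = new LinkedList<Auction>(query.list());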

  • Removing certain characters in all rows that match a regex?

    - by user001
    I'd like to change {foo, {bar}, foobar} to {foo, bar, foobar} in all rows that match '{.*{'. I.e. remove all curly braces { and } except the outermost pair. So doing

        mysql -h $H -u $U -p$P $DB -B -e "SELECT id FROM t WHERE col REGEXP '{.*{'" > bad.txt

    selects all the rows that will need this substitution. How do I make this substitution very quickly? EDIT: Could I do it by

        update table set column = REPLACE(column,'{','');

    and then restore the outermost pair:

        update table set column = REPLACE(column,'^','{');
        update table set column = REPLACE(column,'$','}');
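    For reference, a hedged single-statement sketch (MySQL, using the t/col names from the SELECT above): REPLACE matches literal strings, so '^' and '$' would be treated as ordinary characters rather than anchors. Instead, strip every brace and wrap the value in one outer pair again, restricted to the matching rows:

        UPDATE t
        SET col = CONCAT('{', REPLACE(REPLACE(col, '{', ''), '}', ''), '}')
        WHERE col REGEXP '{.*{';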

  • Splitting values into groups evenly

    - by Paul Knopf
    Let me try to explain the situation the best I can. Let's say I have 3 values: 1, 2, 3. I tell an algorithm to split these values into x columns. Let's say x = 2 for clarification. The algorithm determines that the group of values is best put into two columns the following way:

        1st column    2nd column
        1             3
        2

    Each column has an even value (comparing totals, not literal item counts). Now let's say I have the following values: 7, 8, 3, 1, 4. I tell the algorithm that I want the values split into 3 columns. The algorithm now tells me that the following is the best fit:

        1st column    2nd column    3rd column
        8             7             3
                      1             4

    Notice how the columns aren't quite even, but it is as close as it can get. A little over and a little under is considered OK, as long as the list is AS CLOSE TO EVEN AS IT CAN BE. Anybody got any suggestions? Know any good methods of doing this?
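    One common approach, sketched below as a greedy heuristic (the function and variable names are illustrative): sort the values in descending order, then drop each one into the column with the smallest running total. This is not guaranteed optimal, but it keeps the column sums close to even.

        // Greedy "largest first" partitioning sketch
        function splitEvenly(values, x) {
          var columns = [];
          for (var i = 0; i < x; i++) {
            columns.push({ sum: 0, items: [] });
          }
          // Work from the largest value down so big items are placed first
          values.slice().sort(function (a, b) { return b - a; }).forEach(function (v) {
            var target = columns[0];
            for (var j = 1; j < x; j++) {
              if (columns[j].sum < target.sum) { target = columns[j]; }
            }
            target.items.push(v);
            target.sum += v;
          });
          return columns;
        }
        // splitEvenly([7, 8, 3, 1, 4], 3) yields column totals 8, 8 and 7
        // ([8], [7, 1], [4, 3]), matching the example above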

  • Duplication in mysql/php

    - by user1334095
    I have a problem in the code. I have two tables, 'sms' and 'bd_paid_bribe'. The sms table has a column 'Message' and the bd_paid_bribe table has a column 'c_addi_info'. When I execute the code the first time, all the values of the Message column are inserted into the c_addi_info column. When I enter a record for the second time, instead of inserting only the new record, all the records of the Message column are inserted into bd_paid_bribe again. Can you modify the code and provide a solution to avoid duplication and insert only the newly added record?

        <?php
        $con = mysql_connect('localhost','root','');
        if (!$con) {
            die("couldn't connect");
        }
        mysql_select_db("ipab2", $con);
        $rs2 = mysql_query("select max(sms_index) from tab3");
        do {
            $rs  = mysql_query("insert into tab3(sms_index) select max(sms_index) from sms");
            $rs3 = mysql_query("SELECT max(sms_index) FROM sms");
            $rs1 = mysql_query("insert into bd_paid_bribe(c_addi_info) select Message from sms");
        } while ($rs2 > $rs3);
        ?>
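    One hedged rework (the high-water-mark approach is an assumption about the intent; the table and column names come from the question): record the largest sms_index already copied, and insert only rows beyond it.

        <?php
        // Copy only rows newer than the last sms_index we already processed
        $con = mysql_connect('localhost', 'root', '');
        if (!$con) {
            die("couldn't connect");
        }
        mysql_select_db("ipab2", $con);

        // High-water mark: the largest index recorded so far (0 if none)
        $res  = mysql_query("SELECT COALESCE(MAX(sms_index), 0) FROM tab3");
        $row  = mysql_fetch_row($res);
        $last = (int) $row[0];

        // Insert only the new messages, then record the new high-water mark
        mysql_query("INSERT INTO bd_paid_bribe (c_addi_info)
                     SELECT Message FROM sms WHERE sms_index > $last");
        mysql_query("INSERT INTO tab3 (sms_index)
                     SELECT MAX(sms_index) FROM sms WHERE sms_index > $last");
        mysql_close($con);
        ?>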

  • php/mysql question, retrieving information

    - by chance
    Hey SO, I have a question. I'm pretty new at PHP and MySQL but was curious: is it possible to display only the most recently added record of a table from a database? I know it is possible to display a list of all the information, or, if you have just one column of information in your table, to display that. Here is my code for my basic script (it just loads one column of information); I would like it to only print out the most recent record I added to the table:

        $con = mysql_connect("localhost", "username", "password");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("db_name", $con);
        $result = mysql_query("SELECT column FROM table");
        while ($row = mysql_fetch_array($result)) {
            echo $row['column'];
        }
        mysql_close($con);

    Keep in mind, I'm not looking for someone to shoot me my code with it completed, though it would be handy; just giving me the options of what I have to do, or what the technique is called, would be plenty for my research. Thanks!
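    One hedged option: if the table has an auto-increment key (the id column name here is an assumption), order by it descending and limit to one row to fetch the most recently added record:

        $result = mysql_query("SELECT column FROM table ORDER BY id DESC LIMIT 1");
        if ($row = mysql_fetch_array($result)) {
            echo $row['column'];
        }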

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating to ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers.

    Part 1: Exposing the on-premise service

    In theory, this is ultra-straightforward. In practice, on a dev laptop it is - but in a corporate network with firewalls and proxies, it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1.

    Configuring Azure Service Bus

    Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you're in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service.

    Authenticating and authorizing to ACS

    When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions so a dedicated service account is better (Neil Mackenzie has a good post On Not Using owner with the Azure AppFabric Service Bus with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description. Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly.

    The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group. Edit the rule group and click Add to add this new rule. The values to use are:

        Issuer: Access Control Service
        Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier
        Input claim value: serviceProvider
        Output claim type: net.windows.servicebus.action
        Output claim value: Listen

    When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.:

        <azure namespace="sixeyed-ipasbr">
          <!-- ACS credentials for the listening service (Part1):-->
          <service identityName="serviceProvider"
                   symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
        </azure>

    Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:

        <behavior name="SharedSecret">
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="serviceProvider"
                            issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>

    and your service namespace in the Azure endpoint:

        <!-- Azure Service Bus endpoints -->
        <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"
                  binding="netTcpRelayBinding"
                  contract="Sixeyed.Ipasbr.Services.IFormatService"
                  behaviorConfiguration="SharedSecret">
        </endpoint>

    The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure.

    Testing the service locally

    As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:

        <!-- local REST endpoint for internal use -->
        <endpoint address="rest"
                  binding="webHttpBinding"
                  behaviorConfiguration="RESTBehavior"
                  contract="Sixeyed.Ipasbr.Services.IFormatService" />

    Build the service, then navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response. If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding.

    Setting up network access to Azure

    But, if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, then the relay will run through them. If not, the binding steps down to standard HTTP, and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay. If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error:

        System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443)

    - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark:

        System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol

    - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can't make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints).

        System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline.

    - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft's ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to:

        www.public-trust.com
        mscrl.microsoft.com

    We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy:

        netsh winhttp set proxy proxy-server="http://proxy.x.y.z"

    - and you might need to run this to clear out your credential cache:

        certutil -urlcache * delete

    If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.

  • Finding the groups of a user in WLS with OPSS

    - by user12587121
    How do you find the group memberships for a user from a web application running in WebLogic Server? This is useful for building up the profile of the user for security purposes, for example. WLS as a container offers an identity store service which applications can access to query and manage identities known to the container. This article for example shows how to recover the groups of the current user, but how can we find the same information for an arbitrary user? It is the Oracle Platform for Security Services (OPSS) that looks after the identity store in WLS, and so it is in the OPSS APIs that we can find the way to recover this information. This is explained in the following documents. Starting from the FMW 11.1.1.5 book list, with the Security Overview document we can see how WLS uses OPSS. Proceeding to the more detailed Application Security document, we find this list of useful references for security in FMW. We can follow on into the User/Role API javadoc. The Application Security document explains how to ensure that the identity store is configured appropriately to allow the OPSS APIs to work. We must verify that the jps-config.xml file where the application is deployed has its identity store configured -- look for the following elements in that file:

        <serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider"
                         class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">
            <description>LDAP-based IdentityStore Provider</description>
        </serviceProvider>

        <serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
            <property name="idstore.config.provider"
                      value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
            <property name="CONNECTION_POOL_CLASS"
                      value="oracle.security.idm.providers.stdldap.JNDIPool"/>
        </serviceInstance>

        <serviceInstanceRef ref="idstore.ldap"/>

    The document contains a code sample for using the identity store here. Once we have the identity store reference we can recover the user's group memberships using the RoleManager interface:

            RoleManager roleManager = idStore.getRoleManager();
            SearchResponse grantedRoles = null;
            try {
                System.out.println("Retrieving granted WLS roles for user " + userPrincipal.getName());
                grantedRoles = roleManager.getGrantedRoles(userPrincipal, false);
                while (grantedRoles.hasNext()) {
                    Identity id = grantedRoles.next();
                    System.out.println("  disp name=" + id.getDisplayName() +
                                       " Name=" + id.getName() +
                                       " Principal=" + id.getPrincipal() +
                                       " Unique Name=" + id.getUniqueName());
                    // Here, we must use WLSGroupImpl() to build the Principal otherwise
                    // OES does not recognize it.
                    retSubject.getPrincipals().add(new WLSGroupImpl(id.getPrincipal().getName()));
                }
            } catch (Exception ex) {
                System.out.println("Error getting roles for user " + ex.getMessage());
                ex.printStackTrace();
            }
        } catch (Exception ex) {
            System.out.println("OESGateway: Got exception instantiating idstore reference");
        }

    This small JDeveloper project has a simple servlet that executes a request for the user weblogic's roles on executing a GET on the default URL.
    The full code to recover a user's groups is in the getSubjectWithRoles() method in the project.

  • SQL SERVER – Difference Between DATETIME and DATETIME2

    - by pinaldave
    Yesterday I wrote a very quick blog post on SQL SERVER – Difference Between GETDATE and SYSDATETIME and I got a tremendous response to it. I suggest you read that blog post before continuing with this one. I had asked people to honestly take part and share their views about the above two system functions. There were a few emails as well as a few comments on the blog post asking how I came to know the difference between them. The answer is real world issues. I was called in for a performance tuning consultancy where I was asked a very strange question by one developer. Here is the situation he was facing. The system had a single table with two different datetime columns. One column was datelastmodified and the second column was datefirstmodified. One of the columns was DATETIME and the other was DATETIME2. The developer was populating both of them with SYSDATETIME. He was always thinking that the values inserted in the table would be the same. This table was only accessed by INSERT statements and there were no updates done over it in the application. One fine day he ran DISTINCT on both of these columns and was in for a surprise. He always thought that both columns would have the same data, but in fact they had very different data. He presented this scenario to me. I said this could not be possible, but when I looked at the resultset, I had to agree with him. Here is the simple script generated to demonstrate the problem he was facing. This is just a sample of the original table.

        DECLARE @Interval INT
        SET @Interval = 10000
        CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2)
        WHILE (@Interval > 0)
        BEGIN
            INSERT #TimeTable (FirstDate, LastDate)
            VALUES (SYSDATETIME(), SYSDATETIME())
            SET @Interval = @Interval - 1
        END
        GO
        SELECT COUNT(DISTINCT FirstDate) D_GETDATE,
               COUNT(DISTINCT LastDate) D_SYSGETDATE
        FROM #TimeTable
        GO
        SELECT DISTINCT a.FirstDate, b.LastDate
        FROM #TimeTable a
        INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate
        GO
        SELECT * FROM #TimeTable
        GO
        DROP TABLE #TimeTable
        GO

    Let us see the resultset. You can clearly see from the result that SYSDATETIME() does not populate the same value in both of the fields. In fact the value is either rounded down or rounded up in the field which is DATETIME. Even though we are populating the same value, the values are totally different in both columns, causing the SELF JOIN to fail and display different DISTINCT values. The best policy is: if you are using DATETIME use GETDATE(), and if you are using DATETIME2 use SYSDATETIME(), to populate them with the current date and time and accurately address the precision. As DATETIME2 was introduced in SQL Server 2008, the above script will only work with SQL Server 2008 and later versions. I hope I have answered a few of the questions asked yesterday. Reference: Pinal Dave (http://www.SQLAuthority.com)

  • Dynamic Unpivot: SSIS Nugget

    - by jamiet
    A question on the SSIS forum earlier today asked: I need to dynamically unpivot some set of columns in my source file. Every month there is one new column and its set of values. I want to unpivot it without editing my SSIS packages that are deployed. Let's be clear about what we mean by Unpivot. It is a normalisation technique that basically converts columns into rows. By way of example it converts something like this:

        AccountCode  Jan     Feb     Mar
        AC1          100.00  150.00  125.00
        AC2          45.00   75.50   90.00

    into something like this:

        AccountCode  Month  Amount
        AC1          Jan    100.00
        AC1          Feb    150.00
        AC1          Mar    125.00
        AC2          Jan    45.00
        AC2          Feb    75.50
        AC2          Mar    90.00

    The Unpivot transformation in SSIS is perfectly capable of carrying out the operation defined in this example, however in the case outlined in the aforementioned forum thread the problem was a little bit different. I interpreted it to mean that the number of columns could change, and in that scenario the Unpivot transformation (and indeed the SSIS dataflow in general) is rendered useless because it expects that the number of columns will not change from what is specified at design-time. There is a workaround however. Assuming all of the columns that CAN exist will appear at the end of the rows, we can (1) import all of the columns in the file as just a single column, (2) use a script component to loop over all the values in that "column" and (3) output each one as a row all of its own. Let's go over that in a bit more detail. I've prepared a data file that shows some data that we want to unpivot, which shows some customers and their mythical shopping lists (it has column names in the first row). We use a Flat File Connection Manager to specify the format of our data file to SSIS, and a Flat File Source Adapter to put it into the dataflow (no need for a screenshot of that one - it's very basic). Notice that the values that we want to unpivot all exist in a column called [Groceries]. Now onto the script component where the real work goes on, although the code is pretty simple. Here I show a screenshot of this executing along with some data viewers. As you can see we have successfully pulled out all of the values into rows of their own, thus accomplishing the Dynamic Unpivot that the forum poster was after. If you want to run the demo for yourself then I have uploaded the demo package and source file to my SkyDrive: http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100529/Dynamic%20Unpivot.zip Simply extract the two files into a folder, make sure the Connection Manager is pointing to the file, and execute! Hope this is useful. @Jamiet

  • Adding an expression based image in a client report definition file (RDLC)

    - by rajbk
    In previous posts, I showed you how to create a report using Visual Studio 2010 and how to add a hyperlink to the report. In this post, I show you how to add an expression based image to each row of the report. This is similar to displaying a checkbox column for Boolean values. A sample project is attached to the bottom of this post. To start off, download the project we created earlier from here. The report we created had a "Discontinued" column of type Boolean. We are going to change it to display an "available" icon or "unavailable" icon based on the "Discontinued" row value. Load the project and double click on Products.rdlc. With the report design surface active, you will see the "Report Data" tool window. Right click on the Images folder and select "Add Image..". Add the available_icon.png and discontinued_icon.png images (the sample project at the end of this post has the icon png files). You can see the images we added in the "Report Data" tool window. Drag and drop the available_icon into the "Discontinued" column row (not the header). We get a dialog box which allows us to set the image properties. We will add an expression that specifies the image to display based on the "Discontinued" value from the Product table. Click on the expression (fx) button and add the following expression:

        = IIf(Fields!Discontinued.Value = True, "discontinued_icon", "available_icon")

    Save and exit all dialog boxes. In the report design surface, resize the column header and change the text from "Discontinued" to "In Production". (Optional) Right click on the image cell (not header), go to "Image Properties.." and offset it by 5pt from the left. (Optional) Change the border color since it is not set by default for image columns. We are done adding our image column! Compile the application and run it. You will see that the "In Production" column has red 'x' icons for discontinued products. Download the VS 2010 sample project: NorthwindReportsImage.zip

  • Customize the Five Windows Folder Templates

    - by Mark Virtue
    Are you particular about the way Windows Explorer presents each folder's contents? Here we show you how to take advantage of Explorer's built-in templates, which cuts down the time it takes to do customizations. Note: The techniques in this article apply to Windows XP, Vista, and Windows 7. When opening a folder for the first time in Windows Explorer, we are presented with a standard default view of the files and folders in that folder. It may be that the items as presented are perfectly fine, but on the other hand, we may want to customize the view. The aspects of it that we can customize are the following:

        The display type (list view, details, tiles, thumbnails, etc)
        Which columns are displayed, and in which order
        The widths of the visible columns
        The order in which the files and folders are sorted
        Any file groupings

    Thankfully, Windows offers us a shortcut. A particular folder's settings can be used as a "template" for other, similar folders. In fact, we can store up to five separate sets of folder presentation configurations. Once we save the settings for a particular template, that template can then be applied to other folders.

    Customize Your First Folder

    We'll start by setting up the first of our templates – the default one. Once we create this template and apply it, the vast majority of the folders in our file system will change to match it, so it's important that we set it up very carefully. The first step in creating and applying the template is to customize one folder with the settings that all the rest will have. Choose a folder that is typical of the folders that you wish to have this default template. Select it in Windows Explorer. To ensure that it is a suitable candidate, right-click the folder name and select Properties, then go to the Customize tab. Ensure that this folder is marked as General Items. If it is not, either choose a different folder or select General Items from the list. Click OK. Now we're ready to customize our first folder. Changing the way one single folder is presented is straightforward. We start with the folder's display type. Click the Change your view button in the top-right corner of every Explorer window. Each time you click the button, the folder's view cycles to the next view type. Alternatively you can click the little down-arrow next to the button to see all the display types at once, and select the one you want. Click the view you want, or drag the slider next to the one you want. If you have chosen Details, then the next thing you may wish to change is which columns are displayed, and the order of these. To choose which columns are displayed, simply right-click on any column heading. A list of the columns currently being displayed appears. Simply uncheck a column if you don't want it displayed, and check the columns that you want displayed. If you want some information displayed about your files that is not listed here, then click the More… button for a full list of file attributes. There are a lot of them! To change the order of the columns that are currently being displayed, simply click on a column heading and drag it to where you think it should be. To change the width of a column, click the line that represents the right-hand edge of the column and drag it left or right. To sort by a column, click once on that column. To reverse the sort order, click that same column again. To change the groupings of the files in the folder, right-click in a blank area of the folder, select Group by, and select the appropriate column.

    Apply This Default Template to All Similar Folders

    Once you have the folder exactly the way you want it, we now use this folder as our default template for most of the folders in our file system. To do this, ensure that you are still in the folder you just customized, and then, from the Organize menu in Explorer, click on Folder and search options. Then select the View tab and click the Apply to Folders button. After you've clicked OK, visit some of the other folders in your file system. You should see that most have taken on these new settings. What we've just done, in effect, is customize the General Items template. This is one of five templates that Windows Explorer uses to display folder contents. The five templates are called (in Windows 7): General Items, Documents, Pictures, Music, Videos. When a folder is opened, Windows Explorer examines the contents to see if it can automatically determine which folder template to use to display the folder contents. If it is not obvious that the folder contents fall into any of the last four templates, then Windows Explorer chooses the General Items template. That's why most of the folders in your file system are shown using the General Items template.

    Changing the Other Four Templates

    If you want to adjust the other four templates, the process is very similar to what we've just done. If you wanted to change the "Music" template, for example, the steps would be as follows:

        Select a folder that contains music items
        Apply the existing Music template to the folder (even if it doesn't look like you want it to)
        Customize the folder to your personal preferences
        Apply the new template to all "Music" folders

    A fifth step would be: when you open a folder that contains music items but is not automatically displayed using the Music template, you manually select the Music template for that folder. First, select a folder that contains music items. It will probably be displayed using the existing Music template. Next, ensure that it is using the Music template. If it's not, then manually select the Music template. Next, customize the folder to suit your personal preferences (here we've added a couple of columns, and sorted by Artist). Now we can set this view to be our Music template. Choose Organize, then the View tab, and click the Apply to Folders button. Note: The only folders that will inherit these settings are the ones that are currently (or will soon be) using the Music template. Now, if you have any folder that contains music items, and you want it to inherit all of these settings, then right-click the folder name, choose Properties, and select that this folder should use the Music template. You can also check the box entitled Also apply this template to all subfolders if you want to save yourself even more time with all the sub-folders.

    Conclusion

    It's neat to be able to set up templates for your folder views like this. It's a shame that Microsoft didn't take the concept just a little further and allow you to create as many templates as you want.

  • Whitepaper list for the application framework

    - by Rick Finley
    We're reposting the list of technical whitepapers for the Oracle ETPM framework (called OUAF, Oracle Utilities Application Framework). These are available from My Oracle Support at the Doc Ids mentioned below. Some have been updated in the last few months to reflect new advice and new features. This is reposted from the OUAF blog: http://blogs.oracle.com/theshortenspot/entry/whitepaper_list_as_at_november

        559880.1 - ConfigLab Design Guidelines: This whitepaper outlines how to design and implement a data management solution using the ConfigLab facility. This whitepaper currently only applies to the following products: Oracle Utilities Customer Care And Billing, Oracle Enterprise Taxation Management, Oracle Enterprise Taxation and Policy Management.
        560367.1 - Technical Best Practices for Oracle Utilities Application Framework Based Products: Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.
        560382.1 - Performance Troubleshooting Guideline Series: A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows: Concepts - general concepts and performance troubleshooting processes; Client Troubleshooting - general troubleshooting of the browser client with common issues and resolutions; Network Troubleshooting - general troubleshooting of the network with common issues and resolutions; Web Application Server Troubleshooting - general troubleshooting of the Web Application Server with common issues and resolutions; Server Troubleshooting - general troubleshooting of the operating system with common issues and resolutions; Database Troubleshooting - general troubleshooting of the database with common issues and resolutions; Batch Troubleshooting - general troubleshooting of the background processing component of the product with common issues and resolutions.
        560401.1 - Software Configuration Management Series: A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. The individual whitepapers are as follows: Concepts - general concepts and introduction; Environment Management - principles and techniques for creating and managing environments; Version Management - integration of version control and version management of configuration items; Release Management - packaging configuration items into a release; Distribution - distribution and installation of releases across environments; Change Management - generic change management processes for product implementations; Status Accounting - status reporting techniques using product facilities; Defect Management - generic defect management processes for product implementations; Implementing Single Fixes - discussion on the single fix architecture and how to use it in an implementation; Implementing Service Packs - discussion on the service packs and how to use them in an implementation; Implementing Upgrades - discussion on the upgrade process and common techniques for minimizing the impact of upgrades.
        773473.1 - Oracle Utilities Application Framework Security Overview: A whitepaper summarizing the security facilities in the framework. Now includes references to other Oracle security products supported.
        774783.1 - LDAP Integration for Oracle Utilities Application Framework based products (Updated!): A generic whitepaper summarizing how to integrate an external LDAP based security repository with the framework.
        789060.1 - Oracle Utilities Application Framework Integration Overview: A whitepaper summarizing all the various common integration techniques used with the product (with case studies).
        799912.1 - Single Sign On Integration for Oracle Utilities Application Framework based products: A whitepaper outlining a generic process for integrating an SSO product with the framework.
        807068.1 - Oracle Utilities Application Framework Architecture Guidelines: This whitepaper outlines the different variations of architecture that can be considered. Each variation includes advice on configuration and other considerations.
        836362.1 - Batch Best Practices for Oracle Utilities Application Framework based products: This whitepaper outlines the common and best practices implemented by sites all over the world.
        856854.1 - Technical Best Practices V1 Addendum: Addendum to Technical Best Practices for Oracle Utilities Customer Care And Billing V1.x only.
        942074.1 - XAI Best Practices: This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.
        970785.1 - Oracle Identity Manager Integration Overview: This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework Based Products and Oracle Identity Manager used to provision user and user group security information. For FW4.x customers use whitepaper 1375600.1 instead.
        1068958.1 - Production Environment Configuration Guidelines: A whitepaper outlining common production level settings for the products based upon benchmarks and customer feedback.
        1177265.1 - What's New In Oracle Utilities Application Framework V4?: Whitepaper outlining the major changes to the framework since Oracle Utilities Application Framework V2.2.
        1290700.1 - Database Vault Integration: Whitepaper outlining the Database Vault integration solution provided with Oracle Utilities Application Framework V4.1.0 and above.
        1299732.1 - BI Publisher Guidelines for Oracle Utilities Application Framework: Whitepaper outlining the interface between BI Publisher and the Oracle Utilities Application Framework.
        1308161.1 - Oracle SOA Suite Integration with Oracle Utilities Application Framework based products: This whitepaper outlines common design patterns and guidelines for using Oracle SOA Suite with Oracle Utilities Application Framework based products.
        1308165.1 - MPL Best Practices for Oracle Utilities Application Framework: This is a guidelines whitepaper for products shipping with the Multi-Purpose Listener. This whitepaper currently only applies to the following products: Oracle Utilities Customer Care And Billing, Oracle Enterprise Taxation Management, Oracle Enterprise Taxation and Policy Management.
        1308181.1 - Oracle WebLogic JMS Integration with the Oracle Utilities Application Framework: This whitepaper covers the native integration of Oracle WebLogic JMS with the Oracle Utilities Application Framework using the new Message Driven Bean functionality and real time JMS adapters.
        1334558.1 - Oracle WebLogic Clustering for Oracle Utilities Application Framework (New!): This whitepaper covers the process for implementing clustering using Oracle WebLogic for Oracle Utilities Application Framework based products.
        1359369.1 - IBM WebSphere Clustering for Oracle Utilities Application Framework (New!): This whitepaper covers the process for implementing clustering using IBM WebSphere for Oracle Utilities Application Framework based products.
        1375600.1 - Oracle Identity Management Suite Integration with the Oracle Utilities Application Framework (New!): This whitepaper covers the integration between Oracle Utilities Application Framework and Oracle Identity Management Suite components such as Oracle Identity Manager, Oracle Access Manager, Oracle Adaptive Access Manager, Oracle Internet Directory and Oracle Virtual Directory.
        1375615.1 - Advanced Security for the Oracle Utilities Application Framework (New!): This whitepaper covers common security requirements and how to meet those requirements using Oracle Utilities Application Framework native security facilities, security provided with the J2EE Web Application and/or facilities available in Oracle Identity Management Suite.

  • SQL SERVER – Online Index Rebuilding Index Improvement in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where you see something working and you feel it should not be working? Well, I had a similar moment a few days ago. I know that SQL Server 2008 supports online indexing. However, I also know that I cannot rebuild an index ONLINE if I have used VARCHAR(MAX), NVARCHAR(MAX) or a few other data types. While I held my belief very strongly, I came across a situation where I had to go online and do a little bit of reading from Books Online. Here is a similar example. First of all, run the following code in SQL Server 2008 or SQL Server 2008 R2:

        USE TempDB
        GO
        CREATE TABLE TestTable (ID INT, FirstCol NVARCHAR(10), SecondCol NVARCHAR(MAX))
        GO
        CREATE CLUSTERED INDEX [IX_TestTable] ON TestTable (ID)
        GO
        CREATE NONCLUSTERED INDEX [IX_TestTable_Cols] ON TestTable (FirstCol) INCLUDE (SecondCol)
        GO
        USE [tempdb]
        GO
        ALTER INDEX [IX_TestTable_Cols] ON [dbo].[TestTable] REBUILD WITH (ONLINE = ON)
        GO
        DROP TABLE TestTable
        GO

    Now run the same code in SQL Server 2012. Observe the difference between the two executions. You will get the following resultsets. In SQL Server 2008/R2 it will throw the following error:

        Msg 2725, Level 16, State 2, Line 1
        An online operation cannot be performed for index 'IX_TestTable_Cols' because the index contains column 'SecondCol' of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.

    In SQL Server 2012 it will run successfully and will not throw any error:

        Command(s) completed successfully.

    I always thought it would throw an error if there were VARCHAR(MAX) or NVARCHAR(MAX) columns used in the table schema definition. When I saw this result it was clear to me that this was for sure not a bug but an enhancement in SQL Server 2012. As a matter of fact, I always wanted this feature to be added to the SQL Server Engine, as it enables ONLINE index rebuilding for mission critical tables which need to be always online. I quickly searched online and landed on Jacob Sebastian's blog, where he has blogged about it as well. Well, is there any other new feature in SQL Server 2012 which gave you a good surprise? Reference: Pinal Dave (http://blog.sqlauthority.com)

  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria, to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter. To create a table that allows data filtering, drag a View Object – or a data collection of a Web Service or JPA business service – from the Data Controls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior. To change the input filter so only a defined set of input keys is allowed, you need to replace the default filter field with your own af:inputText field, to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column whose filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out as there is no custom facet defined. In a next step, open the Component Palette (ctrl+shift+P) and drag an Input Text component onto the facet. This marks the first part of the filter customization. To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes. Now that the filter works as before – though using a custom filter input component – you can add the af:clientListener tag to your custom filter component – af:inputText – to call out to JavaScript when users type in the column filter field. Point the af:clientListener method property to a JavaScript function that you reference or add through the af:resource tag, and set the type property value to keyDown.

        <af:document id="d1">
            <af:resource type="javascript" source="/js/filterHandler.js"/>
            …

    The filter definition looks as shown below:

        <af:inputText label="Label 1" id="it1"
                      value="#{vs.filterCriteria.Employe
            <af:clientListener method="suppressCharacterInput"
                               type="keyDown"/>
        </af:inputText>

    The JavaScript code that you can use to either filter character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag.

        //Allow numbers, cursor control keys and delete keys
        function suppressCharacterInput(evt) {
            var _keyCode = evt.getKeyCode();
            var _filterField = evt.getCurrentTarget();
            var _oldValue = _filterField.getValue();
            if (!((_keyCode < 57) || (_keyCode > 96 && _keyCode < 105))) {
                _filterField.setValue(_oldValue);
                evt.cancel();
            }
        }

        //Allow characters, cursor control keys and delete keys
        function suppressNumericInput(evt) {
            var _keyCode = evt.getKeyCode();
            var _filterField = evt.getCurrentTarget();
            var _oldValue = _filterField.getValue();
            //check for numbers
            if ((_keyCode < 57 && _keyCode > 47) ||
                (_keyCode > 96 && _keyCode < 105)) {
                _filterField.setValue(_oldValue);
                evt.cancel();
            }
        }

    But what if browsers don't allow JavaScript? Don't worry about this. If browsers did not support JavaScript then ADF Faces as a whole would not work, and you would have a different problem.

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
    The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let's run a query against this DMV so we can analyze the results:

        SELECT * FROM sys.dm_tran_locks

    As we can see, there is a lot of lock information returned from this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the id of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition. When you have a table that is partitioned, locks can be escalated to the partition level before going to a table level lock. The request_mode column gives us information about the type of lock that is being requested. From the screenshots above we see RangeS-S locks, which represent shared range locks, and IS locks, which represent intent shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190345.aspx Follow me on Twitter @PrimeTimeDBA
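    For example, a quick sketch that narrows the output to locks in the current database, using only the columns discussed above:

        SELECT resource_type, request_mode, request_status, request_session_id
        FROM sys.dm_tran_locks
        WHERE resource_database_id = DB_ID();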

  • SSH error: Permission denied, please try again

    - by Kamal
    I am new to Ubuntu, hence please forgive me if the question is too simple. I have an Ubuntu server set up using an Amazon EC2 instance. I need to connect my desktop (which is also an Ubuntu machine) to the Ubuntu server using SSH. I have installed OpenSSH on the Ubuntu server. I need all systems on my network to connect to the Ubuntu server using SSH (no need to connect through pem or pub keys), hence I opened SSH port 22 for my static IP in the security groups (AWS). My sshd_config file is:

        # Package generated configuration file
        # See the sshd_config(5) manpage for details
        # What ports, IPs and protocols we listen for
        Port 22
        # Use these options to restrict which interfaces/protocols sshd will bind to
        #ListenAddress ::
        #ListenAddress 0.0.0.0
        Protocol 2
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        HostKey /etc/ssh/ssh_host_ecdsa_key
        #Privilege Separation is turned on for security
        UsePrivilegeSeparation yes
        # Lifetime and size of ephemeral version 1 server key
        KeyRegenerationInterval 3600
        ServerKeyBits 768
        # Logging
        SyslogFacility AUTH
        LogLevel INFO
        # Authentication:
        LoginGraceTime 120
        PermitRootLogin yes
        StrictModes yes
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile %h/.ssh/authorized_keys
        # Don't read the user's ~/.rhosts and ~/.shosts files
        IgnoreRhosts yes
        # For this to work you will also need host keys in /etc/ssh_known_hosts
        RhostsRSAAuthentication no
        # similar for protocol version 2
        HostbasedAuthentication no
        # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
        #IgnoreUserKnownHosts yes
        # To enable empty passwords, change to yes (NOT RECOMMENDED)
        PermitEmptyPasswords no
        # Change to yes to enable challenge-response passwords (beware issues with
        # some PAM modules and threads)
        ChallengeResponseAuthentication no
        # Change to no to disable tunnelled clear text passwords
        #PasswordAuthentication yes
        # Kerberos options
        #KerberosAuthentication no
        #KerberosGetAFSToken no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        # GSSAPI options
        #GSSAPIAuthentication no
        #GSSAPICleanupCredentials yes
        X11Forwarding yes
        X11DisplayOffset 10
        PrintMotd no
        PrintLastLog yes
        TCPKeepAlive yes
        #UseLogin no
        #MaxStartups 10:30:60
        #Banner /etc/issue.net
        # Allow client to pass locale environment variables
        AcceptEnv LANG LC_*
        Subsystem sftp /usr/lib/openssh/sftp-server
        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication and
        # PasswordAuthentication. Depending on your PAM configuration,
        # PAM authentication via ChallengeResponseAuthentication may bypass
        # the setting of "PermitRootLogin without-password".
        # If you just want the PAM account and session checks to run without
        # PAM authentication, then enable this but set PasswordAuthentication
        # and ChallengeResponseAuthentication to 'no'.
        UsePAM yes

    Through webmin (command shell), I have created a new user named 'senthil' and added this new user to the 'sudo' group:

        sudo adduser -y senthil
        sudo adduser senthil sudo

    I tried to log in using this new user 'senthil' in webmin and was able to log in successfully. When I tried to connect to the Ubuntu server from my terminal through SSH,

        ssh senthil@SERVER_IP

    it asked me to enter a password. After the password entry, it displayed:

        Permission denied, please try again.

    On some research I realized that I need to monitor my server's auth log for this. I got the following error in my auth log (/var/log/auth.log):

        Jul 2 09:38:07 ip-192-xx-xx-xxx sshd[3037]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=MY_CLIENT_IP user=senthil
        Jul 2 09:38:09 ip-192-xx-xx-xxx sshd[3037]: Failed password for senthil from MY_CLIENT_IP port 39116 ssh2

    When I tried to debug using ssh -v senthil@SERVER_IP:

        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to SERVER_IP [SERVER_IP] port 22.
        debug1: Connection established.
        debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa-cert type -1
        debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa type -1
        debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa-cert type -1
        debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa type -1
        debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1
        debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: ECDSA {SERVER_HOST_KEY}
        debug1: Host 'SERVER_IP' is known and matches the ECDSA host key.
        debug1: Found key in {MY-WORKSPACE}/.ssh/known_hosts:1
        debug1: ssh_ecdsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: password
        debug1: Next authentication method: password
        senthil@SERVER_IP's password:
        debug1: Authentications that can continue: password
        Permission denied, please try again.
        senthil@SERVER_IP's password:

    For the password, I have entered the same value which I normally use for the 'ubuntu' user. Can anyone please guide me to where the issue is and suggest a solution?
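    A hedged first check rather than a definitive fix (the auth log shows plain password failures, so the usual suspects are the account's password itself and the password-authentication setting):

        # Make sure the account really has the password you think it has
        sudo passwd senthil
        # Enable password authentication explicitly instead of relying on the commented default
        sudo sed -i 's/^#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config
        # Restart sshd so the change takes effect
        sudo service ssh restart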

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people adopt this functionality, we have been getting more questions about how incremental statistics is expected to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they ran DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions affected by the DML had statistics re-gathered on them. This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table. It might be easier to demonstrate this using an example. Let's take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on it. After the statistics gather, the last_analyzed date for the table and all of its partitions shows 13-Mar-12, and we have a baseline set of column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table. So, now that we have established a good baseline, let's move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let's assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred, we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and its partitions, we will see that the global statistics and the statistics on the partitions where rows changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update. So, incremental statistics maintenance will gather statistics on any partition whose data has changed, where that change will impact the global-level statistics.
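To make the walkthrough concrete, here is a minimal sketch of the statements involved; the SH schema owner is an assumption, while the DBMS_STATS calls are the documented API (note incremental mode requires the AUTO_SAMPLE_SIZE estimate):

-- Enable incremental statistics maintenance for the partitioned table
exec DBMS_STATS.SET_TABLE_PREFS('SH', 'ORDERS2', 'INCREMENTAL', 'TRUE');

-- Gather statistics; partition synopses are aggregated into global statistics
exec DBMS_STATS.GATHER_TABLE_STATS('SH', 'ORDERS2', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);

-- After the DML, check which partitions were re-analyzed
SELECT partition_name, last_analyzed
FROM   dba_tab_partitions
WHERE  table_name = 'ORDERS2'
ORDER  BY partition_position;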

    Read the article

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
A few years of ADF experience means I see common mistakes made by different developers, some of which I regularly make myself.  This post is designed to help beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall: the case of the phantom ADF developer [add Scooby-Doo music here].

ADF Business Components - triggers, default table values and instead-of views

Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects.  This is where unexpected problems can sneak in.

The crime

Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

...where X is the primary key value of the row at hand.  In a production environment with multiple users this error may be legit: one of the other users has updated the row since you queried it.  Yet in a development environment this error is just plain confusing.  If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible?  There are no other users!  It must be the phantom ADF developer! [insert dramatic music here]

The picture above is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page.

A false conclusion

What can possibly cause this issue if it isn't our phantom ADF developer?  Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user?  How can our phantom ADF developer even take out a lock if this is the case?  Maybe ADF has a bug, maybe ADF isn't implementing record locking at all?  Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014?

Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy.  It is determined by the Application Module's jbo.locking.mode property.  The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic; the other valid value is pessimistic.

Next we need a mechanism to check that ADF is issuing the LOCK statements to the database.  We could ask DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console.  At runtime this option turns on instrumentation within the ADF BC layer which, among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
Setting our locking mode to pessimistic, then opening the Business Components Browser or a JSF page that allows us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see among others the following log entries:

[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206

As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database.  Later, when we commit the record, we see:

[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ...
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[449] EntityCache close prepared statement

...and as a result the changes are saved to the database, and the lock is released.

Let's see what happens when we use the optimistic locking mode, this time changing the CHARGEABLE column of a BOOKINGS record again.  As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our record by issuing a commit, the following is recorded in the logs:

[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ...
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[521] EntityCache close prepared statement

Again, even though we see the mid-tier delay the LOCK statement until commit time, it does in fact occur on line 512, and the lock is released as part of the commit issued on line 519.  Therefore with either optimistic or pessimistic locking a lock is indeed issued.

Our conclusion at this point must be: unless there's the unlikely cause that the LOCK statement never really hits the database, or the even less likely cause that the database has a bug, then ADF does in fact take out a lock on the record before allowing the current user to update it.  So there's no way our phantom ADF developer could even modify the record if he tried, without at least someone receiving a lock error.

Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.

Who is the phantom?

At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legit, even though we don't yet understand what's causing it.
This leads on to two further questions: how does ADF know another user has changed the row, and what's been changed anyway?

To answer the first question, how does ADF know another user has changed the row, the Fusion Guide's section 4.10.11, "How to Protect Against Losing Simultaneously Updated Data", which details the Entity Object Change-Indicator property, gives us the answer:

At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database.

The guide further suggests how to make this solution more efficient:

You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row.....To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values.

We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates; rather it keeps a copy of the original record fetched, separate to the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out.  If the values don't match, be it with the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error.

This leaves one last question.  Now that we know the mechanism by which ADF identifies a changed row, what we don't know is what's changed and who changed it.

The real culprit

What's changed?  We know the record in the mid-tier has been changed by the user; however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed.  This leaves us to conclude the database record has changed, but how and by whom?

There are three potential causes:

Database triggers

The database trigger, among other uses, can be configured to fire PLSQL code on a database table insert, update or delete.  In particular, in an insert or update the trigger can override the value assigned to a particular column.  The trigger execution is actioned by the database on behalf of the user initiating the insert or update action.

Why this causes the issue specific to our ADF use is that when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database.  However the cached record is instantly out of date, as the database triggers have modified the record that was actually written to the database.  Thus when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, and it detects the record has been changed, giving us JBO-25014. This is probably the most common cause of this problem.

Default values

A second reason this issue can occur is another database feature, default column values.
When creating a database table, the schema designer can define default values for specific columns.  For example, a CREATED_BY column could be set to SYSDATE, or a flag column to Y or N.  Default values are only used by the database when a user inserts a new record and the specific column is assigned NULL.  The database in this case will overwrite the column with the default value.

As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.

Instead-of-trigger views

I must admit I haven't double-checked this scenario, but it seems plausible: that of the Oracle database's instead-of-trigger view (sometimes referred to as instead-of views).  A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree.  In order to support fully insertable, updateable and deletable views, Oracle introduced the instead-of view, which gives the view designer the ability to define not only the view query, but also a set of programmatic PLSQL triggers where the developer can define their own logic for inserts, updates and deletes.

While this provides the database programmer a very powerful feature, it can cause issues for our ADF application.  On inserting or updating a record in the instead-of view, the record and data that go in are not necessarily the data that come out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.

Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database.  The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion.  Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record.

At this point, under project timeline pressures, the obvious fix for developers is to drop both database triggers and default values from the underlying tables.  However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems.  Dropping the database triggers or default values that existing Oracle Forms applications assume and require to be in place could cause unexpected behaviour and bugs in the Forms application.  Proficient software engineers would recognize that such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise; not ideal.

Solving the mystery once and for all

Luckily ADF has built-in functionality to deal with this issue, which is no surprise, as Oracle, the author of ADF, also built the database and is fully aware of the Oracle database's feature set.  The answer lies at the Entity Object attribute level: the Refresh After Insert and Refresh After Update properties.
Simply selecting these instructs ADF BC, after inserting or updating a record in the database, to expect the database to modify the said attributes and to read a copy of the changed attributes back into its cached mid-tier record.  Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue.

[Post edit: as Oracle's Steven Davelaar correctly points out in the comments below, the above solution will not work for instead-of-trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view]

Alternatively you can set the Change Indicator on one of the attributes.  This will work as long as the relating column for the attribute in the database itself isn't inadvertently updated.  In turn you're possibly just masking the issue rather than solving it, because if another developer turns the Change Indicator back off, the original issue will return.
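To make the most common cause concrete, here is a minimal sketch of the kind of trigger involved; the BOOKINGS table comes from the article, but the LAST_MODIFIED column and trigger name are hypothetical additions:

CREATE OR REPLACE TRIGGER bookings_biu
  BEFORE INSERT OR UPDATE ON bookings
  FOR EACH ROW
BEGIN
  -- The database silently rewrites this column after ADF has already cached
  -- its copy of the row, so ADF's next comparison fails with JBO-25014
  -- unless the matching attribute is marked Refresh After Insert/Update.
  :NEW.last_modified := SYSDATE;
END;
/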

    Read the article

  • Extending QuickBooks Reporting with the QuickBooks ADO.NET Data Provider

    - by dataintegration
The ADO.NET Provider for QuickBooks comes with several reports you may request from QuickBooks by default. However, there are many more that are not readily available. The ADO.NET Provider for QuickBooks makes it easy for you to create new reports and customize existing ones. In this article, we will illustrate how to create your own report and retrieve it from the Server Explorer in Visual Studio. For this example we will show how to create an Item Profitability Report.

Creating the report script file

Step 1: Download the sample reports available here. Extract them to a folder of your choice.

Step 2: Make a copy of the ReportGeneralSummary.rsd file and rename it to ItemProfitability.rsd. Then open the file in any text editor.

Step 3: Open the installation directory of the ADO.NET Provider for QuickBooks. Under the \db\ folder, locate the ReportJob.rsb file. Open this file in another text editor.

Note: Although we are using ReportJob.rsb for this example, other reports may be contained in other Report*.rsb files. We recommend consulting the included help file and first locating the Report stored procedure and ReportType you are looking for. Otherwise, you may open each Report*.rsb file and look under the "reporttype" input for the report you are attempting to create.

Step 4: First, let's rename the title of ItemProfitability.rsd. Near the top of the file you will see a title and description. Change the title to match the name of the file. Change the description to anything you like. For example:

<rsb:info title="ItemProfitability" description="Executes my custom report.">

Just below the title, there are a number of columns. The Id represents the row number. The RowType represents the type of data returned by QuickBooks. The ColumnValue* columns represent all of the column data returned by QuickBooks. In some instances, we may need to add additional ColumnValue columns.

Step 5: To add additional ColumnValue columns, simply copy the last column, paste it directly below, and continue increasing the numerical value at the end of the attribute name. For example:

<attr name="ColumnValue9" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/>
<attr name="ColumnValue10" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/>
<attr name="ColumnValue11" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/>
<attr name="ColumnValue12" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/>
...

Caution: Do not rename the ColumnValue* definitions themselves. They are generalized so that we can understand each type of report returned by QuickBooks. Renaming them to something other than ColumnValue* will cause your columns to return with null values.

Step 6: Now let's update the available inputs for the table. From the ReportJob.rsb file, copy all of the input elements into ItemProfitability under the "Psuedo-Column definitions" comment. You will be replacing the existing input elements in ItemProfitability with the inputs from ReportJob. When you are done, it should look like this:

<!-- Psuedo-Column definitions -->
<input name="reporttype" description="The type of the report." value="ITEMESTIMATESVSACTUALS,ITEMPROFITABILITY,JOBESTIMATESVSACTUALSDETAIL,JOBESTIMATESVSACTUALSSUMMARY,JOBPROFITABILITYDETAIL,JOBPROFITABILITYSUMMARY," default="ITEMESTIMATESVSACTUALS" />
<input name="reportperiod" description="Report date range in the format (fromdate:todate), and either value may be omitted for an open ended range (e.g. 2009-12-25:). Supported date format: yyyy-MM-dd." />
<input name="reportdaterangemacro" description="Use a predefined date range." value="ALL,TODAY,THISWEEK,THISWEEKTODATE,THISMONTH,THISMONTHTODATE,THISQUARTER,THISQUARTERTODATE,THISYEAR,THISYEARTODATE,YESTERDAY,LASTWEEK,LASTWEEKTODATE,LASTMONTH,LASTMONTHTODATE,LASTQUARTER,LASTQUARTERTODATE,LASTYEAR,LASTYEARTODATE,NEXTWEEK,NEXTFOURWEEKS,NEXTMONTH,NEXTQUARTER,NEXTYEAR," default="ALL" />
...

Step 7: Now let's update the operationname attribute. This needs to match the operationname used by ReportJob. After you have copied the correct value from ReportJob.rsb, the operationname in ItemProfitability should look like this:

<rsb:set attr="operationname" value="qbReportJob"/>

Step 8: There is one more thing we can do to make this a true Item Profitability report: we can remove the reporttype input and hardcode its value. To do this, copy and paste the rsb:set used for operationname, then rename the attr and value to the name and value you want to use. For example:

<rsb:set attr="operationname" value="qbReportJob"/>
<rsb:set attr="reporttype" value="ITEMPROFITABILITY"/>

After this you can remove the input for reporttype. Now that you have your own report file, we can move on to displaying the report in the Visual Studio Server Explorer.

Accessing the report through the Data Provider

Step 1: Open Visual Studio. In the Server Explorer, configure a new connection with the QuickBooks Data Provider.

Step 2: For the Location connection string property, enter the directory where the new report has been saved.

Step 3: The new report should appear as a new view in the Server Explorer. Let's retrieve data from it.

Step 4: You can specify any inputs in the WHERE clause.

New Report Example Script

To help you get started using this new QuickBooks Data Provider report, you will need to download the QuickBooks ADO.NET Data Provider and the fully functional sample script.

    Read the article

  • Ruby on Rails - Belongs_to and Active Admin not creating foreign ID

    - by Milo
I have the following setup:

class Category < ActiveRecord::Base
  has_many :products
end

class Product < ActiveRecord::Base
  belongs_to :category
  has_attached_file :photo, :styles => { :medium => "300x300>", :thumb => "200x200>" }
  validates_attachment_content_type :photo, :content_type => /\Aimage\/.*\Z/
end

ActiveAdmin.register Product do
  permit_params :title, :price, :slideshow, :photo, :category

  form do |f|
    f.inputs "Product Details" do
      f.input :title
      f.input :category
      f.input :price
      f.input :slideshow
      f.input :photo, :required => false, :as => :file
    end
    f.actions
  end

  show do |ad|
    attributes_table do
      row :title
      row :category
      row :photo do
        image_tag(ad.photo.url(:medium))
      end
    end
  end

  index do
    column :id
    column :title
    column :category
    column :price do |product|
      number_to_currency product.price
    end
    actions
  end
end

class ProductController < ApplicationController
  def create
    @product = Product.create(params[:id])
  end
end

Every time I create a product in ActiveAdmin, the category comes up empty. It won't populate the category_id column for the product; it just leaves it empty. What am I doing wrong?
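A hedged suggestion rather than a confirmed fix: ActiveAdmin's f.input :category submits the foreign key as params[:product][:category_id], so it is the column name, not the association name, that must be permitted:

ActiveAdmin.register Product do
  # Permit the foreign-key column itself; :category on its own is
  # discarded by strong parameters, leaving category_id blank on save.
  permit_params :title, :price, :slideshow, :photo, :category_id

  # ... form, show and index blocks unchanged ...
end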

    Read the article

  • Excel formula: can MATCH recognise 'n'&"01" or 'n'&"02"?

    - by Mike
    I have an Excel sheet (source) that has simple ID numbers in column A (01 to 40000). In another sheet (child) I have these same ID numbers in column A but with either an additional 01 or 02 added on; e.g. 0101 or 0102, 250001 or 250002, etc. Therefore this list of ID numbers is nearly twice as long. In column B there are figures. I'm trying to extract the data from column B in the child sheet, and based on whether it has a "01" or a "02" place the figure into either column B or C of the source sheet. My idea is to use INDEX/MATCH, but I'm not sure how the match would be written to take into account the NOT EXACT MATCH of the lookup value. MATCH(A1&"01",child!A1:A100000,). Any tips and links greatly appreciated. Mike.
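A hedged sketch of the INDEX/MATCH pair, assuming the child IDs are stored as text so the concatenated lookup value matches exactly; note the third MATCH argument must be 0 to force an exact match:

In source!B2:  =INDEX(child!$B$1:$B$100000, MATCH($A2&"01", child!$A$1:$A$100000, 0))
In source!C2:  =INDEX(child!$B$1:$B$100000, MATCH($A2&"02", child!$A$1:$A$100000, 0))

If the IDs are numeric rather than text, the concatenation produces a text value ("25"&"01" is the text "2501") that will not match the number 2501; storing both columns as text, or wrapping the child column in TEXT, avoids that mismatch.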

    Read the article

  • Excel scatter chart with multiple date ranges

    - by Abiel
    I have multiple blocks of time series data on an Excel sheet, with each block having its own set of dates. For example, I might have dates in column A, values in column B, and then dates in column D and values in column E. The values in B go with the dates in A, and the values in E go with the dates in D. The dates in A and D may not be the same. I would like to create a scatter chart with a time category axis that is the union of my two input date ranges in columns A and D. If I select all the data and then go insert chart (in Excel 2010), Excel treats only column A as the X axis, and looks at D as just another set of values. I can get Excel to do what I want by first just charting columns A and B, then selecting D and E and copy-pasting onto the chart. However, I would like to avoid this two-step procedure if possible.
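Since an XY scatter treats the horizontal axis as a value axis, the two date ranges merge onto one axis automatically once each block is added as its own series. A hedged VBA sketch of the copy-paste workaround (the ranges A2:A50/B2:B50 and D2:D50/E2:E50 are assumptions; adjust to the actual data blocks):

Sub TwoRangeScatter()
    Dim ch As Chart
    Set ch = ActiveSheet.Shapes.AddChart(xlXYScatterLines).Chart

    ' Remove any series Excel auto-detected from the surrounding data
    Do While ch.SeriesCollection.Count > 0
        ch.SeriesCollection(1).Delete
    Loop

    ' First block: dates in A, values in B
    With ch.SeriesCollection.NewSeries
        .Name = "Block 1"
        .XValues = ActiveSheet.Range("A2:A50")
        .Values = ActiveSheet.Range("B2:B50")
    End With

    ' Second block: dates in D, values in E
    With ch.SeriesCollection.NewSeries
        .Name = "Block 2"
        .XValues = ActiveSheet.Range("D2:D50")
        .Values = ActiveSheet.Range("E2:E50")
    End With
End Sub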

    Read the article

  • WIF, ADFS 2 and WCF – Part 6: Chaining multiple Token Services

    - by Your DisplayName here!
See the previous posts first. So far we have looked at the (simpler) scenario where a client acquires a token from an identity provider and uses that for authentication against a relying-party WCF service. Another common scenario is that the client first requests a token from an identity provider, and then uses this token to request a new token from a Resource STS or a partner's federation gateway. This sounds complicated, but is actually very easy to achieve using WIF's WS-Trust client support.

The sequence is like this:

1. Request a token from an identity provider. You use some "bootstrap" credential for that, like Windows integrated, UserName or a client certificate. The realm used for this request is the identifier of the Resource STS/federation gateway.
2. Use the resulting token to request a new token from the Resource STS/federation gateway. The realm for this request would be the ultimate service you want to talk to.
3. Use this resulting token to authenticate against the ultimate service.

Step 1 is very much the same as the code I showed in the last post. In the following snippet, I use a client certificate to get a token from my STS:

private static SecurityToken GetIdPToken()
{
    var factory = new WSTrustChannelFactory(
        new CertificateWSTrustBinding(SecurityMode.TransportWithMessageCredential),
        idpEndpoint);
    factory.TrustVersion = TrustVersion.WSTrust13;

    factory.Credentials.ClientCertificate.SetCertificate(
        StoreLocation.CurrentUser,
        StoreName.My,
        X509FindType.FindBySubjectDistinguishedName,
        "CN=Client");

    var rst = new RequestSecurityToken
    {
        RequestType = RequestTypes.Issue,
        AppliesTo = new EndpointAddress(rstsRealm),
        KeyType = KeyTypes.Symmetric
    };

    var channel = factory.CreateChannel();
    return channel.Issue(rst);
}

Using a token to request another token is slightly different. First the IssuedTokenWSTrustBinding is used, and second the channel factory extension methods are used to send the identity provider token to the Resource STS:

private static SecurityToken GetRSTSToken(SecurityToken idpToken)
{
    var binding = new IssuedTokenWSTrustBinding();
    binding.SecurityMode = SecurityMode.TransportWithMessageCredential;

    var factory = new WSTrustChannelFactory(
        binding,
        rstsEndpoint);
    factory.TrustVersion = TrustVersion.WSTrust13;
    factory.Credentials.SupportInteractive = false;

    var rst = new RequestSecurityToken
    {
        RequestType = RequestTypes.Issue,
        AppliesTo = new EndpointAddress(svcRealm),
        KeyType = KeyTypes.Symmetric
    };

    factory.ConfigureChannelFactory();
    var channel = factory.CreateChannelWithIssuedToken(idpToken);
    return channel.Issue(rst);
}

For this particular case I chose an ADFS endpoint for issued token authentication (see part 1 for more background). Calling the service now works exactly as I described in my last post. You may now wonder if the same thing can also be achieved using configuration only: absolutely. But there are some gotchas. First of all, the configuration file becomes quite complex. As we discussed in part 4, the bindings must be nested for WCF to unwind the token call-stack. But in this case svcutil cannot resolve the first hop, since it cannot use metadata to inspect the identity provider. This binding must be supplied manually. The other issue is around the value for the realm/appliesTo when requesting a token for the R-STS.
Using the manual approach you have full control over that parameter, and you can simply use the R-STS issuer URI. Using the configuration approach, the exact address of the R-STS endpoint will be used. This means that you may have to register multiple R-STS endpoints in the identity provider. Another issue you will run into is that ADFS only accepts its configured issuer URI as a known realm by default. You'd have to manually add more audience URIs for the specific endpoints using the ADFS PowerShell cmdlets. I prefer the "manual" approach. That's it. Hope this is useful information.
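Putting the helpers together, the whole chain is three calls; a minimal sketch, where the endpoint and realm fields are the same assumed values as in the snippets above:

// Hop 1: bootstrap credential buys an IdP token (realm = R-STS identifier)
var idpToken = GetIdPToken();

// Hop 2: IdP token buys an R-STS token (realm = ultimate service)
var serviceToken = GetRSTSToken(idpToken);

// Hop 3: authenticate against the relying-party service with serviceToken,
// e.g. via CreateChannelWithIssuedToken(serviceToken) on the service's factory.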

    Read the article

  • How can I compare two columns in Excel to highlight words that don't match?

    - by Jez Vander Brown
(I'm using Microsoft Excel 2010.) OK, let's say I have a list of phrases in both column A and column B (see screenshot below). What I would like to happen, whether it be with a macro, VBA or a formula, is: if there is a word in any cell in column A that isn't any of the words in any cell in column B, highlight that word in red. For example: cell A9 contains the word "buy", but the word buy isn't mentioned anywhere in column B, so I would like the word buy to be highlighted in red. How can I accomplish this? (I think a macro/VBA would be the best option, but I have no idea how to create it, or even if it's possible.)
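A hedged VBA sketch of one way to do this; it assumes words are separated by single spaces, ignores punctuation, and that both columns contain literal text (the Characters approach cannot color parts of formula results):

Sub HighlightMissingWords()
    ' Colors red any space-separated word in column A that never
    ' appears as a word anywhere in column B (case-insensitive).
    Dim dict As Object, c As Range, w As Variant
    Dim words() As String, i As Long, pos As Long

    Set dict = CreateObject("Scripting.Dictionary")
    dict.CompareMode = vbTextCompare

    ' Collect every word used in column B
    For Each c In Intersect(Columns("B"), ActiveSheet.UsedRange).Cells
        For Each w In Split(CStr(c.Value), " ")
            If Len(w) > 0 Then dict(CStr(w)) = True
        Next w
    Next c

    ' Scan column A and color the words with no match in B
    For Each c In Intersect(Columns("A"), ActiveSheet.UsedRange).Cells
        words = Split(CStr(c.Value), " ")
        pos = 1
        For i = LBound(words) To UBound(words)
            If Len(words(i)) > 0 Then
                If Not dict.Exists(words(i)) Then
                    c.Characters(pos, Len(words(i))).Font.Color = vbRed
                End If
            End If
            pos = pos + Len(words(i)) + 1
        Next i
    Next c
End Sub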

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >