Search Results

Search found 59326 results on 2374 pages for 'full text search'.

Page 49 of 2374

  • How can I replace/speed up a text search query which uses LIKE?

    - by Jules
    I'm trying to speed up my query: select PadID from Pads WHERE (keywords like '%$search%' or ProgramName like '%$search%' or English45 like '%$search%') AND RemovemeDate = '2001-01-01 00:00:00' ORDER BY VersionAddDate DESC. I've done some work already: I have a keywords table, so I can add ... PadID IN (SELECT PadID FROM Keywords WHERE word = '$search') ... However, it's going to be a nightmare to split up the words from English45 and ProgramName into a word table. Any ideas? EDIT: (also provided actual table names)
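
    A minimal sketch of the usual alternative, assuming a MySQL backend (suggested by the PHP-style $search placeholder) and the column names from the question; the index name is made up. A FULLTEXT index lets the engine look words up directly instead of scanning every row the way a leading-wildcard LIKE must:

      ALTER TABLE Pads
        ADD FULLTEXT INDEX ft_pads_search (keywords, ProgramName, English45);

      -- Same filter as the original query, without the '%...%' table scan
      SELECT PadID
      FROM Pads
      WHERE MATCH (keywords, ProgramName, English45) AGAINST ('$search')
        AND RemovemeDate = '2001-01-01 00:00:00'
      ORDER BY VersionAddDate DESC;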

    Read the article

  • SQL SERVER – FT_IFTS_SCHEDULER_IDLE_WAIT – Full Text – Wait Type – Day 13 of 28

    - by pinaldave
    In the last few days of this series, I have received many questions about this wait type. It would be great if you read my original wait stats query in the first post, because I have filtered this wait type out in the WHERE clause. However, I still get questions about it being one of the wait types people encounter most. The truth is, this is background task processing; it really does not matter and it should be filtered out. There are many new wait types related to Full Text Search that were introduced in SQL Server 2008. If you run the following query, you will be able to find them in the list. Currently there is not enough information available about all of them in BOL or anywhere else, but don't worry; I will write an in-depth article when I learn more about them.

      SELECT * FROM sys.dm_os_wait_stats WHERE wait_type LIKE 'FT_%'

    The result set will contain the following rows: FT_RESTART_CRAWL, FT_METADATA_MUTEX, FT_IFTSHC_MUTEX, FT_IFTSISM_MUTEX, FT_IFTS_RWLOCK, FT_COMPROWSET_RWLOCK, FT_MASTER_MERGE, FT_IFTS_SCHEDULER_IDLE_WAIT. We have established so far that there is not much information available. But when you do see this wait type, what should you do? The answer is to filter it out for the moment (i.e., do not pay attention to it) and focus on more pressing issues in wait stats or performance tuning. Here are two informal suggestions, which are totally independent of wait stats: turn off the Full Text Search service on your system if you are not actually using it on your server, and learn proper Full Text Search methodology; you can get Michael Coles' book, Pro Full-Text Search in SQL Server 2008. Now I invite you to share your suggestions or any input regarding Full Text-related best practices and wait stats issues. Please leave a comment. Note: The information presented here is from my experience, and I do not claim it to be accurate; I suggest reading Books Online for further clarification. All the discussions of wait stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
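
    As a rough sketch (not the author's original filtered query), "filter them out" looks something like this against the same DMV:

      -- Review cumulative waits while ignoring the Full Text background waits
      SELECT wait_type,
             waiting_tasks_count,
             wait_time_ms
      FROM sys.dm_os_wait_stats
      WHERE wait_type NOT LIKE 'FT[_]%'   -- [_] escapes the underscore wildcard
        AND wait_time_ms > 0
      ORDER BY wait_time_ms DESC;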

    Read the article

  • SQL SERVER – Transaction Log Full – Transaction Log Larger than Data File – Notes from Fields #001

    - by Pinal Dave
    I am very excited to announce a new series on this blog – Notes from Fields. I have been blogging for almost 7 years and it has been a wonderful experience. Though I have extensive experience with SQL and databases, it is always a good idea to consult experts for their advice and opinion. Following that thought, I have started this new Notes from Fields series, in which we will have notes from various experts in the database world. My friends at Linchpin People have graciously decided to support me in this new initiative. Linchpin People are database coaches and wellness experts for a data-driven world. In this very first episode of the Notes from Fields series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and developers face in their careers: the database log filling up the hard drive, or the database log growing larger than the data file. Read Tim's experience in his own words. As a consultant, I encounter a number of common issues with clients. One of the more common things I find is a user database in the FULL recovery model that does not take regular transaction log backups, or has never had a transaction log backup. When I find this, the transaction log is usually several times larger than the data file. Finding this issue is very significant to me in that it allows me to discuss service level agreements with the client. I get to ask questions such as: are nightly full backups sufficient, or do they need point-in-time recovery? This conversation gets the customer thinking about their disaster recovery and high availability solutions. This issue is also very prominent on SQL Server forums and usually has a title like "Help, my transaction log has filled up my disk" or "Help, my transaction log is many times the size of my database". In cases where the client only needs the previous night's full backup, I am able to change the recovery model to SIMPLE and shrink the transaction log using DBCC SHRINKFILE (2,1) or by specifying the transaction log file name using DBCC SHRINKFILE (file_name, target_size). When the client needs point-in-time recovery, in most cases I will still end up switching the client to the SIMPLE recovery model to truncate the transaction log, followed by a full backup. I will then schedule a SQL Agent job to take regular transaction log backups at an interval determined by the client to meet their service level agreements. It should also be noted that when I find an overgrown transaction log, the virtual log file count is typically also out of control. My cleanup will always take that into account as well; that is a subject for a future blog post. If your SQL Server is facing any issue, we can Fix Your SQL Server. Additional reading: Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace); SQL SERVER – How to Stop Growing Log File Too Big; Shrinking Truncate Log File – Log Full. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
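
    A hedged sketch of the sequence Tim describes; the database name, log file logical name, and backup paths below are placeholders, not from the post:

      USE [master];
      ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;

      USE [MyDb];
      DBCC SHRINKFILE (MyDb_log, 1024);   -- target size in MB

      -- If point-in-time recovery is required, switch back and restart the log chain
      USE [master];
      ALTER DATABASE [MyDb] SET RECOVERY FULL;
      BACKUP DATABASE [MyDb] TO DISK = N'B:\Backups\MyDb_full.bak';

      -- ...then schedule regular log backups (e.g. via a SQL Agent job)
      BACKUP LOG [MyDb] TO DISK = N'B:\Backups\MyDb_log.trn';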

    Read the article

  • How to create managed properties at site collection level in SharePoint 2013

    - by ybbest
    In SharePoint 2013, you can create managed properties at the site collection level. Today, I'd like to show you how to do so through PowerShell.

    1. Define your managed properties, crawled properties, and managed property types in an external CSV file. The PowerShell script will read this file and create the managed properties and the mappings (a sample layout for this file is shown at the end of this post).

    2. As you can see, I also defined the variant type, because you need the variant type to create the crawled property. In order to have the crawled properties, you need to do a full crawl and also make sure you have data populated for your custom column. However, if you do not want to do a full crawl to create those crawled properties, you can create them yourself using PowerShell; you just need to make sure the crawled properties you create have the same names they would have if created by a full crawl.

    Managed property types: Text = 1, Integer = 2, Decimal = 3, DateTime = 4, YesNo = 5, Binary = 6
    Variant types: Text = 31, Integer = 20, Decimal = 5, DateTime = 64, YesNo = 11

    3. You can use the following script to create your managed properties at the site collection level; the difference when creating a managed property at the site collection level is that you pass in the site collection id.

      param(
          [string] $siteUrl="http://SP2013/",
          [string] $searchAppName = "Search Service Application",
          $ManagedPropertiesList=(IMPORT-CSV ".\ManagedProperties.csv")
      )
      Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
      $searchapp = $null

      function AppendLog
      {
          param ([string] $msg, [string] $msgColor)
          $currentDateTime = Get-Date
          $msg = $msg + " --- " + $currentDateTime
          if (!($logOnly -eq $True))
          {
              # write to console
              Write-Host -f $msgColor $msg
          }
          # write to log file
          Add-Content $logFilePath $msg
      }

      $scriptPath = Split-Path $myInvocation.MyCommand.Path
      $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt"

      function CreateRefiner
      {
          param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType, [System.GUID] $siteID)
          $cat = Get-SPEnterpriseSearchMetadataCategory -Identity SharePoint -SearchApplication $searchapp
          $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID
          if($crawledproperty -eq $null)
          {
              Write-Host
              AppendLog "Creating Crawled Property for $managedPropertyName" Yellow
              $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false
          }
          $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue
          if($managedproperty -eq $null)
          {
              Write-Host
              AppendLog "Creating Managed Property for $managedPropertyName" Yellow
              $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true
          }
          $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name }
          if($mappedProperty -eq $null)
          {
              Write-Host
              AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow
              New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication $searchapp -SiteCollection $siteID
          }
          $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name }
          #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty
      }

      $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName
      $site = Get-SPSite $siteUrl
      $siteId = $site.id

      Write-Host "Start creating Managed properties"
      $i = 1
      FOREACH ($property in $ManagedPropertiesList)
      {
          $propertyName=$property.managedPropertyName
          $crawledName=$property.crawledName
          $managedPropertyType=$property.managedPropertyType
          $variantType=$property.variantType
          Write-Host $managedPropertyType
          Write-Host "Processing managed property $propertyName $($i)..."
          $i++
          CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId
          Write-Host "Managed property created " $propertyName
      }

    Key Concepts

    Crawled Properties: Crawled properties are discovered by the search index service component when crawling content.

    Managed Properties: Properties that are part of the Search user experience, which means they are available for search results, advanced search, and so on, are managed properties.

    Mapping Crawled Properties to Managed Properties: To make a crawled property available for the Search experience (available for Search queries and displayed in Advanced Search and search results), you must map it to a managed property.

    References

    Administer search in SharePoint 2013 Preview
    Managing Metadata
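
    As a supplement to step 1 above, here is a hypothetical ManagedProperties.csv laid out with the column names the script reads; the property and crawled property names are invented examples, not from the post:

      managedPropertyName,crawledName,managedPropertyType,variantType
      YBBESTDocumentTitle,ows_YBBESTDocumentTitle,1,31
      YBBESTPublishedDate,ows_YBBESTPublishedDate,4,64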

    Read the article

  • What are the search engine effects of registering the same domain on multiple top-level domains (i.e. .com, .ie, .nl, etc.)?

    - by user1020317
    I'm looking to register a few more domains for my company. I have my-company.com at the moment, but now require my-company.com.au and my-company.nl and some others. I'm running through my options and wondering which is best:

    - Duplicate all the content of the .com site and make a replica at the other domains.
    - Buy the other domains but have them do a 301 redirect back to the .com domain.
    - Create a full new website with different content for the new domains, thus having no text duplication.

    We currently sell all over the world, so we would like to raise our search rankings in various countries. Can this be done by buying the domain in each country, and if so, how will the above methods affect our search rankings? Any other suggestions are welcome!

    Read the article

  • How to retrieve img alt text with jQuery or JavaScript? [on hold]

    - by kate
    What is the code with which we can retrieve the alternative text of an image? It is a catalogue of clothes (dresses, shirts, skirts, etc.) on the front page of a site. The featured images of the categories can be changed manually by someone. I ran a check and it is asking me to provide alt text. I did it for some images with alt="", but I cannot do it for the catalogue. The code is below:

      {{ 'option_selection.js' | shopify_asset_url | script_tag }}
      {{ 'api.jquery.js' | shopify_asset_url | script_tag }}
      {% if template contains 'customers' %}
        {{ 'shopify_common.js' | shopify_asset_url | script_tag }}
        {{ 'customer_area.js' | shopify_asset_url | script_tag }}
      {% endif %}
      {% if settings.display_slideshow %}{{ 'jquery.slider.js' | asset_url | script_tag }}{% endif %}
      {% if settings.include_masonry %}{{ 'jquery.masonry.js' | asset_url | script_tag }}{% endif %}
      {% if settings.enable_product_image_zoom %}{{ 'jquery.zoom.js' | asset_url | script_tag }}{% endif %}
      {{ 'fancy.js' | asset_url | script_tag }}
      {{ 'shop.js' | asset_url | script_tag }}

      Shopify.money_format = '{{ shop.money_format }}';

      {% if template contains "product" %}
      jQuery(document).ready(function($){
        {% if product.variants.size > 1 or product.options.size > 1 %}
        new Shopify.OptionSelectors("product-select", {
          product: {{ product | json }},
          onVariantSelected: selectCallback
        });
        {% assign found_one_in_stock = false %}
        {% for variant in product.variants %}
          {% if variant.available and found_one_in_stock == false %}
            {% assign found_one_in_stock = true %}
            {% for option in product.options %}
              $('#product-select-option-' + {{ forloop.index0 }}).val({{ variant.options[forloop.index0] | json }}).trigger('change');
            {% endfor %}
          {% endif %}
        {% endfor %}
        {% endif %}
      });

      $(function() {
        $( "#tabs" ).tabs();
      });

    Read the article

  • True column-mode (block-selection and editing) text editor solution?

    - by tamale
    In Windows, I used to use a text editor called Crimson Editor, which featured the best column-mode editing support I have ever used. When enabled via a simple Alt-C shortcut, selections could be made with the mouse or cursor keys, and they would be visual blocks rather than wrapped lines. These selections could be deleted, moved, copied, and pasted, and all of the operations just made sense. You could also just start typing, and you'd get a column of the characters as you typed. There are multiple ways of getting parts of these features working separately, discussed on this forum thread, but no one has yet provided a solution that offers this all-encompassing and easy-to-use behavior. If someone could point me to a gedit plugin where this work is actively being pursued, perhaps I could help with the coding myself. If someone is aware of a text editor that already provides this full functionality, I'd appreciate the info. Running Crimson Editor through Wine and the close-but-not-quite multi-edit plugin for gedit are the temporary solutions I'm 'getting by with' for the time being.

    Read the article

  • Text mining on large database (data mining)

    - by yox
    Hello, I have a large database of resumes (CVs), and a table skills grouping all users' skills. Inside that table there's a field skill_text that describes the skill in full text. I'm looking for an algorithm/software/method to extract significant terms/phrases from that table in order to build a new table with standardized skills. Here are some example skills extracted from the DB:

    - Sectoral and competitive analysis
    - Business Development (incl. in international settings)
    - Specific structure and road design software - Microstation, Macao, AutoCAD (basic knowledge)
    - Creative work (Photoshop, In-Design, Illustrator)
    - checking and reporting back on campaign progress
    - organising and attending events and exhibitions
    - Development : Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX
    - Discipline: One to one marketing, E-marketing (SEO & SEA, display, emailing, affiliate program), Mix marketing, Viral Marketing, Social network marketing.

    The output should be something like:

    - Sectoral and competitive analysis
    - Business Development
    - Specific structure and road design software - Macao
    - AutoCAD
    - Photoshop
    - In-Design
    - Illustrator
    - organising events
    - Development
    - Aptana Studio
    - PHP
    - HTML
    - CSS
    - JavaScript
    - SQL
    - AJAX
    - Mix marketing
    - Viral Marketing
    - Social network marketing
    - emailing
    - SEO
    - One to one marketing

    As you can see, only skills remain, with no other surrounding text. I know this is possible using text mining techniques, but how do I do it? The database is really large; that's a good thing, because we can calculate term frequency and decide whether something is a real skill or just meaningless text. The big problem is: how do we determine that "blablabla" is a skill? Thanks

    Read the article

  • Lotus Notes rich text field to RTF File - VB

    - by user236105
    Here is my problem: I am doing a data migration from Lotus Notes to another type of software that does not support rich text fields. I am trying to write a VB 2005 program that will take any rich text fields that are found and place them into an RTF file, which will be uploaded as an attachment in the new software. I cannot get the program to carry the rich text formatting or objects over to the RTF file, only the plain text. I have tried everything under the sun using the COM library to get these objects out, to no avail. Any ideas or suggestions? Thank you in advance. Bryan

    Read the article

  • HTML5 text wrap

    - by Gwood
    I am trying to add text to an image using the HTML5 canvas. First the image is drawn, and then the text is drawn on top of the image. So far so good. But the problem I am facing is that if the text is too long, it gets cut off at the start and end by the canvas. I don't plan to resize the canvas, but I was wondering how to wrap the long text onto multiple lines so that all of it gets displayed. Can anyone point me in the right direction?

    Read the article

  • Unable to get simple Ruby on Rails search to work :/

    - by edu222
    I am new to RoR; any help would be greatly appreciated :) I have a basic scaffolded CRUD app to add customers, and I am trying to search by the first name or last name fields. The error that I am getting is:

      NoMethodError in Clientes#find
      You have a nil object when you didn't expect it!
      You might have expected an instance of Array.
      The error occurred while evaluating nil.each

      Extracted source (around line #9):
      6: <th>Apellido</th>
      7: </tr>
      8:
      9: <% for cliente in @clientes %>
      10: <tr>
      11: <td><%=h cliente.client_name %></td>
      12: <td><%=h cliente.client_lastname %></td>

      Application Trace:
      C:/Rails/clientes/app/views/clientes/find.html.erb:9:in `_run_erb_app47views47clientes47find46html46erb'

    My find action in controllers/clientes_controller.rb is:

      # Find
      def find
        @cliente = Cliente.find(:all, :conditions => ["client_name = ? OR client_lastname = ?", params[:search_string], params[:search_string]])
      end

    My views/layouts/clientes.html.erb form code fragment is:

      <span style="text-align: right">
        <% form_tag "/clientes/find" do %>
          <%= text_field_tag :search_string %>
          <%= submit_tag "Search" %>
        <% end %>
      </span>

    The search template I created in views/clientes/find.html.erb:

      <h1>Listing clientes for <%= params[:search_string] %></h1>
      <table>
        <tr>
          <th>Nombre</th>
          <th>Apellido</th>
        </tr>
        <% for cliente in @clientes %>
        <tr>
          <td><%=h cliente.client_name %></td>
          <td><%=h cliente.client_lastname %></td>
          <td><%= link_to 'Mostrar', cliente %></td>
          <td><%= link_to 'Editar', edit_cliente_path(cliente) %></td>
          <td><%= link_to 'Eliminar', cliente, :confirm => 'Estas seguro de que deseas eliminar a este cliente?', :method => :delete %></td>
        </tr>
        <% end %>
      </table>
      <%= link_to 'Atras', clientes_path %>

    Read the article

  • Getting started with character and text processing (encoding, regular expressions)

    - by TK
    I'd like to learn the foundations of encodings, characters, and text. Understanding these is important for dealing with large sets of text, whether they are log files or text sources for building collective-intelligence algorithms. My current knowledge is pretty basic: something like "As long as I use UTF-8, I'm okay." I'm not saying I need to learn about advanced topics right away, but I need to know:

    - Bit- and byte-level knowledge of encodings.
    - Characters and alphabets not used in English.
    - Multi-byte encodings. (I understand some Chinese and Japanese, and parsing them is important.)
    - Regular expressions.
    - Algorithms for text processing.
    - Parsing natural languages.

    I also need an understanding of mathematics and corpus linguistics. The current and future web (semantic, intelligent, real-time) needs large amounts of text to be processed, parsed, and analyzed. I'm looking for some resources (maybe books?) that get me started on some of the bullets. (I find many helpful discussions on regular expressions here on Stack Overflow, so you don't need to suggest resources on that topic.)

    Read the article

  • Extracting pure content / text from HTML Pages by excluding navigation and chrome content

    - by Ankur Gupta
    Hi, I am crawling news websites and want to extract the news title, news abstract (first paragraph), etc. I plugged into the WebKit parser code to easily navigate a webpage as a tree. To eliminate navigation and other non-news content, I take the text version of the article (minus the HTML tags; WebKit provides an API for this). Then I run a diff algorithm comparing the text of various articles from the same website, which results in similar text being eliminated. This gives me the content minus the common navigation content and so on. Despite the above approach, I am still getting quite a lot of junk in my final text, which results in an incorrect news abstract being extracted. The error rate is 5 in 10 articles, i.e. 50%. Can you suggest an alternative strategy for extracting pure content? Would learning natural language processing help in extracting the correct abstract from these articles? How would you approach the above problem? Are there any research papers on the same? Regards, Ankur Gupta

    Read the article

  • Sublime Text LaTeXTools console autohide

    - by DCh
    The build script in the LaTeXTools plugin for the Sublime Text editor pops up the console, where the result of the compilation is written. I would like the console to auto-hide once the compilation has finished and there are no errors (and to stay open otherwise). I knew how to achieve this with Sublime Text 2: I think I inserted two lines of sublime.active_window().run_command("show_panel", {"panel": "console", "toggle": True}) somewhere in the build script. How do I achieve this behavior with Sublime Text 3? And how do I (properly) achieve it with Sublime Text 2?

    Read the article

  • Best way to retrieve a certain field of all documents returned by a Lucene search

    - by Philipp
    Hi, I was wondering what the best way is to retrieve a certain field of all the documents returned by a Lucene Searcher. Background: each document has a date field (written on), and I would like to show a timeline of all found documents, so I need to extract the date (day) field of every document the search finds. I currently retrieve each document using Searcher.doc(int, FieldSelector), with the selector retrieving only that field. I have indexed 250k documents; the search itself takes no time and returns about 10k document ids. Retrieving those, however, takes 20+ seconds. What can I do to speed things up but still get all the values I need? Thanks in advance, Philipp

    Read the article

  • search results filter effects - flex

    - by Adam
    I've created a search with a couple of comboboxes that allow users to filter their search results. The results are currently displayed using a TileList and an itemRenderer, and now I'd like to add an animation effect when the user filters the results. I know that you can use itemsChangeEffect to create an animation effect when the user drags and moves result items. So I'd like to know if there's a way to create a similar effect triggered by the filtering on the comboboxes? Thanks.

    Read the article

  • Optimizing encrypted column search

    - by Sung Meister
    I have a table called tblClient with an encrypted column called SSN. Due to company policy, we encrypted SSN using a symmetric key (chosen over an asymmetric key for performance reasons) protected by a password. Here is a partial LIKE search on SSN:

      DECLARE @SSN varchar(11)
      SET @SSN = '111-22-%'

      OPEN SYMMETRIC KEY SSN_KEY DECRYPTION BY PASSWORD = 'secret'

      SELECT Client_ID
      FROM tblClient (nolock)
      WHERE CONVERT(nvarchar(11), DECRYPTBYKEY(SSN)) LIKE @SSN

      CLOSE SYMMETRIC KEY SSN_KEY

    Before encryption, searching through 150,000 records took less than 1 second, but with the decryption mixed in, the same search takes around 5 seconds. What strategy can I apply to try to optimize searching through the encrypted column?
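
    A hedged sketch of one common workaround, not from the original post: persist a deterministic hash of the searchable SSN prefix in an additional indexed column, filter on the hash, and decrypt only the surviving rows. The column and index names below are assumptions, and a plain hash of a low-entropy value like an SSN prefix should ideally be keyed or salted per policy.

      -- One-time setup (keep the hash column in sync in the insert/update path).
      ALTER TABLE tblClient ADD SSN_Prefix_Hash varbinary(32) NULL;

      OPEN SYMMETRIC KEY SSN_KEY DECRYPTION BY PASSWORD = 'secret';
      UPDATE tblClient
      SET SSN_Prefix_Hash = HASHBYTES('SHA2_256',    -- SHA2_256 needs SQL Server 2012+
              LEFT(CONVERT(nvarchar(11), DECRYPTBYKEY(SSN)), 7));
      CLOSE SYMMETRIC KEY SSN_KEY;

      CREATE INDEX IX_tblClient_SSN_Prefix_Hash ON tblClient (SSN_Prefix_Hash);

      -- Search: hash the prefix being looked for and seek on the index,
      -- so only candidate rows ever need decryption.
      DECLARE @prefix nvarchar(7) = N'111-22-';
      SELECT Client_ID
      FROM tblClient
      WHERE SSN_Prefix_Hash = HASHBYTES('SHA2_256', @prefix);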

    Read the article

  • Lucene search taking TOOO long.

    - by Josh Handel
    I'm using Lucene.net (2.9.2.2) on a (currently) 70 GB index. I can do a fairly complicated search and get all the document IDs back in 1-2 seconds, but actually loading up all the hits (about 700 thousand in my test queries) takes 5+ minutes. We aren't using Lucene for a UI; this is a datastore between processes where we have hundreds of millions of pre-cached data elements, and the part I am working on exports a few specific fields from each found document. (Ergo, pagination doesn't make sense, as this is an export between processes.) My question is: what is the best way to get all of the documents in a search result? Currently I am using a custom collector that does a get on the document (with a MapFieldSelector) as it is collecting. I've also tried iterating through the list after the collector has finished, but that was even worse. I'm open to ideas :-). Thanks in advance.

    Read the article

  • Eclipse Search Only Specific Folders

    - by Craig
    Hello, I already saw the answers to a question almost identical to this: http://stackoverflow.com/questions/443169/eclipse-exclude-folders-from-search. However, I am looking for a solution that would allow me to, say, look at 10 of my 200 folders, where those 10 change all the time. Is there a way I can search through just one folder and avoid any other folders that are not inside it? I don't want to create a different project, as I am using SVN, and I have had cases with mxml and other files where moving a file from one project to another caused problems for other developers.

    Read the article

  • Generic Text Only printer driver mangles control codes

    - by Terry
    If an escape character (or most other characters < 0x20) is sent to the Generic / Text Only printer, it gets printed as a period. Using the code in the WinDDK, is it possible to 'correct' this behaviour so that it passes these characters through unmodified? The general scenario is that some application ('user app') outputs a document to a Windows printer. My application requires this data in plain text form, so what I do is run a Generic / Text Only printer that talks to a virtual COM port. This generally works fine except where the 'user app' outputs binary data to the print queue without using the correct mechanism (which seems to work fine with some printer drivers, such as the Epson POS ones, but not the Generic / Text Only one). I've tried changing the print processor selection without success, and I've also tried looking at the GTT files to see if I could readily map in these characters as though they were printable, but the minidriver tool won't let me do that. Any suggestions?

    Read the article

  • How to make Excel strip ALL quotes from CSV text fields

    - by Klay
    When importing a CSV file into Excel, it only strips the double quotes from the FIRST field on the line, but leaves them on all other fields. How can I force Excel to strip the quotes from ALL strings? For instance, I have this CSV file:

      "text1", "text2", "numeric1", "numeric 2"
      "abc", "def", 123, 456
      "abc", "def", 123, 456
      "abc", "def", 123, 456
      "abc", "def", 123, 456

    I import it into Excel using Data > Import External Data > Import Data. I specify that the fields are delimited by commas and that the text delimiter is the double-quote character. Both the data preview and the actual Excel spreadsheet columns only strip the double quotes from the first text field; all other text fields still have quotes around them. What's really strange is that Access is able to import this data correctly (i.e. it strips the quotes from every text field). Note that this is NOT a matter of internal commas, quotes, or escape characters. This happens in Excel 2003 and Excel 2007.
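
    A likely explanation, offered as an assumption rather than something stated in the question: Excel's import only honors a quote as a text qualifier when it immediately follows the delimiter, so the space after each comma makes every quote except the first one on the line part of the field's value. A version of the file without spaces after the commas imports with all quotes stripped:

      "text1","text2","numeric1","numeric 2"
      "abc","def",123,456
      "abc","def",123,456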

    Read the article
