Search Results

Search found 1622 results on 65 pages for 'aman deep gautam'.

Page 42/65 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • How to reduce this if-else ladder in C#

    - by Rohit
    This is the if-else ladder I created to focus the first visible control on my form. According to the requirement, any control can be hidden on the form, so I had to find the first visible control and focus it. if (ddlTranscriptionMethod.Visible) { ddlTranscriptionMethod.Focus(); } else if (ddlSpeechRecognition.Visible) { ddlSpeechRecognition.Focus(); } else if (!SliderControl1.SliderDisable) { SliderControl1.Focus(); } else if (ddlESignature.Visible) { ddlESignature.Focus(); } else { if (tblDistributionMethods.Visible) { if (chkViaFax.Visible) { chkViaFax.Focus(); } else if (chkViaInterface.Visible) { chkViaInterface.Focus(); } else if (chkViaPrint.Visible) { chkViaPrint.Focus(); } else { chkViaSelfService.Focus(); } } } Is there any other way of doing this? I thought using LINQ would hurt performance, since I would have to traverse the whole page's control collection. I am deep in a page that uses master pages. Please suggest.
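
    One alternative, sketched below on the assumption that these are ordinary WebForms controls in the page's code-behind (the SliderControl1 check is kept as written, since it exposes SliderDisable rather than Visible): list the candidates in priority order and focus the first one whose availability predicate passes.

    // Sketch only: focus the first available control in priority order.
    // Control names come from the question; the predicates mirror the original
    // conditions, including the nested tblDistributionMethods checks.
    // Requires: using System; using System.Linq; using System.Web.UI;
    private void FocusFirstVisibleControl()
    {
        var candidates = new (Control Control, Func<bool> IsAvailable)[]
        {
            (ddlTranscriptionMethod, () => ddlTranscriptionMethod.Visible),
            (ddlSpeechRecognition,   () => ddlSpeechRecognition.Visible),
            (SliderControl1,         () => !SliderControl1.SliderDisable),
            (ddlESignature,          () => ddlESignature.Visible),
            (chkViaFax,              () => tblDistributionMethods.Visible && chkViaFax.Visible),
            (chkViaInterface,        () => tblDistributionMethods.Visible && chkViaInterface.Visible),
            (chkViaPrint,            () => tblDistributionMethods.Visible && chkViaPrint.Visible),
            (chkViaSelfService,      () => tblDistributionMethods.Visible),
        };

        var first = candidates.FirstOrDefault(c => c.IsAvailable());
        first.Control?.Focus();
    }

    Adding or reordering a control then becomes a one-line change to the list rather than another branch in the ladder.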

    Read the article

  • C#: Basic Reflection Class

    - by Mike
    I'm trying to find a basic reflection abstract class that will generate basic information about a class. I have a template of how I would like it to work: class ThreeList<string,Type,T> { string Name {get; set;} Type Type {get; set;} T Value {get; set;} } abstract class Reflect<T> { List<ThreeList<string, Type, T>> list; ReturnType MethodName() { foreach (System.Reflection.PropertyInfo prop in this.GetType().GetProperties()) { object value = prop.GetValue(this, new object[] { }); list.Add(prop.Name, prop.DeclaringType, value); } } } I'd like it to be infinitely deep, recursively calling Reflect. Something like this has to exist. I'm not really opposed to coding it myself; I just don't want to go through the hassle if it's already been done.
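
    A minimal sketch of the recursive part, offered as an assumption rather than an existing library: walk an object's public properties with reflection, collect name/type/value triples, and recurse into non-primitive values with a depth cap so cyclic object graphs don't loop forever.

    // Minimal sketch: recursively collect (name, type, value) for public properties.
    using System;
    using System.Collections.Generic;
    using System.Reflection;

    static class ReflectWalker
    {
        public static void Collect(object obj, string prefix, int depth,
                                   List<Tuple<string, Type, object>> results)
        {
            if (obj == null || depth > 10) return; // depth cap guards against cycles

            foreach (PropertyInfo prop in obj.GetType().GetProperties())
            {
                if (prop.GetIndexParameters().Length > 0) continue; // skip indexers

                object value = prop.GetValue(obj, null);
                results.Add(Tuple.Create(prefix + prop.Name, prop.PropertyType, value));

                // Recurse into reference/struct values, but stop at primitives and strings.
                if (value != null && !prop.PropertyType.IsPrimitive && prop.PropertyType != typeof(string))
                    Collect(value, prefix + prop.Name + ".", depth + 1, results);
            }
        }
    }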

    Read the article

  • How can I use jQuery for messing with a particular div, but not in the current document - in a varia

    - by bisaram
    How can I use jQuery for messing with a particular div, not in the current document but in a variable that contains HTML? The point is that I want to show a preview of a page (a piece of its content) in a modal window when the link to that page is clicked. On click I load the whole HTML into a variable via JSON, and then... how would I find the particular div I need in it? It's going to be almost impossible to parse it with PHP before converting it into JSON and handing it back to jQuery, because of the deep hierarchy. Basically, is it even possible to do something like $( 'div#some-id' ).blabla(); not for the current document, but for a document stored in a variable? Thanks everyone in advance.

    Read the article

  • Do people value the information or the aesthetic value of websites? [closed]

    - by fwfwfw
    I'm wondering why the web has to be so colorful - meaning, all the information is buried deep beneath layers of Flash, JavaScript, HTML, and images. Sure, good positioning of these media files creates aesthetic value, but how important is it to the user? Moreover, aren't people looking for information after all? Why can't the internet be a uniform-looking data warehouse? Now we've got to dig through all the aesthetic junk using shady web-scraping techniques, unless an RSS feed or API is provided. Why can't we settle for just a dull grey button and framesets for navigation? Why can't all sites have a navigation frame on the left and top? Why can't all sites always put their damn data in a normalized table tag?

    Read the article

  • Programming Exercises for Learning Purposes?

    - by cam
    Are there any programming exercises that apply to any language? Before I got my first job, I thought I knew C# pretty well; then I was thrown right into the deep end, and now I know I have a good command of the language. I would like to apply the same method to other languages, but unfortunately I'm sort of stuck with C# at work. Something similar to (but broader in scope than) Project Euler would be ideal. Project Euler helped me learn a ton of C++/F#, some math, algorithms, handling bignums, etc. I'm looking for something like that.

    Read the article

  • How to temporarily change all default user settings without destroying the original?

    - by mystify
    My app relies heavily on a lot of NSUserDefaults keys and values. I want to implement a temporary defaults profile which the user can activate to get a special task done easily. For this, some of the user defaults must be changed temporarily so the app adjusts its interface appropriately. I started by just manually changing those NSUserDefaults settings, but this also destroys the user's original settings. Is it possible to keep a backup of the user's NSUserDefaults settings and restore them after the user quits the temporary mode or the app? As I see it, NSUserDefaults is really just an NSMutableDictionary generated from a plist file, so would I just make a deep copy of that and later assign the copy back to NSUserDefaults somehow?

    Read the article

  • Building a financial app with Django

    - by mfalcon
    Hi guys, I'm building an app for a small business, so I have to work with currencies, decimal numbers, etc. My goal is to create something like pulseapp.com. I've searched for open-source projects to look at, and the only thing I found was django-cashflow, which uses python-money. I've read some of the code; the way it's coded seems a bit weird to me, and it's not fully complete. Is the app worth taking a deep look at? Does anyone know about another similar app? Is the task difficult, or could a beginner like me find a way to code it myself?

    Read the article

  • Template apps for iPhone

    - by rob
    Is there a good place to get starter apps for iPhone, where you choose from a large set of permutations? For instance, one with a nav bar, a flip screen, and a three-level-deep table view, with Core Data support, etc. I guess what I was hoping for is some kind of wizard where you can check a few boxes and have a working app as a starting point, but with more than just the three or four choices that come with Xcode. If not a wizard, then just a nice set of a couple dozen permutations. Also, are there any good sample apps out there that show the difference between identical apps, one built with Interface Builder and one without? Aside from being handy for myself, I'd think these would be great as a teaching tool. I've googled a bit and come up with nothing.

    Read the article

  • jQuery Toggle Help

    - by Cameron
    I have the following code: $(document).ready(function() { // Manage sidebar category display jQuery("#categories > ul > li.cat-item").each(function(){ var item; if ( jQuery(this).has("ul").length ) { item = jQuery("<span class='plus'>+</span>").click(function(e){ jQuery(this) .text( jQuery(this).text() === "+" ? "-" : "+" ) .parent().next().toggle(); return false; }); jQuery(this).find(".children").hide(); } else { item = jQuery("<span class='plus'>&nbsp;</span>"); } jQuery(this).children("a").prepend( item ); }); }); This creates a sort of toggle system for my categories, but it only works two levels deep; what I need is for it to work with unlimited levels.

    Read the article

  • Javascript object encapsulation that tracks changes

    - by Raynos
    Is it possible to create an object container where changes can be tracked? Said object is a complex nested object of data (compliant with JSON). The wrapper allows you to get the object and save changes without specifically stating what the changes are. Does there exist a design pattern for this kind of encapsulation? Deep cloning is not an option, since I'm trying to write a wrapper like this to avoid doing just that. Serialization should only be considered if there are no other solutions. An example of use would be: var foo = state.get(); // change state state.update(); // or state.save(); client.tell(state.recentChange()); A jsfiddle snippet might help: http://jsfiddle.net/Raynos/kzKEp/ It seems like implementing an internal hash to keep track of changes is the best option. [Edit] To clarify, this is actually done on node.js on the server, so the solution can be specific to the V8 implementation.

    Read the article

  • Count the number of emails each day in Outlook 2003?

    - by Mat Nadrofsky
    This is for a little pet project of mine. I want to write a program that does some email analytics and tells you the number of emails coming in and out each day, as well as your percentages. Really, all I need to do to kick this off is write a .NET app that can talk to Outlook and count the number of messages received and sent for given dates. Before I get too deep into this, I figured I'd poll the group and see if there is a particular approach I should follow when starting something like this. Any thoughts?
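
    One possible starting point, sketched as an assumption rather than the only approach: use the Outlook interop assembly to restrict the Inbox to a date range and count the matches. The date-string format expected by Restrict is locale-sensitive, so treat the filter below as illustrative.

    // Sketch: count Inbox messages received on a given day via Outlook interop.
    // Requires a reference to Microsoft.Office.Interop.Outlook.
    using System;
    using Outlook = Microsoft.Office.Interop.Outlook;

    static class MailCounter
    {
        public static int CountReceivedOn(DateTime day)
        {
            var app = new Outlook.Application();
            Outlook.MAPIFolder inbox = app.GetNamespace("MAPI")
                .GetDefaultFolder(Outlook.OlDefaultFolders.olFolderInbox);

            // Filter format is locale-dependent; adjust if Restrict complains.
            string filter = string.Format("[ReceivedTime] >= '{0:g}' AND [ReceivedTime] < '{1:g}'",
                                          day.Date, day.Date.AddDays(1));

            return inbox.Items.Restrict(filter).Count;
        }
    }

    Sent counts could presumably be handled the same way against olFolderSentMail, filtering on SentOn instead of ReceivedTime.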

    Read the article

  • How do I manipulate a tree of immutable objects?

    - by Frederik
    I'm building an entire application out of immutable objects so that multi-threading and undo become easier to implement. I'm using the Google Collections Library which provides immutable versions of Map, List, and Set. My application model looks like a tree: Scene is a top-level object that contains a reference to a root Node. Each Node can contain child Nodes and Ports. An object graph might look like this: Scene | +-- Node | +-- Node | +- Port +-- Node | +- Port +- Port If all of these objects are immutable, controlled by a top-level SceneController object: What is the best way to construct this hierarchy? How would I replace an object that is arbitrarily deep in the object tree? Is there a way to support back-links, e.g. a Node having a "parent" attribute?
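
    The usual answer here is path copying, sketched below in C# as an illustration of the idea rather than a Google Collections recipe: replacing a node deep in the tree rebuilds only the chain of ancestors back to the root and reuses every untouched subtree, and back-links are then derived by searching from the root (or carried in a separate cursor) rather than stored on the node itself.

    // Minimal persistent-tree sketch: nodes are immutable, and "modifying"
    // a child produces a new parent that shares all unchanged children.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    sealed class Node
    {
        public string Name { get; private set; }
        public IList<Node> Children { get; private set; }

        public Node(string name, IEnumerable<Node> children = null)
        {
            Name = name;
            Children = (children ?? Enumerable.Empty<Node>()).ToList().AsReadOnly();
        }

        // Returns a copy of this node with the child at `index` replaced.
        public Node WithChild(int index, Node newChild)
        {
            var copy = Children.ToList();
            copy[index] = newChild;
            return new Node(Name, copy);
        }
    }

    // Replacing a grandchild means rebuilding only root -> child -> grandchild:
    // var newRoot = root.WithChild(0, root.Children[0].WithChild(1, replacementNode));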

    Read the article

  • C#: How to unit test a method that relies on another method within the same class?

    - by michael paul
    I have a class similar to the following: public class MyProxy : ClientBase<IService>, IService { public MyProxy(String endpointConfiguration) : base(endpointConfiguration) { } public int DoSomething(int x) { int result = DoSomethingToX(x); //This passes unit testing int result2 = ((IService)this).DoWork(x); //do I have to extract this part into a separate method just to test it, even though it's only a couple of lines? //Do something on result2 int result3 = result2 ... return result3; } int IService.DoWork(int x) { return base.Channel.DoWork(x); } } The problem is that when testing I don't know how to mock result2 without extracting the part that computes result3 from result2 into a separate method. And because it is unit testing, I don't want to go so deep as to test what result2 comes back as... I'd rather mock the data somehow - be able to call the function and replace just that one call.
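
    One commonly suggested shape for this, sketched under the assumption that MyProxy itself can be changed: route the channel call through a protected virtual member so a test-only subclass can replace just that one call. The CallDoWork seam and TestableMyProxy class below are hypothetical additions for illustration, not part of the original code.

    // Hypothetical seam (not in the original class): DoSomething calls a protected
    // virtual method instead of casting to IService, so a test double can override
    // only that one call. IService and DoSomethingToX are assumed from the question.
    public class MyProxy : ClientBase<IService>, IService
    {
        public MyProxy(string endpointConfiguration) : base(endpointConfiguration) { }

        public int DoSomething(int x)
        {
            int result = DoSomethingToX(x);
            int result2 = CallDoWork(x);      // was: ((IService)this).DoWork(x)
            int result3 = result + result2;   // stand-in for the real "do something on result2"
            return result3;
        }

        protected virtual int CallDoWork(int x) { return ((IService)this).DoWork(x); }

        int IService.DoWork(int x) { return base.Channel.DoWork(x); }

        private int DoSomethingToX(int x) { return x; } // stub standing in for the question's method
    }

    // Test project: result2 is canned and no channel call is made. Note that constructing
    // ClientBase still needs a matching endpoint entry in the test configuration.
    class TestableMyProxy : MyProxy
    {
        public TestableMyProxy(string endpoint) : base(endpoint) { }
        protected override int CallDoWork(int x) { return 42; }
    }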

    Read the article

  • C/C++ function definitions without assembly

    - by Jack
    Hi, I always thought that functions like printf() are, at the last step, defined using inline assembly - that deep inside stdio.h some asm code is buried that actually tells the CPU what to do. Something like in DOS: first mov the beginning of the string to some memory location or register, then call some interrupt. But since the x64 version of Visual Studio doesn't support inline assembler at all, it made me think that there are really no assembler-defined functions in C/C++. So, please, how is printf(), for example, defined in C/C++ without using assembler code? What actually executes the right software interrupt? Thanks.

    Read the article

  • Looking for a jQuery plugin to serialize a form to an object

    - by John
    I'm looking for a jQuery function or plugin that serializes form inputs to an object using the naming convention for deep-serialization supported by param() in jQuery 1.4: <form> <input name="a[b]" value="1"/> <input name="a[c]" value="2"/> <input name="d[]" value="3"/> <input name="d[]" value="4"/> <input name="d[2][e]" value="5"/> </form> $('form').serializeObject(); // { a: { b:1,c:2 }, d: [3,4,{ e:5 }] } Prototype's Form.serialize method can do exactly this. What's the jQuery equivalent? I found this plugin but it doesn't follow this naming convention.

    Read the article

  • Getting 'choice' to work in Highline Ruby Gem without error and getting variable from it

    - by The Warm Jets
    I'm having a couple of problems using Highline in Ruby, trying to get the choice element, detailed here, to work. At the moment the following code produces the error "error: wrong number of arguments (0 for 1). Use --trace to view backtrace". How do I get the variable out of choice? At the moment I have the 'do' block set up, but I have no idea how to get the value the user has chosen out and into a variable for use elsewhere. Sorry if this is a bit of a beginner question; I'm brand new to Ruby and this is my first project, in at the deep end. Thanks in advance. if agree("Are these files going to be part of a set? ") set_title = ask("Title: ") set_desc = ask("Description:") set_genre = ask("Genre: ") set_label = ask("Record Label: ") set_date = ask_for_date("Release Date (yy-mm-dd): ") set_label = ask("EAN/UPC: ") set_buy = ask("Buy this set link: ") set_tags = ask_for_array("Tags (seperated by space): ") # Sort out license choose do |menu| menu.prompt = "Please choose the license for this set? " menu.choices(:all_rights_reserved, :cc_by) do # put the stuff in a variable end end end # End setup set

    Read the article

  • How long do people take to learn a new programming language?

    - by Cawas
    In general terms, this might be a good reference for everyone. Having an idea of how long people take on average to properly learn how to code can give a very good idea of how dense or long the path is. Someone who has never programmed might take weeks or months, maybe even years, while someone who is already experienced in the area and knows at least two different languages might take days, hours, or even minutes to start coding. But beyond being able to write code that runs, there are many ways to write the same program, and it's much harder to gain deep knowledge of that than simply to be able to program. And sometimes languages differ a lot from one another in that respect as well; for instance, we should never have to worry about code injection in JavaScript like we do in C. So, is there any place we can see some good numbers for how long it takes to learn a language, maybe divided by level of knowledge, language, paradigm, etc.?

    Read the article

  • Perform tasks with delay, without delaying web response (ASP.NET)

    - by Tomas Lycken
    I'm working on a feature that needs to send two text messages with a 30-second delay between them, and it is crucial that both text messages are sent. Currently, this feature is built with AJAX requests that are sent with a 30-second JavaScript delay, but since this requires the user to keep the browser open on the same page for at least 30 seconds, it is not a method I like. Instead, I have tried to solve this with threading. This is what I've done: Public Shared Sub Larma() Dim thread As New System.Threading.Thread(AddressOf Larma_Thread) thread.Start() End Sub Private Shared Sub Larma_Thread() StartaLarm() Thread.Sleep(1000 * 30) StoppaLarm() End Sub A web handler calls Larma(), and StartaLarm() and StoppaLarm() are the methods that send the first and second text messages respectively. However, only the first text message gets delivered - the second is never sent. Am I doing something wrong here? I have no deep understanding of how threading works in ASP.NET, so please let me know how to accomplish this.
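
    A C# sketch of one way to make the background step more observable (the question's code is VB.NET; this is an assumption-laden illustration, not a diagnosis): run the delayed send on the thread pool with the exception caught and logged, so that if the second send fails, or the host unloads the application before the delay elapses, something is recorded. The sendFirst, sendSecond, and log delegates are placeholders supplied by the caller.

    // Sketch only: queue the delayed second send on the thread pool and log failures.
    using System;
    using System.Threading;

    public static class DelayedSender
    {
        public static void SendBoth(Action sendFirst, Action sendSecond, Action<string> log)
        {
            sendFirst();

            ThreadPool.QueueUserWorkItem(_ =>
            {
                try
                {
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                    sendSecond();
                }
                catch (Exception ex)
                {
                    // If nothing is ever logged here, one possibility is that the app domain
                    // is being recycled before the delay elapses rather than the send failing.
                    log("Second message failed: " + ex);
                }
            });
        }
    }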

    Read the article

  • How to fill a structure when a pointer to it is passed as an argument to a function

    - by Ram
    I have a function: func (struct passwd* pw) { struct passwd* temp; struct passwd* save; temp = getpwnam("someuser"); /* since getpwnam returns a pointer to a static * data buffer, I am copying the returned struct * to a local struct. */ if(temp) { save = malloc(sizeof *save); if (save) { memcpy(save, temp, sizeof(struct passwd)); /* Here, I have to update passed pw* with this save struct. */ *pw = *save; /* (~ memcpy) */ } } } The function which calls func(pw) is able to get the updated information. But is it fine to use it as above? The statement *pw = *save is not a deep copy. I do not want to copy each and every member of the structure one by one, like pw->pw_shell = strdup(save->pw_shell) etc. Is there any better way to do it? Thanks.

    Read the article

  • How to create a NSPredicate to find entries with leading numerical value?

    - by Toastor
    Hello, I'm using NSPredicate to fetch entities based on a name attribute. Creating a predicate for names beginning with letters was easy (@"name BEGINSWITH %@", searchLetter); however, now I'd like to fetch all entities with a name that begins with a numerical value, or rather any non-alphabetical character. What would be the appropriate predicate expression here? Right now I don't want to get too deep into predicate programming, as this is all I need right now and time flies. So please don't point me to the Predicate Programming Guide; I just need that expression.. :) Thanks a lot, guys!

    Read the article

  • SQL SERVER – What the Business Says Is Not What the Business Wants

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday, hosted by Steve Jones. Steve raised a very interesting question; every DBA and Database Developer has faced this situation at some point. When I read the topic, I felt I could write several different examples here. Today, I will cover one scenario which seems quite amusing. Shrinking Database Earlier this year, I was working on a SQL Server performance tuning consultancy engagement and faced a very interesting situation. No matter how much I attempted to reduce the fragmentation, I always ended up with heavy fragmentation on the server. After careful research, I figured out that one of the jobs was continuously shrinking the database – which is a very bad practice. I have blogged about my experience over here: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server. I removed the incorrect shrinking process right away; once it was removed, everything continued working as it should. After a couple of days, I learned that one of their DBAs had put the same DBCC process back. I could not believe it! Now it was time for me to go deeper into the subject; moreover, it had become necessary to understand the need. After talking to the people concerned, I understood what they needed. Please read the exact business need in their own words. The Shrinking “Business Need” “We shrink the database because if we take a backup after shrinking the database, the size of the backup is smaller. Once we take the backup, we have to send it to our remote location site. Our business requirement is that we always need to make sure that the file is smallest when we transfer it to the remote server.” The backup is not affected in any way by whether you shrink the database or not; the size of the backup will be the same. After a couple of tests, they agreed with me. Shrinking, on the other hand, will create performance issues, as it introduces heavy fragmentation in the database. The Real Solution The real business need was that they needed the smallest possible backup file. We finally implemented a quick solution which they are still using to date: compressed backup. I wrote about this subject in detail a few years ago: SQL SERVER – 2008 – Introduction to New Feature of Backup Compression. Compressed backup not only creates a smaller backup file but also speeds up the backup operation. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SIM to OIM Migration: A How-to Guide to Avoid Costly Mistakes (SDG Corporation)

    - by Darin Pendergraft
    In the fall of 2012, Oracle launched a major upgrade to its IDM portfolio: the 11gR2 release. 11gR2 had four major focus areas: a more simplified and customizable user experience; support for cloud, mobile, and social applications; extreme scalability; and a clear upgrade path. For SUN migration customers, it is critical to develop and execute a clearly defined plan prior to beginning this process. The plan should include initiation and discovery, assessment and analysis, future-state architecture, review and collaboration, and gap analysis. To help you better understand your upgrade choices, SDG, an Oracle partner, has developed a series of three whitepapers focused on SUN Identity Manager (SIM) to Oracle Identity Manager (OIM) migration. In the second of this series, Santosh Kumar Singh from SDG discusses the proper steps that should be taken during the planning-to-post-implementation phases to ensure a smooth transition from SIM to OIM. Read the whitepaper for Part 2: Download Part 2 from SDGC.com. In the last of this series of white papers, Santosh will talk about Identity and Access Management best practices and how these need to be considered when going through an OIM migration. If you have not yet taken the opportunity, please read the first in this series, which discusses the migration approach, methodology, and tools to consider when planning a migration from SIM to OIM. Read the white paper for Part 1: Download Part 1 from SDGC.com. About the Author: Santosh Kumar Singh, Identity and Access Management (IAM) Practice Leader. Santosh, in his capacity as SDG Identity and Access Management (IAM) Practice Leader, has direct senior management responsibility for the firm's strategy, planning, competency building, and engagement delivery for this practice. He brings more than 12 years of extensive IT, business, and project management and delivery experience, primarily within enterprise directory, single sign-on (SSO) applications, federated identity services, provisioning solutions, role and password management, and security audit and enterprise blueprinting. Santosh possesses strong architecture and implementation expertise in all of these technologies and has repeatedly led teams in successfully deploying complex technical solutions. About SDG: SDG Corporation empowers forward-thinking companies to strategize their future, realize their vision, and minimize their IT risk. SDG distinguishes itself by offering flexible business models to fit its clients' needs; faster time-to-market with its pre-built solutions and frameworks; a broad-based foundation of domain experts; and deep program management expertise. (www.sdgc.com)

    Read the article

  • Tulsa Azure Boot Camp

    - by dmccollough
    Windows Azure Boot Camp presented by HyperVize & TulsaTech. When: Thursday, July 1st and Friday, July 2nd. Registration: Click here. Where: TulsaTech Riverside Campus, 801 East 91st Street, Tulsa, OK 74132-4008. Click here for a map. Summary: Tulsa Windows Azure Boot Camp is a comprehensive 2-day training program for members of the development community in Tulsa, Oklahoma. At the conclusion of this program, attendees should have a deep understanding of Azure, BPOS, and advanced development techniques for both platforms. Who should attend: Web Developers, Backend Developers, SQL DBAs, Consultants, & IT Leaders who are interested in using Azure for development, data storage, or processing. Attending both days is suggested, but if you can't attend both days, contact us for a special one-day pass. Schedule: Day one of the training sessions will be held on July 1st, 2010, between 9 AM and 4:30 PM. Topics covered on day 1: Azure Basics, Web Development, & Data Storage. Day two of the training sessions will be held on July 2nd, 2010, between 9 AM and 4:30 PM. Topics covered on day 2: Architecture, Business Value, SOA Development, SQL Azure, & Advanced Development. Prerequisites: If you want to stay up to speed on the Windows Azure labs, you will need to install the tools and updates listed on the Windows Azure Boot Camp website: http://windowsazurebootcamp.com/whattobring Boot Camp Agenda, Day 1 – July 1st, 2010: · 8:30 – 9:00 - Registration · 9:00 – 10:00 - Module 1: Intro to Azure & Cloud Computing · 10:00 – 11:00 - Module 2: Using Web Roles · 11:00 – Noon - Lab 1 & workstation configuration · Noon – 1:00 - Lunch · 1:00 – 2:00 - Module 3: Blobs · 2:00 – 3:00 - Module 4: Tables · 3:00 – 4:00 - Module 5: Queues · 4:00 – ? - Q&A / Open Discussion. Day 2 – July 2nd, 2010: · 9:00 – 10:00 - Module 6: Building a business with Azure · 10:00 – 11:00 - Module 7: Cloud Scenarios · 11:00 – Noon - Module 8: SQL Azure · Noon – 1:00 - Lunch · 1:00 – 2:00 - Module 9: Basic Worker Roles · 2:00 – 3:00 - Module 10: Advanced Worker Roles · 3:00 – 4:00 - Module 11: Azure Diagnostics · 4:00 – ??? - Module 12: App Fabric

    Read the article

  • OWB 11gR2 – Flexible and extensible

    - by David Allan
    The Oracle data integration extensibility capabilities are something I love; there is nothing more frustrating than a tool or platform that is very constraining. I think extensibility and flexibility are invaluable capabilities in the data integration arena. I liked Uli Bethke's posting on some extensibility capabilities with ODI (see Nesting ODI Substitution Method Calls here); he has some useful guidance on making customizations to existing KMs, and it is nice to learn by example. I thought I'd illustrate the same capabilities with ODI's partner OWB for the OWB community. There is a whole new world of potential. The LKM/IKM/CKM/JKMs are the primary templates that are supported (plus the Oracle Target code template), so there is a lot of potential for customizing and extending the product in this release. Enough waffle... Diving in at the deep end from Uli's post: in OWB 11gR2 the table operator has a number of additional properties that let you annotate the column usage with ODI-like properties such as the slowly changing usage, or for your own user-defined purpose as in Uli's post. Below you see that for the target table SALES_TARGET we can use the UD5 property; when the mapping is assigned the code template (knowledge module) modified with Uli's change, we can do custom things such as creating indices. The code template used by the mapping has the additional step, which is basically the code illustrated in Uli's posting used directly; the ODI 10g substitution references are also supported from within OWB's runtime. Now, to see whether this does what we expect before we execute it, we can check out the generated code, similar to how the traditional mapping generation and preview works; you do this by clicking on the 'Inspect Code' button on the execution unit's code template assignment. This then creates another tab with the prefix 'Code - <mapping name>' where the generated code is put; scrolling down, we find the last step with the indices being created. It looks good, so we are ready to deploy and execute. After executing the mapping we can then use the 'Audit Information' panel (select the mapping in the designer tree and click on View/Audit Information); this gives us a view of the execution where we can drill into the tasks that were executed and inspect the template, the generated code that was executed, and any potential errors. Reflecting back on earlier versions of OWB, these were the kinds of features that were always highly desirable: getting under the hood of the code generation and tweaking bits and pieces - fun and powerful stuff! We can step it up a bit here and explore some further ideas. The example below is a daisy-chained set of execution units where the intermediate table is a target of one unit and the source for another. We want that table to be a global temporary table, so we can tweak the templates. Back in the copy of SQL Control Append (used for demo purposes), we modify the create-target-table step to make the table a global temporary table, with the option ON COMMIT PRESERVE ROWS. You can get a feel for some of the customizations and changes possible, providing great flexibility and extensibility for the data integration tools.

    Read the article

  • The Sitemap Paradox

    - by Jeff Atwood
    We use a sitemap on Stack Overflow, but I have mixed feelings about it. Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site. Based on our two years' experience with sitemaps, there's something fundamentally paradoxical about the sitemap: Sitemaps are intended for sites that are hard to crawl properly. If Google can't successfully crawl your site to find a link, but is able to find it in the sitemap it gives the sitemap link no weight and will not index it! That's the sitemap paradox -- if your site isn't being properly crawled (for whatever reason), using a sitemap will not help you! Google goes out of their way to make no sitemap guarantees: "We cannot make any predictions or guarantees about when or if your URLs will be crawled or added to our index" citation "We don't guarantee that we'll crawl or index all of your URLs. For example, we won't crawl or index image URLs contained in your Sitemap." citation "submitting a Sitemap doesn't guarantee that all pages of your site will be crawled or included in our search results" citation Given that links found in sitemaps are merely recommendations, whereas links found on your own website proper are considered canonical ... it seems the only logical thing to do is avoid having a sitemap and make damn sure that Google and any other search engine can properly spider your site using the plain old standard web pages everyone else sees. By the time you have done that, and are getting spidered nice and thoroughly so Google can see that your own site links to these pages, and would be willing to crawl the links -- uh, why do we need a sitemap, again? The sitemap can be actively harmful, because it distracts you from ensuring that search engine spiders are able to successfully crawl your whole site. "Oh, it doesn't matter if the crawler can see it, we'll just slap those links in the sitemap!" Reality is quite the opposite in our experience. That seems more than a little ironic considering sitemaps were intended for sites that have a very deep collection of links or complex UI that may be hard to spider. In our experience, the sitemap does not help, because if Google can't find the link on your site proper, it won't index it from the sitemap anyway. We've seen this proven time and time again with Stack Overflow questions. Am I wrong? Do sitemaps make sense, and we're somehow just using them incorrectly?

    Read the article
