Search Results

Search found 8461 results on 339 pages for 'disavow links'.

  • Can I use accepts_nested_attributes_for with checkboxes in a _form to select potential 'links' from a list

    - by Ryan
    In Rails 3, I have the following models:

        class System
          has_many :input_modes   # the table holding the join
          has_many :imodes, :through => :input_modes, :source => :mode, :class_name => 'Mode'
          has_many :output_modes
          has_many :omodes, :through => :output_modes, :source => :mode, :class_name => 'Mode'
        end

        class InputMode   # OutputMode is identical
          belongs_to :mode
          belongs_to :system
        end

        class Mode
          # ... fields, e.g. name ...
        end

    That works nicely and I can assign lists of Modes to imodes and omodes as intended. What I'd like to do is use accepts_nested_attributes_for, or some other such magic, in the System model and build a view with a set of checkboxes. The set of valid Modes for a given System is defined elsewhere. I'm using checkboxes in the _form view to select which of the valid Modes are actually set in imodes and omodes. I don't want to create new Modes from this view, just select from a list of pre-defined Modes. Below is what I'm currently using in my _form view. It generates one checkbox per allowed Mode for the System being edited; if a checkbox is ticked, that Mode is to be included in the imodes list:

        <% @allowed_modes.each do |mode| %>
          <li>
            <%= check_box_tag :imode_ids, mode.id, @system.imodes.include?(mode), :name => 'imode_ids[]' %>
            <%= mode.name %>
          </li>
        <% end %>

    This passes the ticked ids to the controller in params:

        { ..., "imode_ids" => ["2", "14"], ... }

    In controller#create I extract the Modes whose checkboxes were ticked and assign them to imodes:

        @system = System.new(params[:system])
        # The empty list makes sure we clear imodes
        # if none of the checkboxes are ticked
        if params.has_key?(:imode_ids)
          imodes = Mode.find(params[:imode_ids])
        else
          imodes = []
        end
        @system.imodes = imodes

    Once again that all works nicely, but I'd have to copy that kludgey code into the other controller methods, and I'd much prefer something more magical if possible. I feel like I've stepped off the path of nice, clean Rails code and into the forest of "hacking around" Rails; it works, but I don't like it. What should I have done?
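
    A sketch of one more magical route, under the models above (the hidden field and the attr_accessible line are assumptions about the rest of the app): has_many :imodes already generates an imode_ids= writer that does this find-and-assign for you, so if the checkboxes post under system[imode_ids][], plain mass assignment covers create and update alike. The Rails 3 ids writer drops blank entries, which is why the hidden field safely clears the list when nothing is ticked.

        <%# _form: post the ids as system[imode_ids][] so System.new / update_attributes pick them up %>
        <%= hidden_field_tag 'system[imode_ids][]', '' %> <%# posts an empty list when nothing is ticked %>
        <% @allowed_modes.each do |mode| %>
          <li>
            <%= check_box_tag 'system[imode_ids][]', mode.id,
                              @system.imode_ids.include?(mode.id),
                              :id => "system_imode_ids_#{mode.id}" %>
            <%= mode.name %>
          </li>
        <% end %>

        # controller#create then collapses to the usual one-liner:
        @system = System.new(params[:system])   # imode_ids= comes free with has_many :imodes

        # Rails 3 guards mass assignment, so System must whitelist the ids:
        class System < ActiveRecord::Base
          attr_accessible :imode_ids   # plus whatever else the form posts
        end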

  • some links for graphics in C

    - by lego69
    Hello everyone. I'm looking for tutorials that teach graphics programming in C. I tried to find some, but all I can find are discussions of specialized topics. I'm a beginner. Thanks in advance.

  • Move links div alongside h1, above tagline p

    - by noquierouser
    I have a page that uses CSS media queries, and I was asked to lay it out differently for mobile and desktop (the post showed a screenshot of each layout). Now, the HTML code is placed like this:

        <div id="content">
          <h1>the title</h1>
          <p>this is the tagline of the site</p>
          <ul>
            <li>link 1</li>
            <li>link 2</li>
            <li>link 3</li>
          </ul>
        </div>

    I'm having quite a problem achieving the desktop layout. I tried wrapping the <h1> and <p> in a <div> and styling it with float: left, but it didn't look as requested (the tagline is wider). I also tried position: absolute for the <ul>, but that didn't look as requested either (making #content wider is not an option). Do you have any suggestions for achieving this without using JavaScript?

    Update: I've uploaded the code to my Koding so you can see what I'm actually doing, and this is the CSS; I'm also using normalize.css. The problem I'm having now is what the different browsers show (screenshots of each browser's rendering followed). I think the problem might be in how the browsers calculate the tagline's width, but as you can check with the code, if you make the tagline's text shorter it looks more like Opera's rendering. Have I stumbled on a bug, or am I making a mistake in my CSS?

  • spaces or %20 in links turn into + signs when the page is sent as an email

    - by Obay
    I am creating a web app that accepts news items (title, article, URL). It has a page, news.php, which builds a summary of all news items entered for the specified dates, like so:

        News 4/25/2010
        Title 1 [URL 1]
        Article 1
        Title 2 [URL 2]
        Article 2
        and so on...

    I have two other pages, preview.php and send.php, both of which pull in news.php through a file_get_contents() call. Everything works fine except when a URL contains spaces. During preview, the URLs open correctly (Firefox shows the spaces as spaces, Chrome as %20). However, when the send step delivers the page as an email, the URLs don't open, because the spaces have been converted into + signs. For example:

        1. Preview in Firefox: http://www.example.com/this is the link.html
        2. Preview in Chrome:  http://www.example.com/this%20is%20the%20link.html
        3. Viewed as email in both browsers: http://www.example.com/this+is+the+link.html

    Only #3 fails (the link doesn't open). Why are the spaces in the URLs correct (spaces or %20) when previewed, but incorrect (+) in the received emails, when the very same news.php generates both pages? Any help appreciated :)
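
    The symptom points at the two different space encodings: '+' is the application/x-www-form-urlencoded (form/query-string) convention, which PHP's urlencode() produces, while '%20' is the RFC 3986 convention produced by rawurlencode(). Browsers only treat '+' as a space in the query string, never in the path, so a '+'-encoded path breaks; something in the mail pipeline is evidently form-encoding the URLs. A sketch of the distinction, in Ruby for illustration rather than the asker's PHP (the string is a stand-in for the links):

        require 'cgi'
        require 'erb'

        path = 'this is the link.html'

        CGI.escape(path)            # => "this+is+the+link.html"        (form encoding, like PHP urlencode)
        ERB::Util.url_encode(path)  # => "this%20is%20the%20link.html"  (RFC 3986, like PHP rawurlencode)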

  • Javascript widgets: do links count as SEO backlinks? [closed]

    - by j0nes
    Possible Duplicate: How good is it for SEO if you have a widget that lives on other sites?

    On my website I offer an option that lets users embed information from my site with a kind of "homepage widget". If a user wants to embed it in his website, he basically has to add one line of JavaScript to his HTML files, like this:

        <script src="http://mysite.com/myscript.php?some_options_here"></script>

    Inside the widget I export some content from my website and, of course, create a link back to my website. This is done in JavaScript with document.write:

        document.writeln("My great exported content");
        document.writeln('<a href="http://mysite.com?ref=widget">Check mysite.com</a>');

    I have Google Analytics set up to track whether the links in there get clicked, and they do. Now I am asking myself whether Google recognizes these links as valid backlinks from the embedding domain. I know that Googlebot can parse and execute JavaScript, but I have not found any references on whether these links also count as "normal" backlinks.

  • How can I configure Firefox to open links in the same window, but requests from external applications in a new one?

    - by Mnementh
    I hate it when sites decide for me which links should open in a new window and which in the same one; the back button doesn't work. The good thing is that Firefox has the option browser.link.open_newwindow. If I set it to 1, all links with target="_blank" open in the same window, as they should. But now clicks on links in external programs (like the email client or newsreader) also open in the same window, destroying the already-open website. How can I configure Firefox to open links on a website always in the same window, but URLs opened from external programs always in a new one?

  • How to create a good sitemap for a dynamic website

    - by Saif Bechan
    I have a website with dynamic content and different kinds of pages. Some pages rarely change, and others, like blogs, change often. The blog pages also have links for sorting, for example sorting by date, ascending or descending. Some pages additionally have links to tabbed content, and links that are just anchors. Now, when I use an XML sitemap generator, all of these links are thrown into the sitemap, and I don't think they are all really relevant. The blog posts written up to now are also taken into the sitemap; is that really necessary? I think the links to the blog posts can be indexed just fine anyway. Is the best way to make a sitemap to assign just the main menu links to it manually, or is indexing everything really recommended?
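
    For the hand-picked route, a small generator can be pointed at just the canonical pages, leaving out sort variants, tab links, and anchors. A minimal sketch in Ruby with the Nokogiri XML builder (example.com and the page list are placeholders):

        require 'nokogiri'

        # canonical pages only: no ?sort=, tab, or #anchor variants
        pages = ['/', '/about', '/blog']

        sitemap = Nokogiri::XML::Builder.new(:encoding => 'UTF-8') do |xml|
          xml.urlset(:xmlns => 'http://www.sitemaps.org/schemas/sitemap/0.9') do
            pages.each do |path|
              xml.url { xml.loc "http://example.com#{path}" }
            end
          end
        end

        puts sitemap.to_xml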

  • Algorithm for tracking the progress of a controller method running in the background

    - by SilentAssassin
    I am using the CodeIgniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), does some processing, and then stores the results back in the database. That complete sequence can be considered a single task. A new task can be assigned while another task is running; the newly assigned task is added to a queue. So if I can track the progress of the controller, I can show a status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for tasks that are running, and "Done" for tasks that are completed.

    Main issue: the first thing I need is an algorithm to track how much of the controller method's execution has completed. For instance, this PHP script tracks the progress of an array being counted; there, the current state and the state after total execution are both known, so it is possible to track progress. But I am not able to devise anything analogous in my case. Maybe what I am trying to achieve is not possible programmatically; if so, suggest a workaround or a completely new approach. If some details are missing, you can ask for them. Sorry for my ignorance, this is my first post here; I welcome you to point out my mistakes.

    EDIT: Database outline: the URL(s) and keyword(s) are first entered by the user and stored in tables called link_master and keyword_master respectively. Keywords are then extracted from all the links in link_master, compared with the keywords entered by the user, and their frequency is calculated, which is the final result; the results are stored in another table called link_result. Next, sub-links are extracted from the domain links and stored in a table called sub_link_master; keywords are extracted from these sub-links in the same way, and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can differ; only the cardinality of link_result can be known, which equals the number of keyword(s) multiplied by the number of URL(s). I insert multiple records at a time using this resource.

    Controller outline: the controller extracts keywords from a web page and also from all the links present on that page, in a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function, which I used for extracting keywords, generating results, and collecting valid sub-links. An insertResult method stores the results for links and sub-links in the respective tables. Yes, the processing depends on the number of records; the more records, the more time it takes to execute. Consider this scenario:

        Number of domain links              = 1
        Number of keywords                  = 3
        Domain link results generated       = 3    (3 x 1, as described above)
        Number of sub-links generated       = 41
        Sub-link results generated          = 117  (41 x 3 = 123, but some links are not valid or searchable)
        Approximate time for the whole run  = 55 seconds

    The above is for a single link. I want to track the progress of these results being stored in the database: when all results are stored, the task is complete; while results are being stored, the task is in progress. I am not clear on how I can track this progress.
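
    One workable algorithm, given that the expected row count is known once the sub-links are collected: the "state after total execution" is (domain links + sub-links) x keywords, and the current state is a counter bumped after every insert; pairs that turn out invalid shrink the total instead. A self-contained sketch in Ruby rather than the asker's PHP (Task, the lists, and the loop are hypothetical stand-ins for a task row and the crawl):

        # Derive status from two persisted numbers: expected rows and rows written.
        Task = Struct.new(:total, :done) do
          def status
            return 'Pending' if done.zero?
            done >= total ? 'Done' : 'In Progress'
          end

          def percent
            total.zero? ? 100 : (100.0 * done / total).round
          end
        end

        links    = ['http://example.com']                            # domain links entered by the user
        keywords = %w[foo bar baz]                                   # keywords entered by the user
        sublinks = Array.new(41) { |i| "http://example.com/p#{i}" }  # found while crawling

        # Expected cardinality of link_result + sub_link_result:
        task = Task.new((links.size + sublinks.size) * keywords.size, 0)

        (links + sublinks).product(keywords).each do |url, keyword|
          if url.end_with?('/p40')       # stand-in for "link not valid or searchable"
            task.total -= 1              # shrink the goal instead of counting it done
          else
            # ... compute keyword frequency for url and INSERT the result row here ...
            task.done += 1               # persist alongside the task so the UI can poll it
          end
        end

        puts "#{task.status} (#{task.percent}%)"   # => "Done (100%)"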

  • How do I delete hardlinks, symbolic links, junction points, etc., please?

    - by jonny
    I could be wrong, but I have yet to hear a valid argument that the functionality these things deliver (marginally handy at best, and very dubious / debatable to my mind) outweighs the exploitability they bring. I don't think I have any need for them; I do have a need for security, however. How can I permanently remove their entire functionality from my hard drive, please? Microsoft only has pages on how to create them, which seems almost peculiar to the point of being dubious (at least to me). And just a dumb command-line question: am I correct in assuming fsutil hardlink list c: will enumerate every single hardlink on that drive?

        C:\Windows\system32>fsutil hardlink list c:
        \Windows\System32

    Also, how do I delete symbolic links, please? ;) Though I'd rather have all symbolic-linking and recursion-creating machinery removed outright, if that's possible:

        C:\Windows\system32>fsutil behavior query symlinkevaluation
        Local to local symbolic links are enabled.
        Local to remote symbolic links are enabled.
        Remote to local symbolic links are disabled.
        Remote to remote symbolic links are disabled.

  • Is it bad for SEO to have internal redirected links? [closed]

    - by Jonas Lindqvist
    I have a large number of pages with similar but not identical content, for example site.com/dream_dictionary_flying and site.com/dream_interpretation_flying. The problem is that although they are not identical, they are sometimes on the edge of being duplicate content. The fix via a 301 redirect in .htaccess is simple and can be done in a minute, BUT changing all the existing links across the whole site from "/something" to "/something_else" would take ages: thousands of manual changes, hundreds of hours. My question is this: is it bad for SEO to have internal links that are redirected, or rather, HOW bad is it? For the human user it would not matter at all, but from what I have experienced, search engines don't like it. Is there any rule of thumb here? Please come back with your thoughts and experience on this. Thanks!

  • Will many links to the same page without nofollow penalize the host site in the search engine rankings?

    - by Evgeny
    Maybe a silly question, but I'll give it a shot :). On my forum app I would like to let users with sufficiently high reputation display links to their home pages under every post without the nofollow attribute (lower-reputation users' links will keep nofollow). I am happy to help the site's contributors improve their own rankings, but I'm not sure whether this can actually hurt the rank of the host (the site carrying those links), as the same link to a user's home page may end up peppered across the host's pages. What do you think? Thanks.

  • Should mobile webpages have hreflang links to non-mobile pages?

    - by Noam
    My site has multilingual links, which are specified like this on non-mobile pages:

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="jp" href="http://ja.mydomain.com/page" />
        <link rel="alternate" hreflang="ko" href="http://ko.mydomain.com/page" />

    In addition, these non-mobile pages link to a mobile version:

        <link rel="alternate" media="only screen and (max-width: 640px)" href="/mobile/page" />

    Now the question is which links should appear on the mobile page, which isn't translated into other languages yet. Is this enough:

        <link rel="canonical" href="/page"/>

    Or should it also carry the same group of hreflang links pointing to the non-mobile pages?

  • Do navigation menu links negatively impact SEO for pages' content?

    - by Rodolfo
    I've always had my doubts about navigation menus' effect on SEO. You know, the menus at the top that appear on every page of the site, linking to the main sections and subsections. My concern is that unless it's done dynamically (i.e., injected after the page loads or something), from a search engine's point of view it probably looks like a big batch of links at the beginning of the page, links that probably have nothing to do with the page being analyzed; so it's probably not only confusing the engine but also passing link 'juice' to the wrong pages or diluting its value. When I've asked SEO people about this, I usually get a "Google is smart, they'll recognize it as a menu and ignore it" response, but I'm not convinced (the 'Google is smart' argument sounds almost like a religious discussion to me). So does it negatively affect SEO or not? Are there any official posts on this topic?

  • Multi-language switch links: translated or in the current language?

    - by FFish
    Should I do A, translate the language links into the current language (if I am on the English version):

        <a href="en/">English</a> | <a href="it/">Italian</a> | <a href="fr/">French</a>

    or B, write the links in their native languages:

        <a href="en/">English</a> | <a href="it/">Italiano</a> | <a href="fr/">Français</a>

    From a user perspective option B is the obvious choice, but what about SEO?

  • How to find out the top links on a website?

    - by Anil
    I want to know which links on a site are the best, whether by PageRank or by popularity. For example, take http://www.pragprog.com: I want to find out the most relevant links that site has. The links should not be external, outbound links; they should belong to the same site. Can Google or any similar service provide such information?

  • Need to parse an HTML document for links: use a library like html5lib or something else?

    - by Luinithil
    I'm very much a newbie webpage builder, currently working on a website that needs to change link colours according to the destination page. The links will be sorted into different classes (e.g. good, bad, neutral) by certain user-input criteria: e.g. links to content the user would find interesting are coloured blue, stuff that the user (presumably) doesn't want to see is coloured as normal text, and so on. I reckon I need a way to parse the webpage for links to the content (stored in a MySQL database) and change the colours of all the links on the page (so I need to be able to change the link classes in the HTML as well) before outputting the adapted page to the user. I've read that regex is not a good way to find those links, so should I use a library, and if so, is html5lib good for what I'm doing?
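
    A parsing library is the right call. The shape of the job, sketched here in Ruby with Nokogiri rather than Python's html5lib (the classification rule is a hypothetical stand-in for the MySQL lookup): parse the HTML, walk the anchor elements, set a class on each, and serialize the page back out.

        require 'nokogiri'

        html = '<p><a href="/articles/7">nice</a> <a href="/ads/3">nasty</a></p>'
        doc  = Nokogiri::HTML(html)

        doc.css('a').each do |a|
          # stand-in for looking the href up in the database and classifying it
          a['class'] = a['href'].start_with?('/articles') ? 'good' : 'neutral'
        end

        puts doc.to_html   # colours then come from CSS rules for a.good, a.neutral, ...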

  • How to save some values from an array in a controller in Rails?

    - by Alfred Nerstu
    I've got a links array that I'm saving to a database. The problem is that the records aren't saved in the order of the array, i.e. links[1] saved before links[2] and so on... This is an example from the view file:

        <p>
          <label for="links_9_label">Label</label>
          <input id="links_9_name" name="links[9][name]" size="30" type="text" />
          <input id="links_9_url" name="links[9][url]" size="30" type="text" />
        </p>

    And this is my controller:

        def create
          @links = params[:links].values.collect { |link| @user.links.new(link) }

          respond_to do |format|
            if @links.all?(&:valid?)
              @links.each(&:save!)
              flash[:notice] = 'Links were successfully created.'
              format.html { redirect_to(links_url) }
            else
              format.html { render :action => "new" }
            end
          end
        end

    Thanks in advance! Alfred
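
    A minimal fix, assuming params[:links] arrives as a hash keyed by the form indices ({"9" => {...}, "10" => {...}, ...}): the keys are strings coming from the form, so the ordering of Hash#values can't be relied on; sort the pairs numerically before building the records.

        # controller: build Links in the numeric order of the form indices
        @links = params[:links]
                   .sort_by { |index, _attrs| index.to_i }    # "2" sorts before "10"
                   .collect { |_index, attrs| @user.links.new(attrs) }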

  • Can canonical links be used to make 'duplicate' pages unique?

    - by merk
    We have a website that allows users to list items for sale. Think eBay, except we don't actually handle the sale; we just list the item and provide a way to contact the seller. Anyhow, in several cases sellers have multiple units of an item for sale. We don't have a quantity field, so they upload each unit as a separate listing (and adding a quantity field is not an option). So we have a lot of pages with basically the exact same information, where only the item # differs. The SEO guy we've started using says we should put a canonical link on each page and have it point to itself; so, for example, www.mysite.com/something/ would have a canonical link with href="www.mysite.com/something/". This doesn't really seem kosher to me; I thought canonical links were supposed to point to other pages. The SEO guy claims doing it this way will tell Google that all these pages are indeed unique, even if they do have basically the same content. That seems a little off to me, since what's to stop a spammer from putting up a million pages and doing the same? Can anyone tell me whether the SEO guy's suggestion is valid? If it's not, do I need to detect duplicated items automatically, pick one duplicate to serve as the original, and generate canonical links pointing to it? Thanks in advance for any help.

  • Does a large number of internal broken links affect SEO?

    - by TheBigK
    We have a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs that linked to non-existent locations on our website. For example:

        Correct URL:     domain.com/correct-URL/
        Disqus created:  domain.com/correct-URL/344322/  (throws 404)
                         domain.com/correct-URL/433466/  (throws 404)

    So essentially, Google found a LARGE number of broken links pointing to unknown locations on our own domain. As the count of those 404 errors rose, our site suffered a massive drop in traffic, and the crawl rate dropped to 10% of what it was before. I wish to know: can a large number of internal broken links (we have over 99k of them) cause rankings to drop? I've fixed the issue in one go by creating a 301 redirect from each bad URL to the correct one and removing Disqus. Google, however, only drops the error count by ~1000 daily as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up? Should I set a custom crawl rate of 'Fast' in GWT to make Google crawl our website faster? I'd appreciate your input and shared experience.
