  • Legality of using/embedding MP3s?

    - by Pogopuschel
    I am working on a language-learning startup. I think that one of the best ways to study is through music. For that purpose I would like to include MP3s on the website, together with related study tools such as the matching lyrics. Because I want to avoid dead links, I would like to host the MP3s directly on my server and stream them to users. Doing this isn't exactly legal, since anyone could download the MP3s. But what if, before accessing a specific song, I displayed a message asking "Do you legally own this song in CD/MP3/... format?", and only if the user clicks "yes" are they allowed to continue and listen? Isn't this how, for example, YouTube gets around legal problems? Does anybody have insight on this? Thank you!

  • Web pages with mixed ownership photos

    - by dstonek
    I have a photo website. 15% of the photos belong to approved registered users, who have agreed to my terms about uploading their images to my web pages. I include a photographer credit in the bottom-right corner. As for identifying the site with Google, every page contains a Google+ button linking to MY Google+ page, and it also contains <link href="https://plus.google.com/nnnnnnnnnn/" rel="publisher" /> I need some advice on how to respect Google's rules for pages containing other photographers' images, so that the site is not penalized for content that could be treated as duplicated or interpreted as stolen. My concern is also whether adding G+ links (to MY photo page) and a Google publisher ID would harm my site's rank because of the pages containing third-party photos.

  • Does setting document.domain via script interfere with Google Analytics?

    - by Seth Petry-Johnson
    I have a site, www.example.com, that displays some secure content from forms.example.com in iframes. To enable cross-frame navigation, pages on both sites use JavaScript to set the document.domain to just "example.com". I am using Google Analytics on www.example.com, but the GA site is not showing any data. It indicates that the tracking code is found (the status icon is a green checkmark), but no data is reported. The GA profile lists the website as "www.example.com". Is this a supported scenario? Is my script interfering with the GA code in some way?
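
    For what it's worth, the usual cross-subdomain configuration (a sketch only, assuming the classic asynchronous ga.js tracker; UA-XXXXXX-1 is a placeholder ID) pairs the document.domain assignment with _setDomainName, so GA scopes its cookies to the shared parent domain and both hosts read the same visitor cookies:

        // On pages of both www.example.com and forms.example.com:
        document.domain = "example.com";

        // _gaq is provided by the standard ga.js bootstrap snippet.
        declare const _gaq: any[];
        _gaq.push(["_setAccount", "UA-XXXXXX-1"]);
        // Scope GA's cookies to the parent domain shared by both hosts,
        // so sessions aren't split or dropped across the iframe boundary:
        _gaq.push(["_setDomainName", "example.com"]);
        _gaq.push(["_trackPageview"]);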

  • Website directory structure regarding subdomains, www, and "global" content

    - by Pawnguy7
    I am trying to make a homemade HTTP server. It occurs to me, though, that I never fully understood what you might call "relativity" among web pages. I have learned that www. is a subdomain, and I understand its original purpose. It sounds like, in general, you would redirect it (is that a 301 or a 302?) to the non-subdomain version, as in redirecting www.example.com to example.com. I am not entirely sure how to make this work when retrieving files for an HTTP server, though. I would assume that example.com would be the root, and www manifests as a folder within it. I am unsure. There is also the question of multi-level subdomains, e.g. subdomain2.subdomain1.example.com. It seems to me they are structured "backwards", where you go from the root leftwards in the folder structure. In this situation, subdomain2 is a directory within subdomain1, which is a directory in the root. Finally, it occurs to me I might want a sort of global location. For example, maybe all subdomains still use one image as a logo. It makes more sense to me that there is one image, rather than each subdomain having a copy. In the same way, albeit more doubtfully, you might have global CSS (though that is a bit contrary to the idea of a subdomain in the first place), or a JavaScript file that is commonly used (more efficient than each having its own copy, and better for organization purposes). Finally, maybe you have a global 404 page. I think this might be the case where you have user-created subdomains (say bloggername.example.com), where example.com still serves a default 404 when either a subdomain does not exist or a page does not exist under a valid blogger. I am confused about what the directory structure for this should be. To summarize: should there be global files that live outside any subdomain's directory (and how); how should www. be handled (or a non-www or other subdomain); and what is the pattern for root/subdomain, as well as for subdomains within subdomains (order-wise)? Sorry this is multiple questions, but I feel that at the root they are all related to the directory structure.
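
    For illustration, here is a minimal sketch of how a homemade server might resolve such a layout, assuming one conventional scheme (all paths and names hypothetical): each subdomain maps to a nested folder under the root, with labels reversed so subdomain2.subdomain1.example.com serves from root/subdomain1/subdomain2/; www is treated as an alias for the root site (the conventional redirect for it is a 301, since the move is permanent); and shared assets such as a global 404 page live in a folder that no subdomain maps to:

        import * as http from "http";
        import * as path from "path";
        import * as fs from "fs";

        const DOCROOT = "/var/www/example.com"; // hypothetical layout
        const BASE = "example.com";

        http.createServer((req, res) => {
          const host = (req.headers.host ?? BASE).split(":")[0];
          // DNS labels read right to left, so "sub2.sub1.example.com"
          // becomes the on-disk path DOCROOT/sub1/sub2.
          let labels: string[] = [];
          if (host !== BASE && host.endsWith("." + BASE)) {
            labels = host.slice(0, -(BASE.length + 1)).split(".").reverse();
          }
          if (labels[0] === "www") labels.shift(); // www aliases the root site
          // NOTE: a real server must also sanitize req.url against "../" traversal.
          const urlPath = !req.url || req.url === "/" ? "/index.html" : req.url;
          const file = path.join(DOCROOT, ...labels, "." + urlPath);
          fs.readFile(file, (err, data) => {
            if (err) {
              // Shared ("global") content lives outside every subdomain's
              // folder, e.g. one 404 page or logo in DOCROOT/global/.
              fs.readFile(path.join(DOCROOT, "global", "404.html"), (e2, d2) => {
                res.writeHead(404, { "Content-Type": "text/html" });
                res.end(e2 ? "Not Found" : d2);
              });
              return;
            }
            res.writeHead(200);
            res.end(data);
          });
        }).listen(8080);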

  • IP address and SEO

    - by Joel
    Hello, I currently host 5 websites on a dedicated server I own. I have several questions: Does it matter if I host all my sites on 100.100.100.100 (the server's IP, for example) or if I split them across 100.100.100.100, 100.100.100.101 ... 100.100.100.104 (that is, each site on its own IP)? Does it matter if I use a separate class C for each website? Do search engines really care whether your site has its own class C? Do search engines penalize a website if it moves to a new IP? Thanks, Joel

  • Should I use subdomains or subfolders for my user groups?

    - by bilygates
    Hello, I run a photography website where each user has their own subdomain (i.e. user.site.com). I'm thinking of adding user groups, but I'm unable to decide whether I should also give each group a separate subdomain or simply a subfolder: Subfolders (www.site.com/groups/my-group) Pros: Easier to maintain from a technical p.o.v. Cons: Harder to memorize. The URLs can get really long (www.site.com/groups/my-group/albums/my-album/) Subdomains (my-group.site.com) Pros: Easier to memorize. Shorter URLs. One might have the impression that such a URL is somewhat more "independent" from the main site. Cons: Group and user names belong to the same namespace, so we need to check for collisions when creating a new user/group (see the sketch below). One cannot determine the content of a page by only reading the URL: Is x.site.com a user page or a group page? What's your opinion on the matter? I should note that DeviantArt.com uses the 2nd option (that's where I got the idea). Thank you in advance!
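
    On the collision point: the check itself is cheap if user and group names are treated as one namespace at creation time. A minimal sketch (the data and reserved names are hypothetical):

        // Users and groups share one subdomain namespace, so a new name
        // must be free in both sets, plus any reserved system names.
        const users = new Set<string>(["alice", "bob"]);    // hypothetical
        const groups = new Set<string>(["landscapes"]);     // hypothetical
        const reserved = new Set<string>(["www", "mail", "api", "groups"]);

        function isSubdomainAvailable(name: string): boolean {
          const n = name.toLowerCase();
          return /^[a-z0-9-]{1,63}$/.test(n) &&   // must be a valid DNS label
            !users.has(n) && !groups.has(n) && !reserved.has(n);
        }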

  • What's the simplest way to create a page with dynamic elements?

    - by ElendilTheTall
    I'm developing a site, part of which lists training courses with dates and prices. Every year the dates and prices change, which would mean loads of manual code editing to update the pages. What I'd like to do is have a database containing the relevant information, which the course pages then reference, so we can just update the database rather than the HTML. My experience lies in static design - so, what is a simple way to achieve this?
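
    A minimal sketch of that approach, assuming Node.js with SQLite and an invented courses(title, start_date, price) table (any server-side language and database would do the same job):

        import Database from "better-sqlite3"; // one possible driver choice
        import * as http from "http";

        const db = new Database("courses.db");
        // Assumed schema: courses(title TEXT, start_date TEXT, price REAL)
        interface Course { title: string; start_date: string; price: number }

        http.createServer((_req, res) => {
          // Render the rows into HTML on every request; updating the
          // database now updates the page with no manual HTML editing.
          const rows = db.prepare(
            "SELECT title, start_date, price FROM courses ORDER BY start_date"
          ).all() as Course[];
          const items = rows.map(c =>
            `<li>${c.title}: ${c.start_date}, ${c.price.toFixed(2)}</li>`
          ).join("\n");
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end(`<ul>\n${items}\n</ul>`);
        }).listen(8080);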

  • Image slider not working when website is hosted on remote server [on hold]

    - by Tushar Khatiwada
    I'm having an odd problem. I made an HTML website, and it contains a Nivo Slider on the index page. The site works perfectly when viewed locally. I uploaded the site to a remote server, but the slider is not displayed, and the photo gallery is not working as expected (the photos open in popups on the local PC). The url of the site is: http://d138444.u24.elitehostingwizard.com/ The screenshot from the local pc: http://postimg.org/image/lxiqzx7br/ Thanks

  • .htaccess ReWrite wildcard folder paths from host

    - by JHuangweb
    My desired result is to rewrite a file path down to the root (/) from any number (N) of nested folders. For example: www.host.com/a/b/c/e/f/g/images/1.jpg, where a through g are not always present. Result: www.host.com/images/1.jpg This is what I have so far: www.host.com/a/images -- www.host.com/images Using: RewriteRule ^a\/images/$ images/$1 [L] What I need is a wildcard in front of /images/, like this: RewriteRule ^*/images/$ images/$1 [L] How can I do this correctly in .htaccess?

  • How to specify importance of html elements?

    - by Julien Fouilhé
    Is it possible to specify which elements of a page are important or, more specifically, which elements are not important? I'm using the new HTML5 elements (nav, header, footer, section, article, aside...), but the Google description of my website's pages sometimes shows the text of my login form (which sits in the header of every page) instead of the actual content. Is there a solution to this problem? Thank you.

  • Is there a way to learn why Google penalized a site?

    - by pawelbrodzinski
    Is there any way to learn for sure why Google penalized a specific site? I'm thinking of the situation where the webmaster/site administrator is aware of Google's rules and is sure they aren't breaking any, but the site is penalized nevertheless. The only information you get from Google is that they processed your reconsideration request, but they say neither what the result is nor, if they keep the site penalized, what the reason for the penalty is. You can try to get information on the Google webmasters forum or here, but most of the time these are only speculations. Assuming the site administrator has tried to find out what's wrong but failed, is there a source which can tell what the problem is?

  • Finding what is causing my site to issue 301 redirects

    - by php-b-grader
    I have a URL which is 301 redirecting, but I cannot find where or how it is happening, and I would like some checks to perform if possible. I've checked .htaccess - it's not there. I've checked cPanel in the redirects section - it's not there. In WordPress, I have the Redirection plugin active, and it's not there either. Is there anywhere else that could be issuing redirects? I'm at a loss to find out where and how the page is redirecting!
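
    One further check that can help: request the page from outside any browser, which rules out locally cached redirects, and inspect the status line and headers the server actually sends. A small sketch (Node.js; the URL is a placeholder):

        import * as http from "http";

        // Fetch without following redirects and print what the server sends.
        http.get("http://www.example.com/the-redirecting-page/", res => {
          console.log("Status:", res.statusCode);          // e.g. 301
          console.log("Location:", res.headers.location);  // redirect target
          // Recent WordPress versions name the redirect source here:
          console.log("X-Redirect-By:", res.headers["x-redirect-by"]);
          res.resume();
        }).on("error", console.error);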

  • What are good options for hosting video that give you privacy and control (not youtube or vimeo)?

    - by Rezen
    I have used http://www.longtailvideo.com/bits-on-the-run, http://www.influxis.com/, and Wistia for video hosting. Wistia didn't allow the finer control that we wanted to have. Influxis doesn't have the features that Bits on the Run has, but platform usage for BOTR gets expensive. I was thinking of moving the videos to Amazon's CDN. What are your thoughts and experiences with video hosting, and are there any recommendations? Videos will be privately streamed to hundreds of doctors' offices.

  • Best way to redirect in IIS

    - by stephmoreland
    We have a website that has two URLs (one for the US side and another for the Canadian side, which is then broken into Canadian English and Canadian French). For the purposes of my question, I will write them as: www.us_url.com (US) www.canada_url.ca/ca_en/ (Canadian English) www.canada_url.ca/ca_fr/ (Canadian French) To make sure people are on the correct site, what do I do if they go to the US URL with Canadian English content (e.g. www.us_url.com/ca_en/canada.asp)? I want to make sure the URL is the Canadian one (e.g. www.canada_url.ca/ca_en/canada.asp) so it shows up properly in Google Analytics. We're using IIS 7 and classic ASP.

  • Alternative to TOP in SQL Server and Oracle.

    SELECT TOP 5 * FROM EMP ORDER BY SALARY; The query above works in SQL Server and returns the five employees with the lowest salaries. The problem with this query is that it doesn't work in Oracle. In Oracle you might be tempted to write: SELECT * FROM EMP WHERE ROWNUM <= 5 ORDER BY SALARY However, that query is subtly wrong: Oracle assigns ROWNUM before ORDER BY is applied, so it grabs an arbitrary five rows and then sorts just those five. The ordering has to happen in a subquery first: SELECT * FROM (SELECT * FROM EMP ORDER BY SALARY) WHERE ROWNUM <= 5 If you are looking for a single query which runs in both Oracle and SQL Server, use the one below: SELECT * FROM (SELECT ROW_NUMBER() OVER (ORDER BY SALARY) AS rn, EMP.* FROM EMP) s1 WHERE s1.rn <= 5;

  • Which is the appropriate content-type meta tag value ?

    - by Argoron
    I have a question about the content-type meta tag. When starting to build my site (HTML+PHP+JS), I copied a lot of the meta tags over from elsewhere, and I have, amongst others, the following: <meta http-equiv="content-type" content="application/xhtml+xml; charset=iso-8859-1" /> Now, I've seen that tag being used a lot with the value "text/html". I've been searching the web but could not find a comprehensive explanation of the difference between the two. "text/html" intuitively sounds more straightforward to me. Should I change my tag to that, or is "application/xhtml+xml" an equivalent solution? Alternatively, can anyone point me to a resource where the different values for this tag are listed and explained clearly? Thanks in advance

  • How to make Facebook like button narrower than 225px?

    - by tog22
    I'm generating a like button with Facebook's 'standard' layout for my site via https://developers.facebook.com/docs/reference/plugins/like/ . I've set its width to 200 pixels, but notice that setting it to lower than 225 pixels has no effect, and the documentation on that page indeed specifies 225px as the minimum width for the standard layout. Unfortunately I need to make it 200 pixels wide to fit my site's design. Is there any way to force it into this width? (The site's at http://gwwc2.centreforeffectivealtruism.org/ if you want to have a play with Firebug, though the like button gets generated by javascript so you'd probably have to duplicate that page and edit its source.)

  • registration form with payment system ( paypal ) [closed]

    - by Alecs
    I'm using an ajax registration form plugin for my website, and I'm thinking of also implementing PayPal. Here is how I want it to work: I have 3 fields (Name, Phone, Email) and a "Buy" button. After the user types their name, phone and email, they click "Buy" and are redirected to the PayPal payment page (or, if possible, stay on the same page). What I probably need to know is how to enable the "Buy" button only after the fields (name, phone, email) have been validated, as in the sketch below. Is there a plugin or an existing snippet of code for this, so as not to start building something which already exists?
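
    A sketch of the gating logic in plain script (the field IDs, validation rules, and PayPal hand-off URL are all assumptions; a real integration needs your own PayPal button parameters):

        const fields = ["name", "phone", "email"].map(
          id => document.getElementById(id) as HTMLInputElement);
        const buy = document.getElementById("buy") as HTMLButtonElement;

        function allValid(): boolean {
          const [name, phone, email] = fields.map(f => f.value.trim());
          return name.length > 0 &&
            /^[0-9 +()-]{6,}$/.test(phone) &&
            /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);
        }

        // Keep "Buy" disabled until every field validates.
        buy.disabled = true;
        fields.forEach(f => f.addEventListener("input", () => {
          buy.disabled = !allValid();
        }));
        buy.addEventListener("click", () => {
          // Hand off to PayPal only after validation has passed; the URL
          // here is a placeholder for your real PayPal button/cart link.
          window.location.href = "https://www.paypal.com/cgi-bin/webscr?cmd=_xclick";
        });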

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this: User-Agent: * Disallow: /email Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it? Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL, or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages. Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that and how to make sure that none of the disallowed pages will show up in their search results. Ps. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them also.
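
    For reference, the mechanism at work here is that robots.txt blocks crawling, not indexing: Google can still index a disallowed URL it discovers through external links, listing just the URL and anchor text. A page only stays out of the index if Google is allowed to crawl it and finds a noindex signal there. A sketch of serving that signal as a response header (Node.js; the paths are placeholders, and the /email prefix would have to be removed from robots.txt for the header to be seen at all):

        import * as http from "http";

        http.createServer((req, res) => {
          if (req.url?.startsWith("/email")) {
            // Crawlable but unindexable: the opposite trade-off to Disallow.
            res.setHeader("X-Robots-Tag", "noindex, nofollow");
          }
          // ... the normal redirector / trap handling would go here ...
          res.statusCode = 404; // placeholder response for this sketch
          res.end();
        }).listen(8080);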

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc...) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is: HTTP/1.1 301 Moved Permanently Date: Fri, 08 Nov 2013 17:26:24 GMT Server: Apache/2.2.16 (Debian) Location: https://mysite.com/ Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 242 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>301 Moved Permanently</title> </head><body> <h1>Moved Permanently</h1> <p>The document has moved <a href="https://mysite.com/">here</a>.</p> <hr> <address>Apache/2.2.16 (Debian) Server at mysite.com</address> </body></html> I am worried by the fact that Google fetch is not getting the correct Title tags and Meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and being able to flow from the HTTPS to the HTTP without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?

  • Suddenly my server reject all Post Requests

    - by Sharen Eayrs
    Just go to meet-romance.com/test.htm. The script there is simple - a form with a button: <form action="test.htm" method="post"> <input name="Button1" type="submit" value="button" /> </form> It doesn't work. Press the button in Firefox and I get a "connection reset" error. I wonder why. It has been happening since yesterday, so I have migrated all domains that require POST requests somewhere else. I suppose a reset of the server would fix it, only for it to happen again some other time. So I wonder if anyone has a clue as to why.

  • In Google Analytics, how can I determine the value of a page if no goals or revenue have been determined?

    - by Brandon Durham
    I have 4 years of data in Analytics with over 20 million pageviews for the entire site. No goals have ever been set up, and while the site is an ecommerce site, no ecommerce features in Google Analytics have ever been taken advantage of. So I have no way to determine what the actual value of a page is. I've been tasked with determining if a particular page on the site is worth keeping around. How might I use all standard data (pageviews, bounce rate, time on page, time on site, etc.) to help determine the value of this page? I really appreciate any help I can get!

  • How can I redirect all files in a directory that doesn't conform to a certain filename structure?

    - by user18842
    I have a website where a previous developer updated several webpages. The issue is that the developer gave each new webpage a new filename and deleted the old files. I've worked with .htaccess redirects for a few months now and have some understanding of their usage; however, I am stumped by this task. The old pages were named like so: www.domain.tld/subdir/file.html The new pages are named: www.domain.tld/subdir/file-new-name.html The first word of each new filename is the exact name of the old file, and all new filenames share the same last 2 words: www.domain.tld/subdir/file1-new-name.html www.domain.tld/subdir/file2-new-name.html www.domain.tld/subdir/file3-new-name.html etc. We also need to be able to access the url: www.domain.tld/subdir/ The new files have been indexed by Google (the old urls cause 404s and need to be redirected to the new ones so that Google stays friendly), and the client wants to keep the new filenames as they are more descriptive. I've attempted the redirect in many different ways without success, but I'll show the one that stumps me the most: RewriteBase / RewriteCond %{THE_REQUEST} !^subdir/.*\-new\-name\.html RewriteCond %{THE_REQUEST} !^subdir/$ RewriteRule ^subdir/(.*)\.html$ http://www.domain.tld/subdir/$1\-new\-name\.html [R=301,NC] When visiting www.domain.tld/subdir/file1.html in the browser, this causes a 403 Forbidden error with a url like so: www.domain.tld/subdir/file1-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name.html I'm certain it's probably something simple that I'm overlooking; can someone please help me get a proper redirect? Thanks so much in advance! EDIT I've also got all the old filenames saved in a separate document in case I need them, set up like the following example: (file(1|2|3|4|5)|page(1|2|3|4|5)|a(l(l|lowed|ter)|ccept)

  • Is SEO affected negatively by having densely encoded identifiers of content in URLs?

    - by casperOne
    This isn't about where to put the id of a piece of unique content in URLs, but more about densely packing the URL (or, does it just not matter). Take for example, a hypothetical post in a blog: http://tempuri.org/123456789/seo-friendly-title The ID that uniquely identifies this is 123456789. This corresponds to a look-up and is the direct key in the underlying data store. However, I could encode that in say, hexadecimal, like so: http://tempuri.org/75bcd15/seo-friendly-title And that would be shorter. One could take it even further and have more compact encodings; since URLs are case sensitive, one could imagine an encoding that uses numbers, lowercase and uppercase letters, for a base of 62 (26 upper case + 26 lower case + 10 digits): 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz For a resulting URL of: http://tempuri.org/8M0kX/seo-friendly-title The question is, does densely packing the ID of the content (the requirement is that an ID is mandatory for look-ups) have a negative impact on SEO (and dare I ask, might it have any positive impact), or is it just not worth the time? Note that this is not for a URL shortening service, so saving space in the URL for browser limitation purposes is not an issue.
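
    Whichever radix is chosen, the encoding itself is trivial and fully reversible; a sketch of base-62 encode/decode using the exact alphabet above (digits, then uppercase, then lowercase):

        const ALPHABET =
          "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

        function encode62(n: number): string {
          if (n === 0) return "0";
          let out = "";
          while (n > 0) {
            out = ALPHABET[n % 62] + out;   // least-significant digit first
            n = Math.floor(n / 62);
          }
          return out;
        }

        function decode62(s: string): number {
          let n = 0;
          for (const ch of s) n = n * 62 + ALPHABET.indexOf(ch);
          return n;
        }

        console.log(encode62(123456789)); // "8M0kX", as in the example above
        console.log(decode62("8M0kX"));   // 123456789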

  • If I am in the USA, should I not have a hosting provider in another country? [on hold]

    - by johnny
    I saw the various questions on SO about this, but I'm not sure they answered my case. I'm in the USA. Someone asked me about hosting with a company in South Africa. They are not a big company; this person simply liked the company for whatever reason. I only saw one horror story, and not many reviews really. It is a small outfit from what I can tell. But the fact of it being in South Africa - does that matter? Do people ever pursue legal action against hosting companies anyway? The users will all be in the States. edit: I'm not sure why this is unclear.
