Search Results

Search found 12981 results on 520 pages for 'domain parking'.

Page 252/520 | < Previous Page | 248 249 250 251 252 253 254 255 256 257 258 259  | Next Page >

  • Will search engines reindex a page that has been set to redirect to a newer site page?

    - by Luke Duddridge
    We were asked by a client to change a website so that any pages/URLs we were hosting on an older site would now redirect to a newer site hosted somewhere else, on a different domain name to boot. We did this by changing each page in the IIS site management to redirect to a URL on their new domain instead of rendering a page locally. According to the redirect tool here: http://www.webconfs.com/redirect-check.php, what we have done is search-engine friendly. The problem now is that the client has been on a course learning all about meta tags, and so thinks they have a better understanding of the "matrix" (remember, there is no spoon). As Google still shows the older site in a search, this isn't helping matters. I have tried to explain that we have to wait for Google to reindex. I'm not blowing smoke, am I? I'm now starting to wonder: will the older site always appear in a search, even though the pages don't exist? Is there a better way I should be redirecting their site to ensure Google stops keeping an index of pages that no longer exist and instead replaces them with the content on the newer site? A suggestion on the site mentioned above is to use the code: Response.Status="301 Moved Permanently" and Response.AddHeader "Location","http://www.new-url.com/". Does using the redirect option in the IIS management tool not do the same?
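
    For clarity, a minimal classic ASP sketch of the snippet that checker suggests (the target URL is a placeholder):

        <%
        ' Send a permanent redirect instead of rendering this page
        Response.Status = "301 Moved Permanently"
        Response.AddHeader "Location", "http://www.new-url.com/"
        Response.End
        %>

    Worth noting: the IIS "redirection to a URL" option sends a temporary 302 unless "A permanent redirection for this resource" is ticked, which is likely the distinction that matters here.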

    Read the article

  • What is the aim of this email? Is this a ping/sping? [closed]

    - by mplungjan
    Hi, I received this spam in my catch-all. As webmaster of the domain it was sent to, I am really curious what the reason for this mail is. It was sent to a non-existent user "tania" on my domain (here I have used mydomain.zzz): what does the sender want to achieve? Since many mail servers have stopped backscattering, not getting a bounce would not mean anything, would it? And if this is off topic, where in the Stack Exchange network WOULD it be on topic?

    Delivered-To: [email protected]
    Received: (qmail 8015 invoked from network); 27 Jan 2011 02:32:47 -0000
    Received: from unknown (HELO p3pismtp01-021.prod.phx3.secureserver.net) ([10.6.12.26]) (envelope-sender <[email protected]>) by smtp35.prod.mesa1.secureserver.net (qmail-1.03) with SMTP for <[email protected]>; 27 Jan 2011 02:32:47 -0000
    X-IronPort-Anti-Spam-Result: At4FAAlnQE1GVjtCVGdsb2JhbACWXo4gCwEWCA0YJLwyhU8EhRc
    Received: from mx.dt3ls.com ([70.86.59.66]) by p3pismtp01-021.prod.phx3.secureserver.net with ESMTP; 26 Jan 2011 19:32:47 -0700
    Received: from 70.86.59.66 by mx.dt3ls.com (Merak 8.9.1) with ASMTP id JXF39710 for <[email protected]>; Wed, 26 Jan 2011 17:31:10 -0500
    Return-Path: [email protected]
    Status:
    Message-ID: <20110126173109.4d9d6c3f2b@1c3c>
    From: "Tech Support" <[email protected]>
    To: <[email protected]>
    Subject: Information, as instructed.
    Date: Wed, 26 Jan 2011 17:31:09 -0500
    X-Priority: 3
    X-Mailer: General-Mailer v.3
    MIME-Version: 1.0
    Content-Type: text/plain; charset="us-ascii"
    Content-Transfer-Encoding: 7bit

    Quote: I give it to you not that you may remember time, but that you might forget it now and then for a moment and not spend all your breath trying to conquer it. Because no battle is ever won he said. They are not even fought. The field reveals to a man his own folly and despair, and victory is an illusion of philosophers and fools. William Faulkner, The Sound and the Fury

    Read the article

  • What's the best practice for SOA exception handling?

    - by sun1991
    Here's an interesting debate going on between me and my colleague when it comes to handling SOA exceptions. On one side, I support what Juval Lowy said in Programming WCF Services, 3rd Edition: "As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service—should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be."

    On the other side, here's what my colleague suggests: "I believe it's simply incorrect, as it does not align with best practices in building a service-oriented architecture, and it ignores the general idea that there are problems that users are able to recover from, such as not keying a value correctly. If we considered only system exceptions, perhaps this idea holds, but system exceptions are only part of the exception domain. User-recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service-oriented architecture is to map user-recoverable situations to checked exceptions, then to marshal each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshal all runtime exceptions back to the client as a system exception, along with the stack trace, so that it is easy to troubleshoot the root cause."

    I'd like to know what you think about this. Thank you.
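
    For concreteness, a minimal C# sketch of the colleague's approach in WCF terms, using a typed fault for a user-recoverable error (the contract and member names are illustrative, not from either source):

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class ValidationFault
        {
            [DataMember] public string Field { get; set; }
            [DataMember] public string Message { get; set; }
        }

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            [FaultContract(typeof(ValidationFault))]
            void PlaceOrder(string product, int quantity);
        }

        // Service side: a recoverable validation failure becomes a typed fault:
        //   throw new FaultException<ValidationFault>(
        //       new ValidationFault { Field = "quantity", Message = "Must be positive" });
        //
        // Client side: catch only the fault the user can act on:
        //   try { proxy.PlaceOrder("widget", -1); }
        //   catch (FaultException<ValidationFault> f) { Display(f.Detail.Message); }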

    Read the article

  • postfix-dovecot email sending works with SquirrelMail but not with Thunderbird?

    - by Mark S.
    I have set up an intranet email system using Postfix, Dovecot and SquirrelMail, which is working fine: I can send and receive mail to all users on the system. I presume that the issue is in the Postfix configuration, because when I configure Thunderbird to send mail I get the following error:

    An error occurred while sending mail. The mail server responded: 4.1.8 <[email protected]>: Sender address rejected: Domain not found. Please check the message recipient [email protected] and try again.

    Here are the relevant syslog entries:

    NOQUEUE: reject: RCPT from host1.intranetdomain.com[192.168.11.1]: 450 4.1.8 <[email protected]>: Sender address rejected: Domain not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<[127.0.0.1]>

    I have configured MX records on the DNS server and they respond appropriately when I query them, so I do not think that is the issue. I think my issue is caused by the default configuration of:

    smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    smtpd_sender_restrictions = reject_unknown_sender_domain

    Since this is on an internal network and will not be exposed to the internet as a whole, which options can I remove safely?
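
    As a point of reference, a minimal sketch of what a LAN-only main.cf might relax these to. This is an assumption for an internal-only setup, not a verified fix, and it presumes the trusted LAN is covered by mynetworks:

        # /etc/postfix/main.cf -- internal-only mail server
        # Drop the unknown-domain checks that trip over intranet-only domains
        smtpd_sender_restrictions =
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination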

    Read the article

  • My site has crashed... anyone have some info?

    - by marwan
    Hi all, I booked a domain name for my website from a hosting provider. I gave the domain name, along with the FTP details, to a freelancer to develop the site in WordPress. The freelancer developed it, he got full payment, and the site was working fine. Since then I have not changed the admin login or the FTP details, which means that info is still known to the freelancer. A week ago, I found that some links on my site were not working. I sent him a mail about this, and he said that he would fix it if I gave him the FTP details, and I did so. Next I found that the entire site was gone. Then he sent me a mail, without my asking, saying that someone had got access to my server, removed all the files of my site and installed Drupal instead, and that he could rebuild the site in one day by charging the full fee of 250 USD again. Can anyone tell me what I can do in this situation to find out who did such an act? Could it be the host provider or the freelancer? And is there a possibility to get my site back onto the server? I will appreciate any info on this. Regards, Thanks

    Read the article

  • Configuring Samba to allow Use of CUPS printer

    - by Skizz
    Having trouble with Samba printing. I have a CUPS printer installed on an Ubuntu 11.04 server and that works great. When I try to configure Samba to allow an XP machine to use the printer, it fails when printing. I can install the printer drivers for XP from the server and the printer appears in the XP printer control panel. When I try to print a test page from the XP machine I get this error in the system event log:

    Jun 27 20:33:29 FatController smbd[3571]: [2012/06/27 20:33:29, 0] rpc_server/srv_netlog_nt.c:603(_netr_ServerAuthenticate3)
    Jun 27 20:33:29 FatController smbd[3571]: _netr_ServerAuthenticate3: netlogon_creds_server_check failed. Rejecting auth request from client JAMES machine account JAMES$

    Here's my smb.conf file:

    [global]
       server string = %h (Server)
       workgroup = SODOR
       encrypt passwords = true
       security = user
       os level = 255
       preferred master = yes
       domain master = yes
       local master = yes
       logon path = \\%L\profile\%U
       logon drive = S:
       logon home = \\%L\home\%U
       domain logons = yes
       map to guest = Never
       guest ok = no
       dns proxy = no
       time server = yes
       logon script = logon.bat
       load printers = yes
       printing = cups
       printcap name = cups
       nt acl support = no
       interfaces = eth1 lo
       bind interfaces only = yes
       smb ports = 445

    [netlogon]
       comment = Net Log On
       path = /home/samba/netlogon
       guest ok = no
       read only = yes
       browseable = no

    [profile]
       comment = User Profiles
       path = /home/samba/profiles
       read only = no
       create mask = 0600
       directory mask = 0700
       browseable = no
       store dos attributes = yes

    [printers]
       comment = All Printers
       path = /var/spool/samba
       browseable = yes
       guest ok = no
       printable = yes

    [print$]
       comment = Printer Drivers
       path = /var/lib/samba/printers
       browseable = yes
       guest ok = no
       read only = yes
       write list = root, skizz

    Anyone know what the problem is and how to fix it? In addition to the above, I also get this error:

    Jun 27 21:56:35 FatController smbd[3571]: [2012/06/27 21:56:35, 0] printing/print_cups.c:1027(cups_job_submit)
    Jun 27 21:56:35 FatController smbd[3571]: Unable to print file to `Edward' - client-error-not-authorized

    which I think is more relevant.
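
    That final client-error-not-authorized line comes from CUPS rather than Samba, which suggests the job reaches the CUPS scheduler and is then refused. One speculative avenue (an assumption, not a confirmed fix) is the printer's operation policy in /etc/cups/cupsd.conf, which can require authentication for Print-Job; a sketch of a relaxed limit for LAN jobs might look like:

        # /etc/cups/cupsd.conf (fragment) -- hypothetical relaxation for LAN printing
        <Policy default>
          <Limit Print-Job Send-Document>
            Order deny,allow
            Allow from @LOCAL
          </Limit>
        </Policy>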

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-12

    - by Bob Rhubart
    15 Lessons from 15 Years as a Software Architect | Ingo Rammer
    In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years.

    Adding a runtime picker to a taskflow parameter in WebCenter | Yannick Ongena
    Oracle ACE Yannick Ongena shows how to create an Oracle WebCenter popup to allow users to "select items or do more complex things."

    Oracle Identity Manager 11g R2 Catalog | Daniel Gralewski
    Oracle Fusion Middleware A-Team blogger Daniel Gralewski shares a detailed overview of the new Catalog feature, one of the most talked-about features in the latest release of Oracle Identity Manager 11g.

    Cloud API and service designers, stop thinking small | Cloud Computing - InfoWorld
    "The focus must shift away from fine-grained APIs that provide some type of primitive service, such as pushing data to a block of storage or perhaps making a request to a cloud-rooted database," says InfoWorld's David Linthicum. "To go beyond primitives, you must understand how these services should be used in a much larger architectural context. In other words, you need to understand how businesses will employ these services to form real workplace solutions -- inside and outside the enterprise."

    Oracle Solaris 8 P2V with Oracle database 10.2 and ASM | Orgad Kimchi
    Orgad Kimchi's technical post illustrates the migration of "a Solaris 8 physical system, with Oracle database version 10.2.0.5 with ASM file-system located on a SAN storage, into a Solaris 8 branded zone inside a Solaris 10 guest domain on top of a Solaris 11 control domain."

    Thought for the Day
    "The hardest single part of building a software system is deciding precisely what to build." — Fred Brooks
    Source: SoftwareQuotes.com

    Read the article

  • What is the `ServerName` directive for apache2 and what does it do?

    - by freddydoggie
    I do not know what this config setting means. Does it mean that it registers a domain name? Is it like DNS? Here is what I have in my apache2 default config:

    ServerName staugie.org
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www

    <Directory />
        Options FollowSymLinks Indexes MultiViews
        AllowOverride All
    </Directory>

    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride All
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/access.log combined

    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>

    Also, is there any way to register a free domain through the Apache foundation?
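
    For context: ServerName does not register anything with DNS; it tells Apache which hostname this (virtual) host answers to, so requests whose Host header matches are routed here. A minimal name-based virtual host illustrating this (the domain is the asker's, the rest is illustrative):

        <VirtualHost *:80>
            # Apache compares each request's Host: header against ServerName
            # and ServerAlias to choose the matching virtual host. DNS for the
            # name must be set up separately, at a registrar or DNS provider.
            ServerName staugie.org
            ServerAlias www.staugie.org
            DocumentRoot /var/www
        </VirtualHost>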

    Read the article

  • Alternative to nofollow: custom 302 url shortener?

    - by Dogweather
    Here's the scenario: lots of blogging platforms make it tedious to insert nofollow into links within the post content, i.e., you need to edit the HTML, format it correctly, etc. I have a client who posts lots of content with links that should be nofollowed, and I thought of a novel way to handle this, since the blogging platform they're using makes it hard: I install a URL shortener web app on the client's domain. The shortener works as normal, except it redirects via 302 instead of 301. The PageRank will therefore stay at the shortener's domain, and not flow on to the target site. Part 2: in order to get the PageRank to collect meaningfully, say on the site's home page, the shortened URLs would be generated like this: /link?12345 instead of /link/12345. And then the path /link would 301 to the home page. This way the id is a query parameter, not a path element, and thus all the incoming shortened links point at one path, which transfers PageRank to the home page. So that's my idea. I wanted to see if anybody could find problems with it. Thanks!
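
    A minimal PHP sketch of the scheme as described (lookup_target is a hypothetical id-to-URL mapping; everything here is illustrative, not the asker's code):

        <?php
        // Handler for /link -- the id arrives as the query string (/link?12345)
        $id = $_SERVER['QUERY_STRING'];
        $target = $id !== '' ? lookup_target($id) : null; // hypothetical DB lookup

        if ($target !== null) {
            header('Location: ' . $target, true, 302); // temporary: PageRank stays here
        } else {
            header('Location: /', true, 301);          // bare /link flows to the home page
        }
        exit;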

    Read the article

  • Pandaboard crash on startup or freeze after minutes

    - by Meach
    I just received my Pandaboard ES (rev B) and I am having trouble after installing ubuntu-omap4-addons. Once I had copied the image ubuntu-12.04-preinstalled-desktop-armhf+omap4.img onto my SD card and booted the Pandaboard with it, I ran the following commands:

    sudo add-apt-repository ppa:tiomap-dev/release
    sudo apt-get update
    sudo apt-get dist-upgrade
    sudo apt-get install ubuntu-omap4-extras

    At the end of the installation of ubuntu-omap4-extras, Ubuntu tells me that a problem occurred when the console displays:

    ldconfig deferred processing now taking place

    Clicking on "report the problem" tells me that the problem concerns pvr-omap4-dkms. I read somewhere that this can happen and that it is better to reinstall pvr-omap4-dkms, which I am doing by running:

    sudo apt-get install --reinstall pvr-omap4-dkms

    I reboot. Then the board sometimes has difficulty starting Ubuntu: it freezes during the loading page, and the only action I can take is unplugging the board to start it again. Other times, Ubuntu loads successfully but then freezes at some random point, in the range of 20-40 minutes. I searched the internet for a similar bug and found this: https://bugs.launchpad.net/ubuntu/+source/linux-ti-omap4/+bug/971091 So I typed this in:

    update-rc.d ondemand disable
    apt-get -y install cpufrequtils
    echo 'ENABLE="true" GOVERNOR="performance" MAX_SPEED="0" MIN_SPEED="0"' > /etc/default/cpufrequtils
    cpufreq-set -r -g performance
    reboot

    But it doesn't seem to fix the bug. Another detail: on startup, before the Ubuntu loading screen (when the two penguins are displayed :)), it shows this:

    [0.297271] CPU1: Unknown IPI message 0x1
    [0.308990] omap_hwmod: mcpdm: _wait_target_ready error: -16
    [0.354705] omap_mux_get_by_name: Could not find signal uart1_cts.uart1_cts
    [0.354766] omap_hwmod_mux_init: Could not allocate device mux entry
    [2.107086] thermal_get_slope:Getting slope is not supported for domain gpu
    [2.107116] thermal_get_offset:Getting offset is not supported for domain gpu
    [2.107299] stm_fw: vendor driver stm_ti1.0 registered
    [8.725555] OMAPRPC: Registration of OMAPRPC rpmsg service returned 0! debug=0

    Any idea what can be wrong? I am not that good with Ubuntu, so any help will be appreciated. Cheers! Meach

    Read the article

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems not to be taking effect. Here's the story so far: we're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). So here is the timeline of what I've tried, starting on July 29th:

    1. Used 301 redirects for all pages (e.g. thenewhive.com/tag/art -> newhive.com/tag/art).

    At this point we noticed that we had disappeared from search results when searching "The New Hive"; the front page used to be all links to our site plus a couple of news articles about the company. So on August 5th I:

    2. Verified the new domain in Webmaster Tools (the old domain was already verified).
    3. Submitted a change of address request via Webmaster Tools / Configuration / Change of Address.

    Then after another week, on August 13th, I:

    4. Went to Webmaster Tools / Health / Fetch as Google, fetched our homepage and a couple of sub-pages, all successfully.
    5. Clicked "Submit to Index" for the homepage.

    As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard, so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: no new messages or recent critical issues. Crawl Errors: no data available. From Health - Index Status:

    Total indexed: 0
    Ever crawled: 42,490
    Not selected: 12
    Blocked by robots: 0

    I'm really at a loss here; any help would be appreciated.

    Read the article

  • use subdomain on different host

    - by Roy
    I want to accomplish something that I thought was simple. My wish is as follows: I have a domain name with hosting and a WordPress multisite (with subfolder setup) installed and running: gangleri.nl. I have another domain at another host, without hosting: monas.nl. I created a subdomain on gangleri.nl, monas.gangleri.nl, and the domain monas.nl redirects to that subdomain. Now what I want is for monas.nl to act like a website in its own right, not a website in a subdomain. I would like to have post URLs such as monas.nl/posttitle. I first thought to do this with the DNS settings of monas.nl. I now have a URL forward; CURL is not what I want, and I did not manage to get A records or CNAMEs to work. I then tried using the .htaccess file of the WP installation in monas.gangleri.nl. I tried 301s, rewrites and whatnot, but also without success. Meanwhile, I have been reading so much that I no longer have a clue what to do. An A record doesn't sound probable, since I have no IP for the subdomain, so an A record would point to gangleri.nl rather than to the subdomain. Also I have no idea whether I should do something in the DNS settings of gangleri.nl, of monas.nl, of both, or somewhere else entirely. I have the idea that I've tried everything, but the more I try and read about it, the less I can get my head around it. People talk about A records to subdomains while I can only use IPs, or CNAME settings that my host doesn't support. Could somebody tell me if what I want is possible and, if so, take me by the hand and guide me through it?
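
    For what it's worth, a sketch of the usual pattern (the IP is a placeholder for the server that hosts the WordPress network; this assumes the multisite is then configured to answer for monas.nl directly, e.g. via domain mapping):

        ; zone file for monas.nl -- point the name at the gangleri.nl server
        monas.nl.       3600  IN  A      203.0.113.10
        www.monas.nl.   3600  IN  CNAME  monas.nl.

    With DNS pointing at the right box, the redirect to monas.gangleri.nl becomes unnecessary and WordPress can serve monas.nl/posttitle itself.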

    Read the article

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial trail dedicated to the topic. It is a significant leap forward in processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar: very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 does not support the new API out of the box. So what are you to do? Fortunately you can make use of fairly simple JPA 2.1 type converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well-written blog entry. Besides explaining the problem and the solution, the entry is actually very good for getting a better understanding of JPA 2.1 type converters as well. I think such a set of converters may be a good fit for Apache DeltaSpike as a Java EE 7 extension. In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well-researched JIRA entry asking for such support in a future version of the JPA specification that's well worth looking at. Another possibility of course is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?
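
    To give a flavor of the pattern, here is a minimal sketch of a JPA 2.1 AttributeConverter for java.time.LocalDate (one converter per temporal type; this follows the general approach described in the post, not its exact code):

        import java.sql.Date;
        import java.time.LocalDate;
        import javax.persistence.AttributeConverter;
        import javax.persistence.Converter;

        // autoApply = true makes the converter kick in for every LocalDate
        // attribute in the persistence unit, without per-field annotations.
        @Converter(autoApply = true)
        public class LocalDateConverter implements AttributeConverter<LocalDate, Date> {

            @Override
            public Date convertToDatabaseColumn(LocalDate attribute) {
                return attribute == null ? null : Date.valueOf(attribute);
            }

            @Override
            public LocalDate convertToEntityAttribute(Date dbData) {
                return dbData == null ? null : dbData.toLocalDate();
            }
        }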

    Read the article

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. Now what's interesting is that these domains are crossing each other in sitelinks for searches such as "calvary christian college townsville": the results mix sitelinks for one school (the Townsville school, as per the search term) with sitelinks for the other school. I put in a sitelink demotion for this 6 months ago (we control calvary.qld.edu.au); however, we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we would need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • Organization standards for large programs

    - by Chronicide
    I'm the only software developer at the company where I work. I was hired straight out of college, and I've been working here for several years. When I started, everyone was managing their own data as they saw fit (lots of filing cabinets). Until recently, I had only been tasked with small standalone projects to help with simple workflows. At the beginning of the year I was asked to build a replacement for their HR software. I used SQL Server, Entity Framework and WPF, along with MVVM and the Repository/Unit of Work patterns. It was a huge hit. I was very happy with how it went, and it was a very solid program. As such, my employer asked me to expand this program into a corporate dashboard that tracks all of their various corporate data domains (people, salary, vehicles/assets, statistics, etc.). I use integrated authentication, and thanks to the initial HR build I can map users to people in positions, so I know who is who when they open the program and I can show each person a customized dashboard based on their work functions. My concern is that I've never worked on such a large project. I'm planning, meeting with end users, developing, documenting, testing and deploying it on my own. I'm partway through the second addition, and I'm seeing that my code is getting disorganized. It's still programmed well; I'm just struggling with the organization of namespaces, classes and the database model. Are there any good guidelines to follow that will help me keep everything straight? As I have it now, I have folders for Data, Repositories/Unit of Work, Views, View Models, XAML Resources and miscellaneous utilities. Should I make parent folders for each data domain? Should I make separate EF models per domain instead of the one I have for the entire database? Are there any standards out there for organizing large programs that span multiple data domains? I would appreciate any suggestions.
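
    For illustration only, a sketch of the domain-first layout the question is circling around (names are hypothetical; each domain owns its views, view models and repositories, while cross-cutting pieces stay shared):

        CorporateDashboard/
            Domains/
                People/          (entities, repositories, view models, views)
                Assets/
                Statistics/
            Infrastructure/      (DbContext, unit of work, auth helpers)
            Resources/           (shared XAML styles and converters)
            Utilities/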

    Read the article

  • page rank 0 penalty

    - by mark
    I have had a WordPress blog and a www website on the same domain for about one year; together they are about 170 pages. The PageRank is still 0. I understand that a PageRank of 0 can be a penalty for duplicate content. The pages are indexed in Google, but there is still no PageRank. In Google Webmaster Tools there is no indication of any problem. I asked for reconsideration of both the blog and the website a month ago; Google accepted the reconsideration, but it did not change anything. Other pages of similar size and similar audience earn PR 4-6. Is there something I can do in order to get a fair PageRank? A coworker told me that it might be that a link farm is using the content and that I can do nothing about it. Is there a reliable way to check for something like that? I do not like to give up so quickly. Is there a chance to fix this by, for example, moving to another domain?

    Read the article

  • Best Method/Library For Remote Authentication

    - by Mike
    I have a web app that has a REST API interface, http://api.example.com/core, that uses API keys and domain-specific keys (a key has to be used on the specified domain). I will then have several client sites with AJAX forms where we will require users to sign in before being able to submit the form. The form will add data to a table and send an email to several recipients, along with checking credentials. It will use an AJAX submit to our REST API, and all communication to/from the API is over SSL. The ideal flow:

    Visitor fills out form -> enters user/pass -> submits form -> AJAX request to REST API -> API verifies credentials -> does CRUD -> sends emails -> returns 200/403 -> perform DOM manipulation based on the return code in the AJAX callback

    Are there any libraries in PHP that currently do something similar to this? Would OAuth be a good fit for this scenario? Languages used are: JS/HTML/CSS/PHP/MySQL
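
    To make the client half of that flow concrete, a minimal jQuery-flavored sketch of the submit step (the endpoint, key and handler names are all illustrative assumptions):

        $.ajax({
            url: 'https://api.example.com/core/forms',        // hypothetical endpoint
            type: 'POST',
            data: {
                api_key: API_KEY,                             // domain-specific key
                username: user, password: pass,               // credentials to verify
                payload: formData                             // the form's field values
            },
            success: function () { showConfirmation(); },     // 200: CRUD + emails done
            error: function (xhr) {
                if (xhr.status === 403) { showLoginError(); } // 403: bad credentials
            }
        });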

    Read the article

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a MySQL dump, a git add-all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, and the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something, and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:

    1. Making changes on a development domain and committing the code frequently.
    2. Archiving the entire site after a successful deployment from the development domain.
    3. Having automatic daily database and user-content backups.

    I still like the idea of backing up SQL dumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.
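
    As a point of comparison, a minimal cron-able sketch of step 3 above (the paths and database name are placeholders, and it assumes MySQL credentials are supplied via ~/.my.cnf):

        #!/bin/sh
        # Nightly backup: dated database dump plus an archive of user uploads,
        # kept outside the code repository entirely.
        STAMP=$(date +%F)
        mysqldump --single-transaction forum_db | gzip > /backups/db-$STAMP.sql.gz
        tar czf /backups/uploads-$STAMP.tar.gz /var/www/site/uploads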

    Read the article

  • SEO and internal links

    - by hanazair
    I'm fairly new to SEO, and although I've read many articles on the topic I still don't have a clear idea of how to get my client's website onto the first page of a Google search. I ran a Moz competitor analysis and saw that a competitor at the top of the search results has approximately the same Domain Authority, MozRank and MozTrust. They have 8 external linking root domains while my client's site has five. Yet the competitor comes up as one of the top sites on the first page, and my client's site is on page 3. Then I noticed one drastic difference in the competitor's numbers, and that is Total Links: they have 1,388! I don't understand how this could be a positive factor in search engine ranking, or how they can legitimately have 1,388 links when only 14 of those are external. Another competitor who is #2 in the search engine rankings has 773 links total, with only 14 external links. It seems fishy, but yet there they are, at the top of the search engine results. Is that some current way to trick search engines? What should I do if I'd like to get my client's website onto the first page by legitimate means? Thanks.

    Read the article

  • Google Analytics Social Tracking implementation. Is Google's example correct?

    - by s_a
    The current Google Analytics help page on social tracking (developers.google.com/analytics/devguides/collection/gajs/gaTrackingSocial?hl=es-419) links to this page with an example of the implementation: http://analytics-api-samples.googlecode.com/svn/trunk/src/tracking/javascript/v5/social/facebook_js_async.html I've followed the example carefully, yet social interactions are not registered. This is the webpage with the non-working setup: http://bit.ly/1dA00dY (obscured domain as per Google's Webmaster Central recommendations for their product forums). This is the structure of the page: in the head, the ga async code copied from the Analytics page, a script tag linking to the ga_social_tracking.js script stored on the same domain, and the Twitter JS loading tag; in the body, the fb-root div, the Facebook async loading JS including the _ga.trackFacebook(); call, and the social buttons afterwards (with the proper URL and Twitter handle). That's it. As far as I can tell, I have implemented it exactly like the example, but likes and tweets aren't registered. I have also altered ga_social_tracking.js to register the social interactions as events, adding the code below. It doesn't work either. What could be wrong? Thanks!

    Code added to ga_social_tracking.js:

    var url = document.URL;
    var category = 'Social Media';

    /* Facebook */
    FB.Event.subscribe('edge.create', function(href, widget) {
        _gaq.push(['_trackEvent', category, 'Facebook', url]);
    });

    /* Twitter */
    twttr.events.bind('tweet', function(event) {
        _gaq.push(['_trackEvent', category, 'Twitter', url]);
    });

    Read the article

  • URL is generating a /#!/splash-page

    - by user32642
    My site for some reason is generating a hashbang (/#!/splash-page) in the URL. For example, when I type www.modernvintage1005.com, the browser returns www.modernvintage1005.com/#!/splash-page, and every subsequent page is /#!/about, /#!/contact, and so forth. There's absolutely nothing on Google about this. There is a lot of rewrite help for eliminating index.php from the home page, but that's it. How do I rewrite it to just say domain.com, domain.com/about.html, etc.? Here is my .htaccess file if you need to see it:

    # Rewrite Rule
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>

    # compress text, html, javascript, css, xml:
    <IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/plain
    AddOutputFilterByType DEFLATE text/html
    AddOutputFilterByType DEFLATE text/xml
    AddOutputFilterByType DEFLATE text/css
    AddOutputFilterByType DEFLATE application/xml
    AddOutputFilterByType DEFLATE application/xhtml+xml
    AddOutputFilterByType DEFLATE application/rss+xml
    AddOutputFilterByType DEFLATE application/javascript
    AddOutputFilterByType DEFLATE application/x-javascript
    AddType x-font/otf .otf
    AddType x-font/ttf .ttf
    AddType x-font/eot .eot
    AddType x-font/woff .woff
    AddType image/x-icon .ico
    AddType image/png .png
    </IfModule>

    ## EXPIRES CACHING ##
    <IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpg "access 1 year"
    ExpiresByType image/jpeg "access 1 year"
    ExpiresByType image/gif "access 1 year"
    ExpiresByType image/png "access 1 year"
    ExpiresByType text/css "access 1 month"
    ExpiresByType application/pdf "access 1 month"
    ExpiresByType text/x-javascript "access 1 month"
    ExpiresByType application/x-shockwave-flash "access 1 month"
    ExpiresByType image/x-icon "access 1 year"
    ExpiresDefault "access 2 days"
    </IfModule>
    ## EXPIRES CACHING ##

    Read the article

  • Run script at user login as root, with a catch

    - by tubaguy50035
    I'm trying to run a PHP script as root on user login. The PHP script adds a Samba share to the Samba config, thus the need for root privileges. The only issue here is that the user doesn't exist yet. This system is integrated with Active Directory, so when a user logs in for the first time, a home directory is created for them under /home/DOMAIN/username. I've found this question and that seems like the correct way to get what I want, but I'm having trouble with the syntax since I don't know the user's name in advance. Would it be something like:

    ALL ALL=(ALL) NOPASSWD: /home/DOMAIN/*/createSambaShare.php

    This doesn't seem to work as it is currently. Anyone have any ideas, or a "scripted" way to add a Samba share on user login? Since I've made other changes to /etc/skel, I just added the bash necessary to run the PHP script to the .profile in there. This then gets copied to the "new" user's home and it tries to run the PHP script. But it fails, because these are not privileged users. Changing permissions on the PHP script will not help; it needs to be run with sudo because it opens the Samba config file for writing, and letting any user run it unprivileged would just produce a PHP error. The homes Samba directive doesn't work for my use case: I need the Samba share to exist once the user exists on the server, even when they're not logged in.

    Read the article

  • Cannot submit change of address to subdomain in Google Webmaster Tools?

    - by RCNeil
    I am pointing several domains to one URL, a URL which happens to include a subdomain. ALL of the domains use 301 redirects to point to this new address. One of the older domains (which used to be a site) is a 'property' in Webmaster Tools, as is the new site (the one with the subdomain). When registering a 'Change of Address' for the old site with Webmaster Tools, it suggests the following method:

    1. Set up your content on your new domain. (done)
    2. Redirect content from your old site using 301 redirects. (done)
    3. Add and verify your new site to Webmaster Tools. (done)

    Then, directly below that, to proceed, it says: "Tell us the URL of your new domain", followed by: "Your account doesn't contain any sites we can use for a change of address. Add and verify the new site, then try again." I have already submitted and verified the new site. The only reason I can fathom that I am getting this error is that the new site includes a subdomain. Although I don't foresee getting punished for this, as I am correctly 301-redirecting traffic anyway, I'm curious as to why the Change of Address submission isn't working for me. Has anyone else had experience with this?

    Read the article

  • DomU Installation on Ubuntu 11.10

    - by sridutt
    I am trying to add a DomU operating system on Ubuntu 11.10. I have successfully installed Xen, verified with xm info. virsh version returns:

    Compiled against library: libvir 0.9.2
    Using library: libvir 0.9.2
    Using API: Xen 3.0.1
    Running hypervisor: Xen 4.1

    Now when I tried to create a guest, VMM said: "unable to connect to 'localhost:8000'". So I followed this bug link, and I could then start adding a DomU. When adding the DomU, in the last stage, it gives the following error:

    Unable to complete install: 'POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")'

    Traceback (most recent call last):
      File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
        callback(asyncjob, *args, **kwargs)
      File "/usr/share/virt-manager/virtManager/create.py", line 1899, in do_install
        guest.start_install(False, meter=meter)
      File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1223, in start_install
        noboot)
      File "/usr/lib/pymodules/python2.7/virtinst/Guest.py", line 1291, in _create_guest
        dom = self.conn.createLinux(start_xml or final_xml, 0)
      File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1686, in createLinux
        if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
    libvirtError: POST operation failed: xend_post: error from xen daemon: (xend.err "Error creating domain: device model '/usr/lib/xen/bin/qemu-dm' not found")

    I tried following this bug link, which said the bug is solved in the package below. When I run ./configure in it, I am getting an error:

    checking for LIBXML... no
    checking libxml2 xml2-config >= 2.6.0 ... configure: error: Could not find libxml2 anywhere (see config.log for details).

    What is the problem?

    Read the article

  • Is sending data to a server via a script tag an outdated paradigm?

    - by KingOfHypocrites
    I inherited some old JavaScript code for a website tracker that submits data to the server using a script URL:

    var src = "http://domain.zzz/log/method?value1=x&value2=x";
    var e = document.createElement('script');
    e.src = src;
    document.body.appendChild(e); // the request fires once the tag is inserted

    I guess the idea was that cross-domain requests didn't have to be enabled, perhaps. Also, it was written back in 2005, and I'm not sure how well XmlHttpRequests were supported at the time. Anyone could stick this on their website and send data to our server for logging, and ideally it would work in most any browser with JavaScript. The main limitation is that all the server can do is send back JavaScript code, and each request has to wait for a response from the server (in the form of a generic acknowledgement JavaScript method call) to know it was received before the next one is sent. I can't find anyone doing this online, or any metrics as to whether it is faster or more secure than XmlHttpRequests. I don't know if this is just an old way of doing things or if it's still the best way to send data to the server when you are mostly sending data one way and you need the best performance possible. So in summary: is sending data via a script tag an outdated paradigm? Should I abandon it in favor of XmlHttpRequests?
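
    For comparison, a minimal sketch of the same one-way logging call as a plain XMLHttpRequest (this assumes the logging server is configured for CORS, i.e. it sends an Access-Control-Allow-Origin header for the client sites):

        var xhr = new XMLHttpRequest();
        xhr.open('GET', 'http://domain.zzz/log/method?value1=x&value2=x', true);
        xhr.onload = function () {
            // any 2xx status confirms the hit was recorded; no script payload
            // needs to come back, unlike the script-tag approach
        };
        xhr.send();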

    Read the article
