Search Results

Search found 9717 results on 389 pages for 'pro metedor'.


  • Looking for someone to point me in the right direction. I want to learn how to use hosted servers

    - by Leisure
    TL;DR: I want a Java program to run on a server, I want the server to forward a particular port from its external to its internal IP, and I want to store a few files on the server. Guides, please.

    I made a hack-job Java program that acts as a server for my Android application. It stores data in text files and HTML files, uploads them via FTP to my web host, and manages socket connections (using port forwarding) with any phones that connect. Right now I'm running it in NetBeans on my home computer, and I know it will probably slow down or crash once about 50 phones are connected at once. Is there any way I can run this program on a server with high bandwidth? Can someone please point me to a guide for that? I'm a noob and don't know where to start looking, and I seriously don't know anything about renting or using servers, so I need a good guide and some recommendations.

    My requirements for the server:

    Can handle about 2,000 socket connections at once
    Can run my Java code and store my text files
    Can give me a port and an IP address so TCP/IP clients can connect

    My budget is $50 CAD per month. Please set my ship sailing in the right direction; I really don't know where to look for resources.
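
    A minimal sketch of what actually running such a program on a rented Linux VPS tends to look like, once it has been packaged as a jar (the file name, host, and port below are placeholders, not details from the question):

        # copy the packaged server to the VPS and log in
        scp myserver.jar user@vps.example.com:~/
        ssh user@vps.example.com

        # start it detached from the shell so it keeps running after logout
        nohup java -jar myserver.jar > server.log 2>&1 &

        # open the TCP port the program listens on (assuming the ufw firewall)
        sudo ufw allow 5555/tcp

    Most VPS providers hand over a public IP address with the machine, which covers the "port and IP address" requirement without any extra forwarding.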

    Read the article

  • Sendmail encrypted

    - by user1948828
    I manage a website running on Apache. It has public and private areas. When people apply for an account to access the protected portions of the site, they do a TLS/SSL-protected POST containing their information, which is saved to a (hopefully) non-public directory on the server. Then I have a Python script which takes URL-encoded POSTs with this user information, sends back a plaintext confirmation to the applicant, encrypts their information with a freeware Java command-line utility to protect it (specifically this one: http://spi.dod.mil/ewizard.htm), base64-encodes it, puts it in a file as a MIME attachment, and uses sendmail to forward the file to my (and several coworkers', scattered around the country) email account(s) on an Exchange server with Outlook clients.

    This has worked well for years, but it is awkward because it involves manually decrypting the information on a Windows box once it is received, using the above-mentioned encryption utility. That significantly limits how many applications can be processed. I would like to encrypt my information in a format that Outlook/Exchange can inherently understand and display, so that these emails can be viewed simply by clicking on them. I do have company-provided PKI public certs for all the people I need to send to, and I am able to send and receive encrypted emails in Outlook manually, but I would like to know how I can send to Outlook from Apache/Linux/Python from the command line using the same PKI certs. I don't need to receive them, just send. Is there a utility that can do this? I had thought PGP might, but I haven't been able to figure it out.
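
    A hedged sketch of one approach that matches these constraints: Outlook/Exchange natively display S/MIME mail, and OpenSSL's smime subcommand can build such a message from the command line using an existing PKI public certificate, so a Python script can produce one and hand it to sendmail roughly like this (the file names, addresses, and certificate path are placeholders, not details from the question):

        import subprocess

        def send_smime(body_path, recipient_cert, from_addr, to_addr, subject):
            # Encrypt the plaintext body to the recipient's public certificate.
            # "openssl smime -encrypt" emits a complete MIME message with the
            # given From/To/Subject headers, which Outlook can open natively.
            message = subprocess.run(
                ["openssl", "smime", "-encrypt", "-aes256",
                 "-from", from_addr, "-to", to_addr, "-subject", subject,
                 "-in", body_path, recipient_cert],
                check=True, capture_output=True).stdout
            # Hand the finished message to the local MTA for delivery.
            subprocess.run(["/usr/sbin/sendmail", "-t"], input=message, check=True)

        send_smime("application.txt", "coworker-cert.pem",
                   "webserver@example.org", "coworker@example.org",
                   "New account application")

    Signing (as opposed to encrypting) would additionally need the sender's own certificate and private key, which is a separate decision.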

    Read the article

  • redirect an old URL using web.config

    - by Dog
    I'm still very new to URL rewrites and redirects, and I'm having problems with something I thought was quite simple. I've just rebuilt a website and want to redirect the old URLs to the new ones. For example, http://www.mydomain.com/about.asp?lang=1 should now be http://www.mydomain.com/content.asp?id=100230&title=about&langid=1. Unfortunately, everything I've tried either gives me errors or simply does nothing. Here is one rule I tried:

        <rule name="redirectoldabout" enabled="true" stopProcessing="true">
          <match url="( .*)" negate="true" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^mydomain\.com/about\.asp\?lang=1$" />
          </conditions>
          <action type="Redirect" url="http://www.mydomain.com/content.asp?id=100230&title=about&langid=1" redirectType="Permanent" />
        </rule>

    But I get an error back:

        HTTP Error 500.19 - Internal Server Error
        The requested page cannot be accessed because the related configuration data for the page is invalid.

    Any suggestions as to what I'm doing wrong? Thanks, Dog
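
    For comparison, a hedged sketch of how this kind of redirect is usually written for the IIS URL Rewrite module: the query string is tested in a condition rather than in the url pattern (which only sees the path), and ampersands inside an XML attribute must be escaped as &amp;, which is also a common cause of the 500.19 "configuration data is invalid" error. The id/title values are simply copied from the question:

        <rule name="redirectoldabout" enabled="true" stopProcessing="true">
          <match url="^about\.asp$" />
          <conditions>
            <add input="{QUERY_STRING}" pattern="^lang=1$" />
          </conditions>
          <action type="Redirect"
                  url="http://www.mydomain.com/content.asp?id=100230&amp;title=about&amp;langid=1"
                  appendQueryString="false" redirectType="Permanent" />
        </rule>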

    Read the article

  • Spam bot constantly hitting our site 800-1,000 times a day. Causing loss in sales

    - by akaDanPaul
    For the past 5 months our site has been receiving hits from these four sites:

        sheratonbd.com
        newsheraton.com
        newsheration.com
        newsheratonltd.com

    Typically the exact URL they come from looks something like this: http://www.newsheraton.com/ClickEarnArea.aspx?loginsession_expiredlogin=85. The spam bot goes to our homepage, stays there for about a minute, and then exits. Luckily we have some pretty beefy servers, so it hasn't come close to overloading them yet. Last month I started blocking the IP addresses of the spam bots, but they seem to keep getting new ones every day. So far I have blocked over 200 IP addresses; below are a few of the ones I have blocked. They all come from Bangladesh.

        58.97.238.214
        58.97.149.132
        180.234.109.108
        180.149.31.221
        117.18.231.5
        117.18.231.12

    Since this has been going on for the past 5 months, our real site traffic has started to drop, and every day our orders get lower and lower. Also, since these spam bots simply go to our homepage and then leave, our bounce rate in analytics has skyrocketed. My questions are:

    Is it possible that these spam bots are affecting our SEO? 60% of our orders come from natural search, and since this whole thing started, orders have slowly been dropping.
    What would be the reason someone would want to waste resources doing this to our site? IPs aren't free and neither are domain names, so what would be the goal in doing this to us? We have Google AdWords but don't advertise on extended networks, nor do we advertise in Bangladesh since we don't ship there, so they are not making money on AdSense.
    Has anyone experienced anything similar to this? What did you do, and what was the final outcome?
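
    A hedged illustration of one common stopgap when the IP addresses keep changing but the referring domains do not: block on the Referer header instead of the IP. The snippet below assumes an Apache front end with mod_rewrite, which the question does not state; on IIS the equivalent would be a URL Rewrite request-blocking rule.

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} (sheratonbd|newsheraton|newsheration|newsheratonltd)\. [NC]
        RewriteRule .* - [F,L]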

    Read the article

  • Avoiding Duplicate Content Penalties on a Corporate/Franchise website

    - by heath
    My question is really an extension of a previous question that was ported from Stack Overflow and closed, so I cannot edit it. The basic gist is that a regional franchise company has decided to force all independent stores into one website look; they currently all have their own domains and completely different websites. After reading the helpful answers and looking over some links provided, I think my solution is to put a 301 on each franchise store site (acme-store1.com, acme-store2.com, etc.) back to the main corporate site (acme.com). All of the company history, product info, etc. (about 90% of the entire site) applies to all stores. However, each store should have some exclusive content such as staff, location pictures, exclusive events and promotions, etc. I originally thought that I would simply do something like acme.com/store1/staff, acme.com/store2/staff, etc. for the store-exclusive content, and then acme.com/our-company, for example, would cover all stores. However, I now see two issues that I don't know how to solve.

    1. They want to see site stats based on which store site the visitor came from. If a user comes from acme-store1.com, is redirected to acme.com and hits several pages, don't I need to somehow keep that original site in the new URL to track each page in that user's session and show they originally came from acme-store1.com?

    2. Each store is still independently owned and is essentially still in competition with the other stores, albeit in less competition than they are with other brands. This is important because each store would like THEIR contact info, links to their social media pages, their mailing list sign-up, and customer requests on EVERY page. So if a user originally goes to acme-store1.com and is redirected to acme.com, it should still look to the user as if it's all about store 1, even though 90% of the content will be exactly the same as it is on the store 2, store 3, and corporate sites. For example, acme.com/our-company would have the same company history and the same header/footer/navigation, BUT depending on the original site the user came from, it would display contact info and links for THAT store. If someone came directly to the corporate site, it would display corporate contact info and links (they have their own as well).

    I was considering having all redirects go to store1.acme.com, store2.acme.com, etc. (or acme.com/store1), and then I can dynamically add the contact info and appropriate links based on the subdomain or subfolder. But then I have to worry about duplicate content penalties because, again, about 90% of the text in these "subdomains" is the same. For reference, this is a PHP5 site. I've already written a compact framework utilizing templates and mod_rewrite that I've used for other sites. Is this an easy fix that I'm just not grasping? Any suggestions?
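
    As a hedged aside (not something raised in the question itself), the usual markup-level hint for near-duplicate pages like these is a canonical link on each store-specific copy pointing at the page treated as the primary version, for example:

        <link rel="canonical" href="http://acme.com/our-company" />

    Whether that fits here depends on whether the store pages should rank on their own, which is exactly the tension the question describes.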

    Read the article

  • What am I doing wrong in my config for MySql?

    - by Knight Hawk3
    When I load my my.conf with the config below, MySQL fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; not sure how to check, I only installed it today). I will give you any info you ask for. Thanks for helping!

        # The following options will be passed to all MySQL clients
        [client]
        #password = your_password
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        skip-locking
        key_buffer = 16K
        max_allowed_packet = 1M
        table_cache = 4
        sort_buffer_size = 64K
        read_buffer_size = 256K
        read_rnd_buffer_size = 256K
        net_buffer_length = 2K
        thread_stack = 64K

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (using the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking
        server-id = 1

        # Uncomment the following if you want to log updates
        #log-bin=mysql-bin

        # Uncomment the following if you are NOT using BDB tables
        skip-bdb

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = /var/lib/mysql/
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /var/lib/mysql/
        #innodb_log_arch_dir = /var/lib/mysql/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 16M
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 5M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50
        skip-innodb

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [myisamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [mysqlhotcopy]
        interactive-timeout

    So what is my silly error?

    Read the article

  • Permalink format in WordPress: how to choose?

    - by BrownAndFriendly
    In WordPress, I have the option to choose what the permalink URL looks like. The common format for blogs is Year/Month/Day or Year/Month. However, I've occasionally seen some successful blogs leave the date out entirely, such as http://mixergy.com/dane-maxwell-zannee-interview/. What's the impact of the dateless format on SEO? Obviously it's easier on the eye, but does it negatively affect search ranking? Thank you

    Read the article

  • Want to change the web host for a Google Apps email domain?

    - by Sathyam Shivam Sundaram
    I have a standard (free) Google Apps email service, which we have been using for the last 5 months. Our website was hosted with a third-party web hosting company. Now I am planning to change my web hosting provider, but I want to keep my domain registered with the previous company. Does Google Apps allow changing the web host for a domain that is already registered with Google Apps for the email service? Has anybody done this?

    Read the article

  • Single page not appearing in Google Search

    - by Dan
    Description

    I have a static franchise website with various sub-pages, each dedicated to an individual franchisee. The only thing slightly similar between the franchisee pages is the page title, which follows this structure:

        <title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title>

    THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees; however, THE_LOCATION changes depending on where they are located in the UK. Each franchisee page has the following <meta /> tags:

        <meta name="DC.creator" content="user"/>
        <meta name="DC.format" content="text/html"/>
        <meta name="DC.language" content="en"/>
        <meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/>
        <meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/>
        <meta name="DC.type" content="Page"/>
        <meta name="DC.distribution" content="Global"/>
        <meta name="robots" content="ALL"/>
        <meta name="distribution" content="Global"/>

    The main content on each franchisee page is completely different.

    The Problem

    There is one particular franchisee page, located in Area A, which will not appear in Google Search results at all, even though every single other franchisee is number 1 if you search Google for "THE_COMPANY, THE_LOCATION". And if I do the same search on Bing, Yahoo, or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason blacklisted one page on the site?

    What I Have Tried

    Ensuring the page is referenced in my sitemap.xml file
    'Fetching as Google Bot' the link www.the_company.co.uk/areaa; when that came back as OK I would submit it to the index
    Resubmitting the sitemap.xml file in Webmaster Tools
    Linking to the Area A page from another page's content; for this I also waited about 3 weeks before checking again, to give Google time to re-index
    Making a change to the page content and waiting another 2-3 weeks
    Removing the page completely and recreating it with an alternative URL

    The closest thing I have found to this issue is this Stack Overflow question, but this particular franchisee has existed for almost a year; it used to appear in Google searches but no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't affected anything else on the site and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this. Thanks.

    Update

    In line with Daniel Fukuda's answer below, I have followed some of his steps, but everything seems to check out alright:

    HTTP Headers:

        HTTP/1.1 200 OK
        Date => Tue, 25 Feb 2014 16:31:29 GMT
        Server => Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1
        Content-Length => 40078
        Expires => Sat, 01 Jan 2000 00:00:00 GMT
        Content-Type => text/html;charset=utf-8
        Content-Language => en
        Vary => Accept-Encoding
        Connection => close

    Robots <meta /> tag:

        <meta name="robots" content="ALL"/>

    I have updated this <meta /> tag to read content="INDEX" instead now.

    robots.txt:

        User-agent: *
        Disallow:

        User-Agent: Googlebot
        Disallow: /*sendto_form$
        Disallow: /*folder_factories$

    Using site:THE_COMPANY.co.uk: searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that, searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand...

    Update

    It appears Google likes to drop pages from the index every now and then; despite my steps above, I left the site alone and the page appeared back in the SERPs by itself.

    Read the article

  • WordPress injection?

    - by saul
    I don't really know how to express my problem, so bear with me; this is a bit hard to explain. I have a WordPress installation, the latest version, and often (once a day) my site redirects users to the /wp-admin/install.php file, asking for my login credentials of course. I have tried reinstalling WordPress and still have not been able to figure out what they are doing. That happens regularly, and a few hours later I am able to see my site normally again. Hope this makes sense. I suspect there must be some database exploit that allows them to inject a redirect of some sort into my admin area, thus redirecting the user to said file (install.php), but that's just me; I really have no clue what else they could be doing. I looked at the source code of several PHP files and noticed some of them don't include a closing ?> tag. Could that be an issue? My hosting company is iPage; I've contacted them and they say there's nothing wrong with my files. Anyone have a clue? I can paste the code of any source file.

    Read the article

  • What dangers await if I block non-standard, non-major-usa search engine bots from my USA only website?

    - by Ryan
    I noticed tons of bandwidth being used by non-USA search engine bots, so I began blocking them in an effort to save bandwidth and CPU cycles for actual users and the search engines they come from (Google, Bing, Yahoo, Ask, etc.). Other than potentially losing some international traffic (which isn't really important to us, since all of our content is very USA-centric), what additional dangers should I be concerned about? I'm using a modified version of Jeff Starr's User Agent Blocklist.

    Read the article

  • Is this a link scheme? If so, what to do? what problems can i face?

    - by guisasso
    I was asked to remodel a website, and decided to check its rank on Alexa. Surprisingly, there are many, many different websites linking to it, none of them relevant. One particular thing about them is that none of these URLs work; they all display the exact same error when accessed, which to me is a very good indication that this is some sort of linking scheme (besides the somewhat obvious names, one of the URLs even says "scheme" in it!?). If so, how should I proceed with this website? What can I do if this is in fact a scheme, how can it hurt the website, what types of problems can I face, and what can I do about it?

        addurlnow . info
        dirlist15.addurlnow . info/Business___Economy/Services/page-12.html
        linkdirectory101 . info
        dirlist16.linkdirectory101 . info/Business___Economy/Services/page-15.html
        seonetblog . info
        dirlist52.seonetblog . info/Business___Economy/Affiliate_Schemes
        addurls . us
        dirlist21.addurls . us/Business___Economy/Services/page-10.html
        webdirectoriessite . info
        dirlist20.webdirectoriessite . info/Business___Economy/Services/page-6.html
        addurlstore . info
        dirlist10.addurlstore . info/business___economy/services/page-14.html
        ukwebdirectorys . info
        dirlist21.ukwebdirectorys . info/Business___Economy/Services/page-13.html
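
    For reference, a hedged sketch of the format Google's disavow-links file uses, which is one remedy people reach for in this situation (whether disavowing is actually warranted here is a separate judgement call; the domains are the ones listed above with the obfuscating spaces removed):

        # spammy directory network discovered via Alexa backlinks
        domain:addurlnow.info
        domain:linkdirectory101.info
        domain:seonetblog.info
        domain:addurls.us
        domain:webdirectoriessite.info
        domain:addurlstore.info
        domain:ukwebdirectorys.info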

    Read the article

  • Is there a way to see granular per-visit data in Google Analytics?

    - by jakub.g
    I've started using Google Analytics very recently and I'm a bit lost in the sea of options (I had been using Sitemeter before for some time). I've clicked through the service a lot but couldn't find what I'm accustomed to. I can see a multitude of aggregated statistics in GA, like charts of browser share, lists of country share, lists of the most visited URLs within the site, and so on, but I would actually like to analyze each of the visits themselves. Something like:

        User X, France, Chrome, 7 pageviews between 18:01 and 18:15, entered on a.htm and exited on b.htm
        User Y, UK, Firefox, 1 pageview at 18:20, entered on c.htm

    Is there an easy way to see the reports in this way (perhaps by clicking a link to a separate page to see that particular session's stats)? How do I navigate there, if so?

    Read the article

  • Valid robots.txt? [closed]

    - by psot
    I am waiting for Google to crawl my site and display the results in search. Is my robots.txt alright, and will it let Google, Bing, etc. crawl my site? Thanks!

        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /wp-admin/
        Disallow: /wp-includes/
        Disallow: /wp-content/
        Disallow: /build/
        Disallow: /css/
        Disallow: /trackback/
        Disallow: /comments
        Disallow: /assets/graphics/
        Disallow: /assets/visual/
        Disallow: /category/*/*
        Disallow: */trackback
        Disallow: */feed
        Disallow: */comments
        Disallow: /*?*
        Disallow: /*?

        User-agent: Slurp
        Disallow: /

        User-agent: Baiduspider
        Disallow: /

        User-agent: ia_archiver
        Disallow: /

        User-agent: duggmirror
        Disallow: /

        User-agent: Yandex
        Disallow: /

        Sitemap: http://example.com/sitemap.xml.gz

    Read the article

  • Cannot access personal website from home IP. More details inside.

    - by GX67
    This is a recent problem I've been having. My site can be accessed from almost everywhere except from my home IP, which is where I do most of my editing and updating. I've tested my connection from my school's network, from a friend's connection out of state (multiple states), and through a tethered connection on my friend's Android. It works in all those cases: viewing the site, accessing the cPanel, and using FTP. Here's what happens when I try to view it from my home IP:

    The page times out in Firefox, IE, and Chrome.
    Using cmd, I ran tracert and ping, and both failed. Log here.
    downforeveryoneorjustme.com says my site is up, as do the other site checkers.
    I can't access my cPanel or FTP accounts.
    I can't access the host's site. (I use perfectz.info for hosting, and I can't access their site either.)

    System settings: no firewall enabled; ports are seemingly properly forwarded (i.e. the ports are open in the router settings, and are open everywhere else). I have an email forwarder set up from the cPanel that works just fine (i.e. I can receive emails sent to that address). If any other information is needed, I'll do my best to provide it.

    UPDATE

    @ilhan: I use two things: 1) the site cPanel from in-browser, and 2) Dreamweaver CS5 FTP.
    @Matthias: I tested both, and it passes the dual stack test with a 10/10. What should I do then?

    Read the article

  • Which registrar checks the most domains?

    - by Christian W
    When I want a new domain, I usually use GoDaddy to check availability and another registrar to register, because GoDaddy checks my wanted domain against the most TLDs. Are there any other sites/registrars that check against more TLDs? What I want is to type my desired second-level domain, e.g. bobsplace, and have it search through bobsplace.com, bobsplace.net, bobsplace.me, etc., and report back which ones are available.

    Read the article

  • Assign subdomains to separate ports on web server

    - by Michael Frank
    I have set up an Abyss web server as a little experiment, and I want to know if it is possible to map subdomains (or paths) to different ports on the machine the web server is running on. I have a couple of web UIs that I'd like to expose this way:

        192.168.1.1:8000 becomes example.com/webui1/
        192.168.1.1:8001 becomes example.com/webui2/

    The web UIs are currently reachable by accessing their ports directly, e.g. example.com:8000. I have tried using a reverse proxy, but it seems to be usable on only one internal IP at a time. What other options do I have?

    Update: the answer is good, but my current setup doesn't meet its requirements; Abyss Web Server X2 is required to use virtual hosts with Abyss.
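
    A hedged illustration of the general pattern being described, written as an Apache mod_proxy snippet purely for comparison (Abyss configures its reverse proxy through its own console; the addresses and paths below are simply the ones from the question):

        # requires mod_proxy and mod_proxy_http
        ProxyPass        /webui1/ http://192.168.1.1:8000/
        ProxyPassReverse /webui1/ http://192.168.1.1:8000/
        ProxyPass        /webui2/ http://192.168.1.1:8001/
        ProxyPassReverse /webui2/ http://192.168.1.1:8001/

    The key point is that each backend port gets its own URL prefix (or its own virtual host), rather than the whole proxy pointing at a single internal IP and port.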

    Read the article

  • My site is getting popular and bandwidth is getting expensive

    - by ElbertF
    I'm running a (small) image hosting site which has been getting a little bit of traction lately (2,000 to 5,000 visitors a day, spikes up to 10,000). It's hosted on Linode and I've already run out of the bandwidth for this month (300 GB). I've experimented with Amazon S3 but that was costing me $5 a day, Linode currently costs me $30 a month. I get a tiny bit of revenue from ads (Adsense and Black Label Ads) but it's not enough to break even. What should I do? Obviously I'd like to keep running the site but not if it starts costing me a lot of money.

    Read the article

  • .htaccess does not work without index.php on CodeIgniter

    - by Mattia
    I have read a lot of topics with the same problem, but I have not found the solution. I have a LAMP stack on an Ubuntu server. My document root is /home/utente/, and inside this directory I have another directory (turni) with a CodeIgniter web app. The web app works fine with index.php in the URL, but I want to eliminate it. I have this configuration:

    config.php in CodeIgniter:

        $config['index_page'] = '';

    .htaccess:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^system.*
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        RewriteCond %{REQUEST_URI} ^application.*
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L]

    /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/utente
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/utente/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    When I open a link of the web app without index.php in the URL, the server shows me this error:

        The requested URL /turni/auth/login was not found on this server.

    Why? If I include index.php, like /turni/index.php/auth/login, everything works fine.
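
    A hedged aside on the usual suspect in this kind of setup: with AllowOverride None, Apache never reads the .htaccess file at all, so the rewrite rules cannot fire no matter what they contain. A sketch of the Directory block that would let the .htaccess take effect (an assumption about the cause, not a confirmed fix for this exact server):

        <Directory /home/utente/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>

    After a change like this, Apache needs a reload for the new directive to apply.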

    Read the article

  • Server overhead caused by bots?

    - by giuseppe
    I have one customer website causing overhead (http://www.modacalcio.it/en/by-kind/football-boots.html). With htop open, I tried navigating the website, and most of the load comes from the Ajax links placed on the left side of the site. The website is hosted on a VPS with 3 processors and 2 GB of RAM, with enough disk space. The real problem is that this website is new and not visited much. From the http-status module I can see that the overhead is caused by bots (Google bots, Bing bots, href checkers and so on). So I suspect those spiders are trying to crawl all of those links at once; could this be causing the overhead? I have also put rel="nofollow" on those links, but this doesn't keep the bots away. Is there any way, through code or Plesk, to keep those bots away from those links?

    Read the article

  • Is it time to drop Courier from your monospace font stacks?

    - by Jeff
    I've been fine-tuning my font stacks lately and was wondering if it's safe to drop Courier from my monospace font stack yet. Would you feel comfortable dropping it? Of course, monospace is my final fallback.

    Note 1: OS testbed: WinXP, WinVista, Win7, iPhone, iPad. Based on my research, these browsers now substitute Courier New for Courier by default:

        IE9+
        Chrome 2+
        Firefox 10+
        Safari 3.1+
        iDevices

    Note 2: The default "font-family: monospace;" renders as Courier New in every browser I've tested, from IE6 through the latest iPhone/iPad devices. EDIT: One exception is Opera 12, which renders Consolas on Windows; Opera 10 renders Courier New.

    Note 3: I've noticed that Courier refuses to render with any font smoothing (anti-aliasing) in any browser I've tested, regardless of system and/or browser display settings, probably because it's an old bitmap font. This could be because of my system setup, however.

    Read the article

  • Help creating a Google Analytics funnel for the checkout process

    - by Drew
    I have a funnel question. I am currently working on tracking (through GA) guest and logged-in member activity once they get to my site's shopping cart, but I need help with setting up funnels, specifically to see:

        Total sales
        Logged-in member total sales
        Guest member sales

    The URLs associated with the checkout process are:

    Logged-in members:
        /cart (arriving at checkout)
        /checkout (checking out as a logged-in member)
        /checkout/confirmation (thank you - confirmed sale)

    Guest members:
        /cart (arriving at checkout)
        /checkout-guest (checking out as a guest)
        /checkout/confirmation (thank you - confirmed sale)

    I've tested the funnels set up for the above with 9 transactions, but the final numbers don't seem to line up. The total sales funnel shows 9 completed transactions when tracking only these two URLs:

        /cart
        /checkout/confirmation

    Which is great, because it's working. The logged-in member funnel shows a total of 9 completed transactions based on tracking each step of the logged-in URLs above. Not good, because this number should be 3. The guest checkout funnel (see guest steps above) shows 9 as well. What the?!?!?!?

    The results I am looking for should be: total sales = 9, logged-in members = 3, guest members = 6. Is there any way to set these URLs up so that the funnels report the correct results, or do I need to change the URLs and provide logged-in members and guests standalone purchase confirmation pages (which would mean I could no longer track total sales combining both streams)? Any knowledge in this area is welcome. Thanks.

    Read the article

  • How to rotate html5 canvas as page background?

    - by Sebastian P.R. Gingter
    Hi, I want to achieve the following: Image a white sheet of paper on a black desk. Then rotate the paper a little bit to the left (like, 25 degrees). Now you still have the black desk, and a rotated white box on it. In this rotated white box I want to place non-rotated normal html content like text, tables, div's etc. I already have a problem at the very first step: rotating a rectangle. This is my code so far: <html> <head> <script> function draw() { var canvas=document.getElementById("myCanvas"); var c=canvas.getContext("2d"); c.fillStyle = '#00'; c.fillRect(100, 100, 100, 100); c.rotate(20); c.fillStyle = '#ff0000'; c.fillRect(150, 150, 10, 10); } </script> </head> <body onload="draw()"> <canvas id="myCanvas" width="500" height="500"></canvas> </body> </html> With this, I see only a normal black box. Nothing else. I assume there should be a red, rotated box too, but there's nothing. What is the best approach to reach this and to have it as a (scaling) background for my web page?

    Read the article

  • Goal completions 10x higher in dashboard

    - by cjk
    I have the following table in my dashboard:

        Page path level 1    Visits    Goal Completions
        -----------------    ------    ----------------
        /sub1/                  994               1,295
        /                       102                   3
        /sub2/                   10                   1

    I know my conversion rate is 10-20%, and that in this period I actually had only 183 goal completions under /sub1/. My goal is set as a regular expression for a particular page (/success?.*), and I have a funnel set up which tracks the page before the goal (/action). The actual URLs hit would be /sub1/action then /sub1/success?1234, and /sub2/action then /sub2/success?1234. Why is the table in my dashboard giving me wildly wrong numbers? Have I done something wrong?

    Read the article

  • How to resolve "Google can't find your site's robots.txt" error?

    - by Manivasagam
    I've recently found "Google can't find your site's robots.txt" in my crawl errors. When I tried Fetch as Google, I got the result "SUCCESS", but when I look at crawl errors it still shows "Google can't find your site's robots.txt". What can I do to resolve this issue? Before this issue arose, my site was indexed within a few minutes, but now it takes much longer to be indexed in Google's search. When I access http://mydomain.com/robots.txt, it shows the data below:

        User-agent: *
        Disallow: /wp-admin/
        Disallow: /wp-includes/

    I found Blocked URLs = 0 and no other errors. Is there anything else I need to change, or what could be the solution? Any help would be appreciated.

    Read the article
