Search Results

Search found 37539 results on 1502 pages for 'google friend connect'.

Page 243 of 1502

  • Meta description not displaying in custom site search results page

    - by Stephen Connolly
    We have Google Custom Site Search implemented on our company website. When I look at the results page, I notice that the meta description is not being displayed. It just seems to be reading the link titles from our drop-down menu and using those as the description. When I search for the same page via google.com, the meta description is pulled in correctly. Any thoughts on why this might be happening? I can't see anything relevant in the Custom Site Search settings.

    Read the article

  • GEO Tool - commercial use

    - by Jens
    What I want to do: offer my clients the possibility to display their store as a small graphic (like Google Maps), just a small PNG with the location of the store. Only the specific client is able to see these images, and he pays for this area (not only for that :) ). So I'm looking for an API or something else to geocode the address and render a small (280x160 px) image. Google offers a premium licence (not cheap); there are also MS Bing and OpenStreetMap, but I'm not able to make sense of their many licences ;( Any ideas?
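
    (Editorial note, not part of the original question: as a rough sketch of the kind of request involved, the snippet below builds a static-map URL for a 280x160 image. It assumes the Google Static Maps web service and its center/zoom/size/markers/key parameters, and the API key is a placeholder.)

      import urllib.parse

      def static_map_url(address, width=280, height=160, api_key="YOUR_KEY"):
          # Build a small store-location image URL; parameter names are
          # assumptions about the Google Static Maps web service.
          params = {
              "center": address,
              "zoom": 15,
              "size": "%dx%d" % (width, height),
              "markers": address,
              "key": api_key,  # hypothetical placeholder key
          }
          return ("https://maps.googleapis.com/maps/api/staticmap?"
                  + urllib.parse.urlencode(params))

      print(static_map_url("1600 Amphitheatre Parkway, Mountain View, CA"))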

    Read the article

  • Can't load some domains

    - by Grzegorz
    I can't load some domains on my Ubuntu 13.04: youtube.com, translate.google.com, drive.google.com, vk.com. When I ping them they respond:

      PING youtube.com (217.119.79.59) 56(84) bytes of data.
      64 bytes from non-registered.plix.pl (217.119.79.59): icmp_req=1 ttl=59 time=21.9 ms
      64 bytes from non-registered.plix.pl (217.119.79.59): icmp_req=2 ttl=59 time=20.5 ms
      64 bytes from non-registered.plix.pl (217.119.79.59): icmp_req=3 ttl=59 time=21.3 ms
      --- youtube.com ping statistics ---
      3 packets transmitted, 3 received, 0% packet loss, time 2002ms
      rtt min/avg/max/mdev = 20.544/21.255/21.914/0.560 ms

    But they don't load in any browser (Firefox, Chrome, lynx). They do load on my media centre running XBMCbuntu.
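
    (Editorial note, not part of the original question: ICMP replies do not prove TCP reachability, so a quick check of ports 80 and 443 can narrow things down. A minimal Python sketch, using the hostnames from the question:)

      import socket

      hosts = ["youtube.com", "translate.google.com", "drive.google.com", "vk.com"]

      for host in hosts:
          for port in (80, 443):
              try:
                  # Ping working while this fails would point at TCP, MTU,
                  # or proxy trouble rather than DNS.
                  with socket.create_connection((host, port), timeout=5):
                      print("%s:%d reachable" % (host, port))
              except OSError as exc:
                  print("%s:%d failed: %s" % (host, port, exc))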

    Read the article

  • My domain is well indexed only in my country

    - by ali
    This is my domain: http://yon.ir, and it should mainly be shown in search results for the keyword "????? ????? ????". In Iran it is shown in Google search results at rank 13 (for that keyword), which is logical, but from IPs in other countries it's not shown even in the first 10 pages of Google results (although the domain is indexed, and when I search for the full domain title my site shows up first). It has been like this for about 10 days, and the domain is not new (it was in use by its previous owner before). So, what's the problem here?

    Read the article

  • Can I find out the number of searches on a given keyword, per state?

    - by Philippe
    I know that Google tells you how many times a certain keyword is used in a search; you can use the Google Keyword Tool for that. This tool also allows you to find out the number of "local" searches: the number of times a person from a given country searches for the keyword. My question: can you also find out how many searches originate from a given American state? In the Keyword Tool, I can only select countries, not states. Are there any other systems I can use to determine where people are searching for a given keyword?

    Read the article

  • TTS on App Engine

    - by yati sagade
    I have written a small front-end to the Festival TTS system using Python/Django. I wish to deploy it on the Google App Engine cloud. A few questions:

    1. My application uses the Festival app 'text2wave'. Will it work on the cloud?
    2. I have used Python primitives like subprocess.call() to invoke the aforementioned program. Will that work?
    3. If the answer to either or both of (1) and (2) is no, is there a free API on the web that I can use (from App Engine)? I read somewhere about placing calls from Phono to a Voxeo backend, but I'm not sure what that means. I am aware of the Google Translate extension that allows translation using an HTTP GET (REST) request, but there the text is limited to 100 chars. Bad. Plus, they may take it down at any point in time.
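
    (Editorial note, not part of the original question: if a web TTS endpoint with a roughly 100-character limit ends up being the fallback, the text has to be chunked first. A minimal sketch in plain Python; the endpoint itself is left hypothetical.)

      def chunk_text(text, limit=100):
          # Split text on word boundaries so each chunk stays under `limit`
          # characters, suitable for a TTS endpoint that caps request length.
          chunks, current = [], ""
          for word in text.split():
              candidate = (current + " " + word).strip()
              if len(candidate) <= limit:
                  current = candidate
              else:
                  if current:
                      chunks.append(current)
                  # A single word longer than `limit` is passed through unchanged.
                  current = word
          if current:
              chunks.append(current)
          return chunks

      print(chunk_text("some long passage of text to be synthesised " * 5))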

    Read the article

  • hostapd running on Ubuntu Server 13.04 only allows single station to connect when using wpa

    - by user450688
    Problem: Only a single station can connect to hostapd at a time. Any single station can connect (W8, OSX, iOS, Nexus), but when two or more hosts are connected at the same time the first client loses its connectivity. However, there are no connectivity issues when WPA is not used.

    Setup: Linux (Ubuntu Server 13.04) wireless router (with separate networks for wired WAN, wired LAN, and wireless LAN).

    iptables-save output:

      *nat
      :PREROUTING ACCEPT [0:0]
      :INPUT ACCEPT [0:0]
      :OUTPUT ACCEPT [0:0]
      :POSTROUTING ACCEPT [0:0]
      -A POSTROUTING -s 10.0.0.0/24 -o p4p1 -j MASQUERADE
      -A POSTROUTING -s 10.0.1.0/24 -o p4p1 -j MASQUERADE
      COMMIT
      *mangle
      :PREROUTING ACCEPT [13:916]
      :INPUT ACCEPT [9:708]
      :FORWARD ACCEPT [4:208]
      :OUTPUT ACCEPT [9:3492]
      :POSTROUTING ACCEPT [13:3700]
      COMMIT
      *filter
      :INPUT DROP [0:0]
      :FORWARD DROP [0:0]
      :OUTPUT ACCEPT [9:3492]
      -A INPUT -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -i p4p1 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
      -A INPUT -i eth0 -j ACCEPT
      -A INPUT -i wlan0 -j ACCEPT
      -A INPUT -i lo -j ACCEPT
      -A FORWARD -i p4p1 -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A FORWARD -i eth0 -j ACCEPT
      -A FORWARD -i wlan0 -j ACCEPT
      -A FORWARD -i lo -j ACCEPT
      COMMIT

    /etc/hostapd/hostapd.conf:

      #Wireless Interface
      interface=wlan0
      driver=nl80211
      ssid=<removed>
      hw_mode=g
      channel=6
      max_num_sta=15
      auth_algs=3
      ieee80211n=1
      wmm_enabled=1
      wme_enabled=1
      #Configure Hardware Capabilities of Interface
      ht_capab=[HT40+][SMPS-STATIC][GF][SHORT-GI-20][SHORT-GI-40][RX-STBC12]
      #Accept all MAC addresses
      macaddr_acl=0
      #Shared Key Authentication
      wpa=1
      wpa_passphrase=<removed>
      wpa_key_mgmt=WPA-PSK
      wpa_pairwise=CCMP
      rsn_pairwise=CCMP
      ###iPad Connectivity Repair
      ieee8021x=0
      eap_server=0

    Wireless card (lshw output):

      product: RT2790 Wireless 802.11n 1T/2R PCIe
      vendor: Ralink corp.
      physical id: 0
      bus info: pci@0000:03:00.0
      logical name: mon.wlan0
      version: 00
      serial: <removed>
      width: 32 bits
      clock: 33MHz
      capabilities: pm msi pciexpress bus_master cap_list logical wireless ethernet physical
      configuration: broadcast=yes driver=rt2800pci driverversion=3.8.0-25-generic firmware=0.34 ip=10.0.1.254 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn

    iw list output:

      Band 1:
      Capabilities: 0x272 (HT20/HT40, Static SM Power Save, RX Greenfield, RX HT20 SGI, RX HT40 SGI, RX STBC 2-streams)
      Max AMSDU length: 3839 bytes
      No DSSS/CCK HT40
      Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
      Minimum RX AMPDU time spacing: 2 usec (0x04)
      HT RX MCS rate indexes supported: 0-15, 32
      TX unequal modulation not supported
      HT TX Max spatial streams: 1
      HT TX MCS rate indexes supported may differ
      Frequencies: 2412 MHz [1] (27.0 dBm), 2417 MHz [2] (27.0 dBm), 2422 MHz [3] (27.0 dBm), 2427 MHz [4] (27.0 dBm), 2432 MHz [5] (27.0 dBm), 2437 MHz [6] (27.0 dBm), 2442 MHz [7] (27.0 dBm), 2447 MHz [8] (27.0 dBm), 2452 MHz [9] (27.0 dBm), 2457 MHz [10] (27.0 dBm), 2462 MHz [11] (27.0 dBm), 2467 MHz [12] (disabled), 2472 MHz [13] (disabled), 2484 MHz [14] (disabled)
      Bitrates (non-HT): 1.0 Mbps, 2.0 Mbps (short preamble supported), 5.5 Mbps (short preamble supported), 11.0 Mbps (short preamble supported), 6.0 Mbps, 9.0 Mbps, 12.0 Mbps, 18.0 Mbps, 24.0 Mbps, 36.0 Mbps, 48.0 Mbps, 54.0 Mbps
      max # scan SSIDs: 4
      max scan IEs length: 2257 bytes
      Coverage class: 0 (up to 0m)
      Supported Ciphers: WEP40 (00-0f-ac:1), WEP104 (00-0f-ac:5), TKIP (00-0f-ac:2), CCMP (00-0f-ac:4)
      Available Antennas: TX 0 RX 0
      Supported interface modes: IBSS, managed, AP, AP/VLAN, WDS, monitor, mesh point
      Software interface modes (can always be added): AP/VLAN, monitor
      Valid interface combinations: #{ AP } <= 8, total <= 8, #channels <= 1
      Supported commands: new_interface, set_interface, new_key, new_beacon, new_station, new_mpath, set_mesh_params, set_bss, authenticate, associate, deauthenticate, disassociate, join_ibss, join_mesh, set_tx_bitrate_mask, set_tx_bitrate_mask, action, frame_wait_cancel, set_wiphy_netns, set_channel, set_wds_peer, Unknown command (84), Unknown command (87), Unknown command (85), Unknown command (89), Unknown command (92), testmode, connect, disconnect
      Supported TX frame types (IBSS, managed, AP, AP/VLAN, mesh point, P2P-client, P2P-GO, Unknown mode (10), each): 0x00 0x10 0x20 0x30 0x40 0x50 0x60 0x70 0x80 0x90 0xa0 0xb0 0xc0 0xd0 0xe0 0xf0
      Supported RX frame types: IBSS: 0x40 0xb0 0xc0 0xd0; managed: 0x40 0xd0; AP: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0; AP/VLAN: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0; mesh point: 0xb0 0xc0 0xd0; P2P-client: 0x40 0xd0; P2P-GO: 0x00 0x20 0x40 0xa0 0xb0 0xc0 0xd0; Unknown mode (10): 0x40 0xd0
      Device supports RSN-IBSS.
      HT Capability overrides: MCS: ff ff ff ff ff ff ff ff ff ff; maximum A-MSDU length; supported channel width; short GI for 40 MHz; max A-MPDU length exponent; min MPDU start spacing
      Device supports TX status socket option.
      Device supports HT-IBSS.

    Read the article

  • Sharing banner on 3rd party websites, concerned about limited resources

    - by Omne
    I've made a banner for my website and I'm planning to ask my followers to share it on their websites to help improve my rank. My website is hosted on GAE, the banners are less than 5 kB each, and I must say that I don't want to pay for extra bandwidth. I've read the Google App Engine quotas page, but honestly I don't understand any of it. Would you please tell me which table/figure on that page should concern me? Also, do you think it's wise to host such banners, which are going to end up on third-party websites, on GAE? Or am I safer using a free online service like Google Picasa?
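
    (Editorial note, not part of the original question: the bandwidth arithmetic is simple enough to sketch. The view count and the free-quota figure below are assumptions, not values from the question or from Google's documentation.)

      BANNER_KB = 5                 # banner size, from the question
      VIEWS_PER_DAY = 100000        # hypothetical third-party page views per day
      FREE_QUOTA_GB_PER_DAY = 1.0   # assumed free outgoing-bandwidth quota; check current GAE docs

      used_gb = BANNER_KB * VIEWS_PER_DAY / (1024.0 * 1024.0)
      print("daily outgoing bandwidth: %.2f GB (assumed free quota: %.1f GB)"
            % (used_gb, FREE_QUOTA_GB_PER_DAY))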

    Read the article

  • Phishing alert but file never existed

    - by IMB
    I got an alert from Google Webmaster Tools. It says the following file was present on my host: example.com/~jhostgop/identity.php. I checked my files and it never existed at all. I've experienced this problem on two different hosts and domains, but the file never existed in my file system. It appears somebody out there is taking a random domain and appending /~jhostgop/identity.php to it. Google may have indexed those URLs, so now I get these false phishing alerts. Has anyone experienced this? Is it possible to prevent it?

    Read the article

  • Moving one site in Webmaster Tools to more than one site

    - by Towhid
    I have a question-and-answer site about immigration. I have now divided it into two sites: mysite.co.uk, about immigration to the UK, and mysite.com, with subdomains for every country, like australia.mysite.com, sweden.mysite.com, and so on. I have moved all the content from my first site into the .co.uk and .com sites and their subdomains to fill them. I know that Google will detect my two new sites as duplicates of the first one, which is very bad for SEO, and I don't think Google Webmaster Tools has a tool for this. So please guide me on how to fix this problem.

    Read the article

  • Create a filter to consider http://example.com/foo/bar as http://example.com/index.php/foo/bar

    - by magnetik
    I'm using URL rewriting to map http://example.com/foo/bar/ to http://example.com/index.php/foo/bar. I'm not linking the index.php/... URL anywhere, but for some reason some users arrive at the index.php URL. In Google Analytics I end up with a lot of duplicates, which makes it annoying to follow the traffic. I've looked at the advanced filters but I'm struggling to make them work. Can any regex and Google Analytics pro help me out?
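
    (Editorial note, not part of the original question: the filter itself has to be configured inside Google Analytics; the snippet below only illustrates, in Python, a regex that collapses the two URL forms into one, and the pattern is an assumption.)

      import re

      # Normalize "/index.php/foo/bar" to "/foo/bar" so both variants
      # are counted as the same page; the pattern is illustrative only.
      def normalize(path):
          return re.sub(r"^/index\.php(/|$)", "/", path)

      print(normalize("/index.php/foo/bar"))  # -> /foo/bar
      print(normalize("/foo/bar"))            # -> /foo/bar (unchanged)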

    Read the article

  • Dealing with blackhat SEO companies and low quality link building competitors [closed]

    - by Mikko Ohtamaa
    I have often faced a case where my client's competitors use blackhat SEO tactics: they hire an SEO company to do link building for their websites and products. Here is a typical example of a fake blog created only for link-building purposes: a very low-content article, http://marshallfab.com/fundus-camera-explained.html, in an obvious fake blog (no author information, partially machine-generated text, and every post exists solely for link building). Following the link you get to the promoted company page, http://www.patternless.com/, which, unsurprisingly, links to the SEO company's homepage in the footer, http://www.affordableseofl.com/, who are not shy about advertising their extremely aggressive SEO plan. Does Google have any feedback channel where one could submit cases like this, so that Google would punish the link builders? Is there any way to publicly shame these blackhat companies and damage their reputation?

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a csv file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one csv file that has every url with a 404, but also has every url that links to each one of them. Am I overlooking this functionality somewhere or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the csv output looks like now:

      URL,Response Code,News Error,Detected,Category
      http://www.abcdef.com/123.php,404,,11/12/13,Not found
      http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmasters, get all the url's that link to the page, and add that data to my spreadsheet. The output I would prefer:

      URL,Response Code,Linked From,News Error,Detected,Category
      http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
      http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
      http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
      http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
      http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
      http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact there are only 2 unique URL's now (like before) but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404's, but more about generating a report of what Google has listed as errors. Oftentimes, these errors aren't even valid anymore but I still need documentation to show that Google detected a problem and that problem is now fixed. Many of the "linked from" url's I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from url is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap or old page exist, but they still appear in my crawl error reports because they are cached resources.
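
    (Editorial note, not part of the original question: the desired report is essentially a join. The sketch below assumes a hypothetical linked_from.csv mapping each 404 URL to a linking page; errors.csv uses the columns shown above.)

      import csv

      # errors.csv: the 404 export (URL,Response Code,News Error,Detected,Category)
      # linked_from.csv: hypothetical two-column file mapping URL -> linking page
      links = {}
      with open("linked_from.csv", newline="") as f:
          for url, source in csv.reader(f):
              links.setdefault(url, []).append(source)

      with open("errors.csv", newline="") as f_in, open("report.csv", "w", newline="") as f_out:
          reader = csv.DictReader(f_in)
          fields = ["URL", "Response Code", "Linked From", "News Error", "Detected", "Category"]
          writer = csv.DictWriter(f_out, fieldnames=fields)
          writer.writeheader()
          for row in reader:
              # Emit one output row per (404 URL, linking page) pair.
              for source in links.get(row["URL"], [""]):
                  writer.writerow({**row, "Linked From": source})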

    Read the article

  • Page load speed's effect on crawl rate

    - by Sam Pegler
    We've noticed a big drop in the total pages crawled per day on our site. We have no control over the crawl rate in Google Webmaster Tools, so it's possible this has been changed by Google; however, it's a fairly large site and I wouldn't have thought the crawl rate would be decreased. What we have noticed, though, is a sizeable increase in page load times, which in my mind would be the cause. Can anyone confirm whether the crawl rate is directly correlated with page load time? It seems logical: longer page load time, fewer pages crawled. Any decent documentation on this would be appreciated; I don't normally have any input on SEO, so this is new to me.

    Read the article

  • Canonical links for huge websites

    - by Florin
    Let's say I have 5 products that are identical except for the product code, the product color specification, and the product image. The title, meta tags, and description are identical (by the way, the color is in a select form). Based on many factors, I made 4 of the products link canonically to the one that is the master. If the master becomes inactive or goes out of stock, one product from the other 4 will become the new master and the rest will become canonical to it. The question is: if a product switches from canonical to master like this, will the site suffer a penalty from Google, or will it work just fine? What will Google think about this strategy?

    Read the article

  • Can't connect to samba using openVPN

    - by Arthur
    I'm fairly new to using VPN. For a home project I'm running an OpenVPN server. This server runs within the network 192.168.2.0 with subnet mask 255.255.255.0. I can connect to this network using the IP range 5.5.0.0; I guess the subnet mask is 255.255.255.192, but I'm not really sure about that. When connected to my VPN network I can access the server via 5.5.0.1 and I can see the Samba shares created on that machine. However, I'm not allowed to connect to the Samba share. When I look at the Samba log for the computer that tries to connect, I see these messages:

      lib/access.c:338(allow_access) Denied connection from 5.5.0.132 (5.5.0.132)

    This is the share definition in /etc/samba/smb.conf:

      interfaces = 192.168.2.0/32 5.5.0.0/24
      security = user
      # wins-support = no
      # wins-server = w.x.y.z.
      // A LOT OF MORE SETTINGS AND COMMENTS
      hosts allow = 127.0.0.1 192.168.2.0/24 5.5.0.132/24
      hosts deny = 0.0.0.0/0
      browseable = yes
      path = [path to share]
      directory mask = 0755
      force create mode = 0755
      valid users = [a valid user, which i use to login with]
      writeable = yes
      force group = [the group i force to write with]
      force user = [the user i force to write with]

    This is the output of the ifconfig command:

      as0t0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
             inet addr:5.5.0.1  P-t-P:5.5.0.1  Mask:255.255.255.192
             UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:200  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

      as0t1  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
             inet addr:5.5.0.65  P-t-P:5.5.0.65  Mask:255.255.255.192
             UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:200  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

      as0t2  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
             inet addr:5.5.0.129  P-t-P:5.5.0.129  Mask:255.255.255.192
             UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
             RX packets:xxxx errors:0 dropped:0 overruns:0 frame:0  TX packets:xxxx errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:200  RX bytes:xxxx (xxxx MB)  TX bytes:12403514 (xxxx MB)

      as0t3  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
             inet addr:5.5.0.193  P-t-P:5.5.0.193  Mask:255.255.255.192
             UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
             RX packets:7041 errors:0 dropped:0 overruns:0 frame:0  TX packets:9797 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:200  RX bytes:xxxx (xxxx KB)  TX bytes:xxxx (xxxx MB)

      eth1   Link encap:Ethernet  HWaddr 00:0e:2e:61:78:21
             inet addr:192.168.2.100  Bcast:192.168.2.255  Mask:255.255.255.0
             inet6 addr: xxxx:xxxx:xxxx:xxxx:7821/64 Scope:Link
             UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
             RX packets:xxxx errors:0 dropped:0 overruns:0 frame:0  TX packets:xxxx errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000  RX bytes:xxxx (xxxx MB)  TX bytes:xxxx (xxxx MB)
             Interrupt:16 Base address:0x6000

      lo     Link encap:Local Loopback
             inet addr:127.0.0.1  Mask:255.0.0.0
             inet6 addr: ::1/128 Scope:Host
             UP LOOPBACK RUNNING  MTU:16436  Metric:1
             RX packets:xxxx errors:0 dropped:0 overruns:0 frame:0  TX packets:xxxx errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0  RX bytes:xxxx (xxxx MB)  TX bytes:xxxx (xxxx MB)

    Can anyone tell me what is going wrong? My server is running Ubuntu 12.04 LTS.

    Read the article

  • How to handle multiple domains correctly? [closed]

    - by Eric Itzhak
    Possible Duplicate: Could I buy a domain name to increase traffic to my site like this? I have a website with multiple keyword-based domains (2, actually), and both domains are common Google searches for the topic. What I'm interested in is handling the domains in such a way that Google recognizes the second domain with the same PageRank as the primary domain, so it will also appear on the first page. My question is how to do this correctly so that it helps SEO; I want the second domain to be on the first page, because the primary is on the first page for its keyword. Do I simply put a redirect in the index.php file? Or do a 301 redirect? Or change the .htaccess file? Or create a domain pointer in the control panel?

    Read the article

  • Is there a weight for html link?

    - by Questions
    I would like to know if there is any weight associated with an HTML link (as a backlink) when Google does crawling/indexing. Will 1, 2, and 3 ever be considered backlinks by Google?

      1. <a href="xyz.com">1</a>        // one-character link
      2. <a href="xyz.com"> </a>        // blank-space link
      3. <a href="xyz.com">!</a>        // special-character link
      4. <a href="xyz.com">keyword</a>  // meaningful word link

    I hope my question is understandable, and I guess this is the right forum; I don't know how to put it in other words. Thanks in advance.

    Read the article

  • How to troubleshoot problem with OpenVPN Appliance Server not able to connect

    - by Peter
    1) I have a Windows Server 2008 Standard SP2 box.
    2) I am running Hyper-V and have the OpenVPN Appliance Server virtual machine running.
    3) I have configured it as instructed; the only issue was that the legacy network adapter does not have the setting the instructions mention, "Enable spoofing of MAC Addresses". My understanding is that before R2, this was on by default.
    4) The server is running and the web interfaces look good.
    5) I am trying to connect from a Vista 64 box and cannot.

    5a) If I set it to UDP I am stuck at Authorizing, and the client log looks like:

      10/11/09 15:00:42: INFO: OvpnConfig: connect...
      10/11/09 15:00:42: INFO: Gui listen socket at 34567
      10/11/09 15:00:42: INFO: sending start command to instantiator...
      10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B?
      10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
      10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
      10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
      10/11/09 15:00:43: INFO: Processing PASSWORD.
      10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
      10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
      10/11/09 15:00:43: INFO: Got auth request from active_config from 0
      10/11/09 15:00:47: INFO: Sending Credentials....
      10/11/09 15:00:47: INFO: Sending 25 bytes for username.
      10/11/09 15:00:47: INFO: Sent 25 bytes for username.
      10/11/09 15:00:47: INFO: Sending 30 bytes for password.
      10/11/09 15:00:47: INFO: Sent 30 bytes for password.
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
      10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
      10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378

    5b) If I set the server up for TCP and try to connect, I get a loop of authorizing and reconnecting. The log looks like:

      10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
      10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
      10/11/09 15:00:43: INFO: Processing PASSWORD.
      10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
      10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
      10/11/09 15:00:43: INFO: Got auth request from active_config from 0
      10/11/09 15:00:47: INFO: Sending Credentials....
      10/11/09 15:00:47: INFO: Sending 25 bytes for username.
      10/11/09 15:00:47: INFO: Sent 25 bytes for username.
      10/11/09 15:00:47: INFO: Sending 30 bytes for password.
      10/11/09 15:00:47: INFO: Sent 30 bytes for password.
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
      10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
      10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378
      10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888
      ...

    Is there any way to turn on more robust logging on the server to understand what is happening? Any ideas on how to hunt this down?

    Read the article

  • How to include high resolution version of thumbnail

    - by neak
    I'm running a tube site that has a thumbnail for each post/video (running on WordPress, 500k posts). When I first created the thumbnails they were fairly large, around 640px wide each, and because of that I was seeing a lot of traffic from Google Images. After streamlining the site and resizing all of the thumbnails down to 170px (rather than scaling them down in the browser), I'm worried that Google isn't going to rank the images as highly as it would at a larger resolution. So, is there a way to include the higher-resolution versions and serve them to be indexed instead of the smaller ones?

    Read the article

  • webmaster tools 500 crawl error for asp faceted navigation that does not exist

    - by user19007
    I am getting 2,500 type-500 URL errors in Google Webmaster Tools. These pages are faceted navigation results that cannot be reached by a site visitor; these pages do not exist. We are using faceted navigation with the Volusion platform (ASP.NET, I think). I have specified URL parameters in Webmaster Tools so that Google will not try to index anything faceted, but this does not stop the errors from being generated. I am concerned about how this might affect SEO (bleeding PageRank). I can provide additional information if needed. I am not sure how to solve this; I have started down the path of creating 301s, but I'm having some difficulty there as well.

    Read the article

  • Cannot connect to website - SSL handshaking fails

    - by ravenspoint
    So I cannot connect to certain websites. Just a few; most are OK. The one I really care about is paypal.com. I have done the usual things:

    - Checked my etc/hosts
    - Flushed the DNS cache
    - Checked the firewall
    - Switched virus protection on and off
    - Switched ad blocking on and off
    - Pinged the sites

    Eventually, I decided to look at what curl is saying in detail:

      == Info: About to connect() to www.paypal.com port 443 (#0)
      == Info: Trying 66.211.169.2... == Info: connected
      == Info: SSLv3, TLS handshake, Client hello (1):
      => Send SSL data, 110 bytes (0x6e)
      0000: 01 00 00 6a 03 01 4f 6c aa 8c 57 2b 3d 1e 74 64 ...j..Ol..W+=.td
      0010: c1 27 25 a5 3a 12 7f 3f 41 0a 17 15 2e c9 67 7c .'%.:.?A.....g|
      0020: b3 e1 f6 9a db a9 00 00 2a 00 39 00 38 00 35 00 ........*.9.8.5.
      0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./.....
      0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................
      0050: 03 00 ff 01 00 00 17 00 00 00 13 00 11 00 00 0e ................
      0060: 77 77 77 2e 70 61 79 70 61 6c 2e 63 6f 6d www.paypal.com
      (hangs here for ever)

    This looks to me like PayPal is refusing to reply to the first SSL handshake. I don't know much about SSL, but comparing this to the output from a site that works for me seems to make it obvious:

      == Info: About to connect() to www.cibc.com port 443 (#0)
      == Info: Trying 159.231.80.200... == Info: connected
      == Info: SSLv3, TLS handshake, Client hello (1):
      => Send SSL data, 108 bytes (0x6c)
      0000: 01 00 00 68 03 01 4f 6c ad 6a 1f 67 d5 84 c4 4b ...h..Ol.j.g...K
      0010: 0d 49 ae d6 b9 5b c3 63 f9 48 aa 18 da 43 d1 32 .I...[.c.H...C.2
      0020: 47 ae 17 e5 cd e9 00 00 2a 00 39 00 38 00 35 00 G.......*.9.8.5.
      0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./.....
      0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................
      0050: 03 00 ff 01 00 00 15 00 00 00 11 00 0f 00 00 0c ................
      0060: 77 77 77 2e 63 69 62 63 2e 63 6f 6d www.cibc.com
      == Info: SSLv3, TLS handshake, Server hello (2):
      <= Recv SSL data, 74 bytes (0x4a)
      0000: 02 00 00 46 03 01 00 00 58 cf 26 e2 e1 65 db 11 ...F....X.&..e..
      0010: bc 6f 26 7b 3b 6d eb 14 5f ad 47 dd 86 ea 4d a3 .o&{;m.._.G...M.
      0020: fb 9f b7 2a 54 3e 20 5f 6b 04 5a 12 38 64 5d 18 ...*T> _k.Z.8d].
      0030: 65 9e e9 cd 61 eb 91 c1 16 25 61 30 bb 08 2a 78 e...a....%a0..*x
      0040: b8 ee b8 7e f2 65 6a 00 04 00 ...~.ej...
      == Info: SSLv3, TLS handshake, CERT (11):
      ... and so on - working nicely; eventually I get some nice HTML.

    Now I am really stuck. This has been going on for five days, so I am pretty sure that the problem is not with PayPal. But what on my system could be interfering with the SSL handshaking done by curl with this particular site? I suppose I could not be offering any certificates that PayPal accepts, but wouldn't I get a reply telling me so, or at least an error?
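
    (Editorial note, not part of the original question: the same handshake can be reproduced outside curl with Python's standard ssl module, which helps show whether the stall is specific to curl or system-wide.)

      import socket
      import ssl

      def try_handshake(host, port=443, timeout=10):
          # Attempt a TLS handshake and report either the negotiated protocol
          # or the error/timeout, mirroring what curl -v shows.
          context = ssl.create_default_context()
          try:
              with socket.create_connection((host, port), timeout=timeout) as sock:
                  with context.wrap_socket(sock, server_hostname=host) as tls:
                      print("%s: handshake OK, %s" % (host, tls.version()))
          except (OSError, ssl.SSLError) as exc:
              print("%s: handshake failed: %s" % (host, exc))

      for host in ("www.paypal.com", "www.cibc.com"):
          try_handshake(host)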

    Read the article

  • Using an old penalized domain for a new website

    - by MiladSafaei
    I had a website with two domains like these: firstdomain.com and first-domain.com. The main domain was first-domain.com, and the other one was 301-redirected to it. The main domain got a Google Penguin penalty some months ago. I uploaded the site to a new domain and removed the old domain from Google's index using the remove-URL tool in Webmaster Tools. Now I want to use firstdomain.com (which was redirected to the penalized domain) for a new and fresh website with new and perfect content. Is it probable that the history of this domain will affect the new website and harm its ranking?

    Read the article

  • Will my site containing duplicate content be accepted into AdSense?

    - by user5858
    I have a new site, just over 6 months old, with 50 unique visitors daily. It has a good amount of duplicate pages which are not copyrighted; for example, I've copied related companies' product FAQs "as is" onto the site. Moreover, I'm not supposed to modify a company's product FAQs. I fear my login may be banned by AdSense if I submit it. So I want to know: 1) whether I can submit it for an AdSense account; 2) whether Google can penalize me, and in what way; 3) how would Google come to know that the duplicate content on my site is not copyrighted?

    Read the article
