Search Results

Search found 23359 results on 935 pages for 'google widget'.


  • Compiling to a binary from source

    - by Chords
    I'm using wkhtmltopdf on a 64-bit Linux server and I'm running into version problems. As the downloads list shows, pre-compiled binaries are slim pickings: http://code.google.com/p/wkhtmltopdf/downloads/list I started with version 0.9.9, which has a few bugs, then upgraded to 0.11.0 RC 1 only to find a slew of new problems, notably this one: http://code.google.com/p/wkhtmltopdf/issues/detail?id=730 I think 0.10 RC 2 would work, and the thread above suggests that compiling from source fixes the error I'm getting, but I don't know how to do that. Can anyone explain how I can build a static binary myself, or would anyone be willing to build and post one for the countless people waiting for this fix?
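
    A minimal build sketch, assuming a Debian/Ubuntu-style server; the package names, repository URL, and build steps are illustrative (the project later moved from code.google.com to GitHub), and the 0.10/0.11-era releases compiled against a patched Qt, so expect a long build:

        # Toolchain and common wkhtmltopdf build dependencies (names vary by distro)
        sudo apt-get install build-essential openssl libssl-dev libxrender-dev libxext-dev libfontconfig1-dev
        # Fetch the source; check out the tag matching the version you want, e.g. 0.10.0_rc2
        git clone https://github.com/wkhtmltopdf/wkhtmltopdf.git
        cd wkhtmltopdf
        qmake && make -j4      # the qmake project builds the wkhtmltopdf binary
        sudo make install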


  • How to statically configure DNS servers on a Cisco router when the WAN interface uses DHCP?

    - by Massimo
    I have a Cisco router (model 887VA, IOS 15.4) used to connect a LAN to the Internet via ADSL. The WAN interface uses DHCP:

        interface ATM0.1 point-to-point
         ip address dhcp

    I need the router to use a statically-defined DNS server for name resolution:

        ip name-server A.B.C.D

    However, the router insists on using the DNS servers supplied by the ISP via DHCP:

        Router#ping www.google.com
        Translating "www.google.com"...domain server (<ISP DNS>) [OK]
        Type escape sequence to abort.
        Sending 5, 100-byte ICMP Echos to 173.194.116.208, timeout is 2 seconds:
        !!!!!
        Success rate is 100 percent (5/5), round-trip min/avg/max = 44/45/48 ms

    How can I tell the router to ignore the ISP-supplied DNS servers and only use the statically-configured one?
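
    A hedged sketch of the usual approach: tell the IOS DHCP client not to request (and therefore not to install) the DNS-related options, leaving the static ip name-server in charge. Verify the exact keywords on your release with ip dhcp client request ?:

        interface ATM0.1 point-to-point
         no ip dhcp client request dns-nameserver
         no ip dhcp client request domain-name
        !
        ip name-server A.B.C.D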


  • What is the recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is gzipping content from the webserver. Google recommends:

        Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress a requested object when sending it to the end user: the minimum size is 860 bytes."

    My reply: what is the reason Akamai's minimum size is 860 bytes, and why, for example, is this not the case for files Akamai serves for Facebook? Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls under 860 bytes.

    Akamai's response: the reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain; (2) objects under 860 bytes can be transmitted in a single packet anyway, so there isn't a compelling reason to compress them.

    So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this lower, closer to the 150-byte limit: just to save on bandwidth costs, or is there a performance gain in doing so?
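
    The low end of the threshold is easy to check empirically: gzip adds a fixed header and trailer (about 18 bytes) plus deflate framing, which can swallow the savings on tiny payloads. A quick shell sketch with an illustrative payload:

        # Compare raw vs. gzipped size for a small AJAX-style payload
        printf '{"status":"ok","items":[1,2,3],"msg":"hello"}' > small.json
        gzip -k -9 small.json                  # -k keeps the original (gzip 1.6+)
        ls -l small.json small.json.gz         # the .gz is often larger for inputs under ~150 bytes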


  • Squid/Kerberos authentication with only Linux

    - by user28362
    Hi, I would like to know if it is possible to have a Windows XP machine authenticate to Squid (on Linux) using Kerberos, without an Active Directory domain. I only want to create a Kerberos ticket on the client side, which should give the client access to Squid (using IE). I have only found tutorials about configuring AD with Squid, not an environment with only Linux servers. Thanks.

    Update: the Kerberos setup is done correctly; the proxy and client can both get tickets. But in the browser (FF/IE) I get:

        ERROR Cache Access Denied
        While trying to retrieve the URL: http://www.google.com/
        The following error was encountered:
        * Cache Access Denied.
        Sorry, you are not currently allowed to request: http://www.google.com/
        from this cache until you have authenticated yourself.

    And in the Kerberos helper log I get:

        squid_kerb_auth: Got 'YR ElRNTVMTUABBAABAB4IIogAAAAAAAAAAAAAAAAAAAAAFASgDAAAADw==' from squid (length: 59).
        squid_kerb_auth: parseNegTokenInit failed with rc=101
        squid_kerb_auth: received type 1 NTLM token

    This last message is strange, as I didn't configure NTLM. It looks like the browser is using the wrong authentication method.
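
    The NTLM token in the log usually means the browser fell back from Negotiate/Kerberos; IE in particular generally needs the proxy configured by hostname (not IP) before it will attempt Kerberos. A minimal squid.conf sketch, assuming the helper path and the HTTP/... service principal are adjusted to your install:

        auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
        auth_param negotiate children 10
        auth_param negotiate keep_alive on
        acl kerb_auth proxy_auth REQUIRED
        http_access allow kerb_auth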


  • Mystery Internet traffic to port 445

    - by Ben Collver
    Recently, I noticed traffic from the office network to TCP port 445 on the Internet [a]. Below are the Linux firewall log entries for Facebook's network [b] and Google's network [c]. I would like to identify the source of this traffic. My first guess is that Facebook and Google might be using multiple TCP ports for SSL load balancing, but I could not confirm this from the web proxy logs. What else might it be?

    [a] http://support.microsoft.com/kb/204279

    [b] Sep 4 08:30:03 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.131 DST=69.171.237.34 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=14287 DF PROTO=TCP SPT=51711 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0

    [c] Aug 28 06:02:41 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.115 DST=173.194.33.47 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=4558 DF PROTO=TCP SPT=49294 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0
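
    One common explanation is Windows machines chasing UNC paths or file:// links that happen to resolve to Internet addresses, which makes the client try SMB on port 445. Until the source is found, the Linux firewall can log and drop it; an illustrative iptables sketch (eth2 is the outbound interface from the logs above):

        # Log, then reject, outbound SMB from the office network
        iptables -A FORWARD -o eth2 -p tcp --dport 445 -j LOG --log-prefix "SMB-out: "
        iptables -A FORWARD -o eth2 -p tcp --dport 445 -j REJECT --reject-with tcp-reset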


  • My website was infected; I restored a backup of the uninfected files. How long will it take to be unmarked as dangerous?

    - by Cyclone
    My website www.sagamountain.com was recently infected by a malware distributor (or at least I think it was). I have removed all external content: Google ads, Firefly chat, etc. I uploaded a backup from a few weeks ago, from before the issue, and I patched the SQL injection hole. Now, how long will it take to be unmarked as dangerous, and where can I contact Google? I am not sure if this is the right place to post this, but since it may have been a server issue I may as well ask. Can sites have base64 code injected via a virus on the whole server, or does that happen only via SQL injection? Is there an online virus scanner that can scan my page and tell me what is wrong? Thanks for the help; viruses freak me out.


  • OS X is based on the Codex Gigas, the Devil's Bible; Android OS 4.4 on Dio/Ra, an Egyptian deity. Why is that?

    - by user215250
    GUESS WHO? The Internet and all computers are based on a mathematical number system that seems to be 3, 4, 6, 8 and 10. The HTTP is 888P and UTF-8 and Windows 8 and 8 gigas of RAM (88) on a 64biTOS X-10 or Aten (satan). Why is it allowed to be so evil, and who all knows about it? Is this activity illegal, and should I sue these companies for being involved in satanic practices? iC3 iC3 iC3 I do see XP (X) Chi and (P) Rho, a monogram and symbol for Christ, consisting of the superimposed Greek letters. The X is ten, The X code, OS X (O Satan) and codex, The Gigas, the Devil's Bible, and the P is Payne, The House of Payne in which God dwells. Windows 8, Google Android and Apple's OS X are the foundation on which we operate on the Internet and our mobile devices. Why is it that these 3 companies have chosen to base their OSes on such evil? Windows 8 is windows hate H8, HH and H8. Said to be the Devil. Google's (UGLE) M, the Masonic M, behind Android OS 4.4 is Dio (R) DNA O Sin, and 44 is the Devil's name in Twain's The Mysterious Stranger. Apple's evil (i) OS X (ou-es-ten), O Satan, them all beat (B8), considering Apple put their first product on the market for $666.66. The Holy Grail of computers, they say. Your Excellency, Lord and King OS2 Eisus Uni Peg Unix: The Unicorn Pegasus Jesus Christ


  • Is it necessary to have your blog title and description in H1 and H2?

    - by Saif Bechan
    I have read an article stating that it is not necessary to have your blog title and description on your website at all: just put the titles of the posts in h1, on both the index and the post page; on the post page, start your different sections with h2; and start widget headers with h3. The title and description are usually part of the logo image anyway. I have looked at the source of my favorite blog, http://net.tutsplus.com, and I see they do the same. Is this recommended?


  • Resources on concepts/theory behind GUI development?

    - by ShrimpCrackers
    I was wondering if there were any resources that explain concepts/theory behind GUI development. I don't mean a resource that explains how to use a GUI library, but rather how to create your own widgets. For example a resource that explains different methods on how to implement scrollable listboxes. I ask because I have an idea for a game tool where I would like to create my own widgets and let users drag and drop them onto some kind of form. How do GUI libraries usually draw widgets? I'm not sure if reskinning widgets from a GUI library fits my needs, since widget behavior needs to be dynamic based on user interaction.


  • Visage

    - by Geertjan
    Raj, the Chennai JUG lead, together with others from that JUG, is interested in Visage, the JavaFX script language closely associated with Stephen Chin. He sent me the related lexer and parser, and I started by having a look at them in the new version of ANTLRWorks being developed by Sam Harwell (who demonstrated it very effectively during JavaOne). Notice how the lexer and parser are shown in a tree structure, as well as in a cool syntax diagram. Next, I downloaded a bunch of JARs so that packages such as "com.sun.tools.mjavac" can be used; these are Visage-specific packages that aren't found anywhere except in the location below: http://code.google.com/p/visage/wiki/GettingStarted It turns out that there's also a Visage NetBeans plugin out there: http://code.google.com/p/visage/source/browse/?repo=netbeans-plugin Rather than recreating everything from scratch, i.e., generating ANTLR Java classes from the lexer and parser, I copied a lot of stuff from the site above, and a file Raj sent me now shows basic syntax coloring. For anyone wanting to seriously support Visage in NetBeans IDE, I recommend downloading the existing Visage NetBeans plugin above rather than creating everything yourself from scratch, and then figuring out how to use that code: add the JARs I pointed to above and work on its build.xml file. That could be frustrating in the beginning, but there's no point in recreating everything if it already exists.


  • How to Enable User-Specific Wireless Networks in Windows 7

    - by The Geek
    Wireless network settings in Windows 7 are global across all users, but there’s a little-known option that lets you switch them to per-user, so each user has access to only the networks they are allowed to connect to. Here’s how it all works. How is this useful? Maybe you want to prevent a particular user from accessing the internet: if you don’t give them the wireless password, they won’t be able to get online. This could be very useful if you’ve got mini-people playing games on the family PC, but you don’t want them getting online.
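
    The per-user switch itself lives in the Manage Wireless Networks dialog, but profiles can also be handled from an elevated command prompt. A hedged netsh sketch: the profile name and file path are illustrative, and the exported filename varies by Windows version and interface name:

        netsh wlan export profile name="HomeWiFi" folder=C:\Temp
        netsh wlan delete profile name="HomeWiFi"
        netsh wlan add profile filename="C:\Temp\Wireless Network Connection-HomeWiFi.xml" user=current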


  • Are there video cameras which geotag individual frames?

    - by Grzegorz Adam Hankiewicz
    I'm looking for a way to record live video with the specific requirement of having each frame georeferenced with GPS. Right now I'm using a normal video camera plus a PDA+GPS that records the position, but it's difficult to sync the two, and sometimes I've forgotten to turn on the PDA+GPS, or it has failed for some reason, and all my video has been useless. Using Google I found that about two years ago a company named Seero produced such video cameras and software, but apparently the domain doesn't exist any more and I only find references to it on other pages. Does somebody know of any other product? I need to record this video in HD and have some way to export the frame positions to Google Maps or other GIS software, so that I can click on the map and see what was being recorded in the video at that point. One GPS position per second is precise enough; intermediate frames of the video stream can be interpolated.


  • Cannot connect to the Internet with Internet Explorer

    - by user428368
    I was using Mozilla Firefox for browsing and it works OK. But now, when I try to open any website using IE8, it says "Internet Explorer cannot display the webpage" (and Mozilla still works). I ran into this problem because I wanted to install Google Earth, but the installer says the program cannot access the server, so Google suggested opening one of three links in IE, and that doesn't work either. By the way, I'm connected to the Internet via LAN, but in IE under Internet Options, Connections, there is nothing: not a single connection listed. People, please help.


  • Connect to an SSH server through port 80 via an HTTP proxy?

    - by im_chc
    Hi, please help: I want to connect to my SSH server at home. However, I'm behind a corporate (CORP) firewall, which blocks almost all ports (443, 22, 23, etc.). But port 80 seems not to be blocked, because I am able to surf the web after I log in (i.e., IE is set to CORP's proxy server; starting IE shows the CORP intranet portal; typing in google.com pops up a dialog for userid + password; after login I can surf without restrictions). My SSH server listens on 443. My question is: is there a way to connect from a computer behind the CORP firewall to the SSH server through port 80, with the SSH server still listening on port 443? Changing the SSH server to listen on port 80 is not an option, because my home ISP blocks 80. Can I use a public proxy which listens on 80? After some research on Google I found something called "connect to SSH through an HTTP proxy" using the Corkscrew software. Is it useful, or is there some other way to solve the problem?
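
    Corkscrew is built for exactly this: it tunnels ssh through an HTTP proxy's CONNECT method. A minimal sketch with illustrative proxy and host names; note that many corporate proxies only allow CONNECT to port 443, which is precisely why keeping sshd on 443 helps:

        # ~/.ssh/config
        Host home
            HostName myhome.example.com
            Port 443
            ProxyCommand corkscrew proxy.corp.example.com 80 %h %p ~/.ssh/proxyauth

        # ~/.ssh/proxyauth holds the proxy credentials as username:password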


  • Why do apt-get and wget fail on my server when ping is working?

    - by klox
    Yesterday my server was still OK, but today after trying sudo apt-get update I got an error partway through the update process. I tried:

        sudo rm /var/lib/apt/lists/* -vf

    then tried updating again, but it did not solve my problem; the same error still appears. I checked my Internet connection by pinging google.com and got:

        PING google.com (74.125.235.40) 56(84) bytes of data.
        From 136.198.117.254: icmp_seq=1 Redirect Network(New nexthop: fw1.jvc-jein.co.id (136.198.117.6))
        64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=1 ttl=53 time=20.6 ms
        64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=2 ttl=53 time=18.2 ms
        64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=3 ttl=53 time=33.0 ms
        64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=4 ttl=53 time=30.0 ms
        64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=5 ttl=53 time=28.1 ms

    Some sites say it may be caused by the getdeb server being down. Trying to install a package shows the actual error:

        jeinqa@SVRQAR:~$ sudo apt-get install pastebinit
        Reading package lists... Error!
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_restricted_binary-amd64_Packages
        E: The package lists or status file could not be parsed or opened.

    And sudo ufw status verbose reports:

        Status: inactive
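
    The MergeList error means a downloaded package list is corrupt, not that the network is down. A commonly suggested recovery sketch; clearing the partial directory matters, which a plain rm of the lists can miss:

        sudo rm -rf /var/lib/apt/lists/*
        sudo rm -rf /var/lib/apt/lists/partial/*
        sudo apt-get clean
        sudo apt-get update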


  • Power cut during Ubuntu upgrade to 10.04 - boots to command line, apt-get and dpkg do not work.

    - by Macha
    I was upgrading Ubuntu to 10.04 when a trip switch cut power to the computer. When it was restarted, it booted into a command-line prompt. Google tells me to try:

        sudo dpkg --configure -a

    This gets me a lot of output that ends with a list of packages. I can't tell you what the output is: piping it to more/less does not work (it all still scrolls by and moves to the next prompt), and redirecting it to a file just results in an empty file. Google also suggested:

        sudo apt-get install -f

    This also didn't work. Is a fresh install the only solution at this point?
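
    A hedged recovery sequence for an interrupted upgrade. One plausible explanation for the empty file is that the messages went to stderr, which a plain > redirect does not capture:

        sudo dpkg --configure -a 2>&1 | tee /tmp/dpkg.log   # capture stdout and stderr together
        sudo apt-get install -f                             # then retry the dependency fix
        sudo apt-get dist-upgrade                           # resume the interrupted upgrade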


  • How to create a robots.txt for a domain that contains international websites in subfolders?

    - by aaandre
    Hi, I am working on a site that has the following structure:

        site.com/us - US version
        site.com/uk - UK version
        site.com/jp - Japanese version
        etc.

    I would like to create a robots.txt that points the local search engines to a localized sitemap page and has them exclude everything else from the local listings. So google.com (US) will index ONLY site.com/us and take into consideration site.com/us/sitemap.html; google.co.uk will index only site.com/uk and site.com/uk/sitemap.html; and the same for the rest of the search engines, including Yahoo, Bing, etc. Any idea on how to achieve this? Thank you!
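
    One caveat worth stating up front: robots.txt is a single per-host file and cannot vary by search engine locale, but it can advertise every localized sitemap, with geotargeting then handled per subfolder in each engine's webmaster tools (Google Webmaster Tools supports this). An illustrative site.com/robots.txt:

        User-agent: *
        Allow: /

        Sitemap: http://site.com/us/sitemap.xml
        Sitemap: http://site.com/uk/sitemap.xml
        Sitemap: http://site.com/jp/sitemap.xml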


  • How to prevent duplication of content on a page with too many filters?

    - by Vikas Gulati
    I have a webpage where a user can search for items based on around six filters. Currently the page is implemented with one filter as the base filter (the part of the URL that gets indexed) and the other filters as hash URLs (which won't get indexed). This keeps duplication down. Something like:

        example.com/filter1value-items#by-filter3-filter3value-filter2-filter2value

    As you can see, only one filter is within reach of the search engine while the rest are hashed; this way I can have six pages. The problem is that I expect users to search with two filters at times as well. From my analysis with the Google Keyword Analyzer, a fair number of users might use two filters in conjunction while searching. So how should I go about it? Having all the filters as part of the URL would simply explode the number of pages, while sticking to the current scheme won't let me target those users. I was thinking of going with at most two base filters and the rest as part of the hash URL, but the only thing stopping me is that it would lead to duplication of content, per Google Webmaster Tools' suggestions on URL structure.


  • Cheapest way to go for somebody who wants to accept payments, but won't be accepting hundreds of orders?

    - by blockhead
    I have a client who lectures and wants to sell spots to his lectures online. I would prefer to set him up with a solution that lets me collect billing information on his own site. My e-commerce experience is with solutions like Authorize.net, but that does not seem cost-effective here, since I can't imagine he's making a huge profit off of this; I'm afraid he would lose money to the costs of Authorize.net (or any payment gateway, for that matter). I could use Google Checkout or PayPal Express, but those would require leaving his site (although at a glance it looks like I could just submit to Google Checkout's form from my server, and likely PayPal's as well, though I don't know if that is against their terms of service). What is the most cost-effective solution for accepting credit card payments in this situation?


  • Ask the Readers: What Do You Have Set as Your Homepage?

    - by Mysticgeek
    When it comes to setting a homepage in your browser, it's really a matter of personal preference. Today we want to know what you have set as your homepage in your favorite browser.

    Browser Homepage

    There are a lot of search sites that let you customize your homepage, such as iGoogle, MSN, and Yahoo. Some people enjoy having a homepage set up as a dashboard of sorts, while others like simplicity and set it to Google or leave it blank. Not surprisingly, in a small office or corporation you will see a lot of workstations set to MSN or the company SharePoint site. Unfortunately, a lot of free software tries to change your default homepage as well, as when installing Windows Live Essentials. Make sure to avoid this by not rushing through software install wizards, and carefully opt out of such options. What is set as your homepage in your favorite web browser, both for work and at home? Leave us a comment and join in the discussion!


  • Why is Ubuntu offline (except torrents) while Windows is online?

    - by Fahim al Islam
    I am using a static wired connection. Everything was perfect, but suddenly, starting a few hours ago, I can't access any website. Dropbox and Ubuntu One also can't connect, and ping requests are unsuccessful, yet I can download through torrents. I am not running torrent downloads and browsing at the same time, so I don't think torrents are using all the bandwidth. One important point is that this connection works perfectly under Windows on this same PC (it is dual-boot). I have tried what izx suggested (using sudo sh -c 'echo nameserver 8.8.8.8 >> /etc/resolv.conf'), but I'm facing the same problem again. Now I can't even ping 8.8.8.8 or google.com, though I can ping 74.125.228.2 (which is a Google IP address). I can't understand what's happening or why. I'm new to this website and many of its rules are unknown to me, so please excuse my mistakes. Looking forward to help from anyone. Thanks to all.
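
    A short diagnostic sketch to separate DNS trouble from routing trouble, using the addresses already mentioned in the question:

        ping -c 3 74.125.228.2    # a Google IP: works, so the link and default route are up
        ping -c 3 8.8.8.8         # fails here, so some destinations are being filtered
        cat /etc/resolv.conf      # confirm which nameservers are actually in use

    When one known-good IP pings but 8.8.8.8 and all domain lookups fail, an upstream filter or captive portal that Windows has already satisfied is a plausible suspect.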


  • Which e-commerce platform works well with Flash product customization plus social features?

    - by Artur
    What's the best platform out there that is flexible enough to easily integrate the following?

    Custom Flash app. I would like customers to: (1) select a t-shirt from a gallery of artists; (2) customize it (using a Flash tool I created); (3) select a t-shirt size; (4) order it. All this Flash widget does is generate a JPG on the server; the e-commerce app should assign it to that order/customer and add it to the shopping cart.

    Social features. Customers should also be able to comment on the t-shirts and artist bios.

    I was thinking of trying WordPress plugins like Shopp, GetShopped, or Cart66, then BuddyPress for the social features. Or is Magento a better choice? Thanks!


  • Does ssh-copy-id overwrite previous keys?

    - by decker
    I haven't yet found any definitive answer on this using Google. It seems like the answer is no, but I need to know for sure before I go ahead and do it: does ssh-copy-id append the key to authorized_keys, or does it overwrite the previous keys? Thanks.

    Addendum: So the answer is right there in the man page. Go figure. I guess the question can at least help fellow Google jockeys who get a little too used to googling and finding tutorials (that often explain things in layman's terms for those of us who have only ever used Windows).
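
    For the record, ssh-copy-id appends to authorized_keys; it does not overwrite. A quick way to see this for yourself (host and key path are illustrative):

        ssh user@host 'wc -l ~/.ssh/authorized_keys'    # count entries before
        ssh-copy-id -i ~/.ssh/id_rsa.pub user@host
        ssh user@host 'wc -l ~/.ssh/authorized_keys'    # exactly one more line afterwards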


  • How can I keep websites from knowing where I live?

    - by D Connors
    This question is about practicality and convenience, not security. I live in Brazil and, apparently, every single website I visit knows it. Usually that's OK, but quite a few sites don't use that information sensibly. For instance: Bing keeps thinking that Brazilian pages are far more relevant to me than American ones (they're not); Google.com always redirects me to google.com.br; Microsoft automatically sends me to horribly translated support pages in Portuguese (which would just be easier to read in English). These are just a few examples. Usually it's stuff I can live with (or work around), but some of them are plain irritating. I have geolocation disabled in Firefox, so I guess they're getting this information either from my IP or from Windows itself (which I bought here). Is there a way to avoid this? Either tell them nothing, or make them think I live somewhere else? Thanks


  • Reviewing an advertiser / ads

    - by Joyce Babu
    I recently got a direct ad campaign for one of my sites. The advertiser agreed to my price without much bargaining. No contract was signed, but I had specified pre-payment as part of our agreement. The advertiser is not a major network; currently they are showing affiliate ads. Some things about this deal seem fishy to me:

        - The amount was deposited directly into our account, not transferred from their company account.
        - Just below the visible ad, I can see a hidden iframe which contains the flight search widget of a major airline (could this be cookie stuffing?).
        - They are contacting me from a Gmail account.
        - They did not insist on a signed contract.

    How can I ensure that the advertiser is legitimate and is not using the ad slot for illegal purposes?

