Search Results

Search found 26801 results on 1073 pages for 'google chrome extensions'.

Page 423/1073 | < Previous Page | 419 420 421 422 423 424 425 426 427 428 429 430  | Next Page >

  • Is there an OpenID demo server out there?

    - by billpg
    Hi everyone. I'm doing some experiments with adding OpenID to something I'm working on, and I'd like to test out a few providers. Is there a server out there that will go through the OpenID login process (the same way that Stack Overflow does) and show me all the information the provider returns? I imagine it would work like this:
    1. I go to example.com and type in https://www.google.com/accounts/o8/id
    2. example.com bounces me to Google.
    3. I log in.
    4. Google asks me to confirm that I allow example.com access to everything.
    5. Google bounces me back to example.com.
    6. example.com tells me my OpenID, email address, and anything else it's got.
    Does such a thing already exist?
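    For illustration, the flow described above maps onto an OpenID consumer library in roughly two calls: one to build the redirect (steps 1-2) and one to verify what comes back (steps 4-6). A rough Python sketch, assuming the python-openid package and using example.com as a placeholder for wherever the test page would live:

        from openid.consumer import consumer          # assumes the python-openid package
        from openid.store.memstore import MemoryStore

        session = {}                                   # would normally live in the web session
        rp = consumer.Consumer(session, MemoryStore())

        # Steps 1-2: discover the provider and build the redirect to Google
        auth_request = rp.begin("https://www.google.com/accounts/o8/id")
        print(auth_request.redirectURL(realm="http://example.com/",
                                       return_to="http://example.com/openid/return"))

        # Steps 4-6: back at /openid/return, hand the query parameters to complete()
        # response = rp.complete(query_args, "http://example.com/openid/return")
        # if response.status == consumer.SUCCESS:
        #     print(response.getDisplayIdentifier())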

    Read the article

  • Wireless connection by using command screen

    - by Amadeus
    I installed Ubuntu 12.04 armhf on my beagleboard-xm and am now trying to connect to a wireless network. First, I checked whether I can scan for available networks:

        ubuntu@arm:~$ iwlist scan
        lo        Interface doesn't support scanning.
        usb0      Interface doesn't support scanning.
        wlan0     Scan completed :
                  Cell 01 - Address: EA:7D:EF:60:C9:0B
                            Channel:1
                            Frequency:2.412 GHz (Channel 1)
                            Quality=70/70  Signal level=-23 dBm
                            Encryption key:on
                            ESSID:"ghostrider"
                            Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
                                      24 Mb/s; 36 Mb/s; 54 Mb/s
                            Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s
                            Mode:Ad-Hoc
                            Extra:tsf=000000005a1ab50e
                            Extra: Last beacon: 6242ms ago
                            IE: Unknown: 000A67686F73747269646572
                            IE: Unknown: 010882848B962430486C
                            IE: Unknown: 030101
                            IE: Unknown: 06020000
                            IE: Unknown: 2A0100
                            IE: Unknown: 2F0100
                            IE: Unknown: 32040C121860
                            IE: Unknown: 2D1A2C181BFF00000000000000000000000000000000000000000000
                            IE: Unknown: 3D16010800000000FF000000000000000000000000000000
                            IE: Unknown: DD09001018020000000000

    Then I edited the /etc/network/interfaces file to the following:

        root@arm:/etc/wpa_supplicant# cat /etc/network/interfaces
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet dhcp
        # Example to keep MAC address between reboots
        #hwaddress ether DE:AD:BE:EF:CA:FE

        # WiFi Example
        auto wlan0
        iface wlan0 inet dhcp
            wpa-ssid "ghostrider"
            wpa-psk "b34d373eb2fb836a43b0afffe783c7d0af694724506c9e77b06d1021302905bf"

    But I still cannot connect to the wireless network:

        root@arm:/etc/wpa_supplicant# iwconfig
        lo        no wireless extensions.
        usb0      no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=20 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Encryption key:off
                  Power Management:on
        eth0      no wireless extensions.

        root@arm:/etc/network# ifup wlan0
        Failed to bring up wlan0.

    What is wrong? Should I change any other files as well? I thought this was enough. By the way, if you are curious where that wpa-psk came from:

        zero@ghostrider:~$ wpa_passphrase ghostrider 34bddf67c2
        network={
            ssid="ghostrider"
            #psk="34bddf67c2"
            psk=b34d373eb2fb836a43b0afffe783c7d0af694724506c9e77b06d1021302905bf
        }

    I will appreciate any effort to help. Regards, Amadeus

    PS: I also tried to connect manually:

        root@arm:/etc/network# iwconfig wlan0 essid ghostrider key s:34bddf67c2

    But this did not solve my problem either.
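    As an aside on that wpa-psk value: wpa_passphrase derives it from the passphrase and the SSID with PBKDF2-HMAC-SHA1 (4096 iterations, 256-bit output), so the same key can be reproduced independently. A minimal Python sketch of that derivation, using the SSID and passphrase from the question (this only shows where the key comes from; it does not address the association failure, and note the scan above reports the cell in Ad-Hoc mode, which a plain managed-mode wpa-ssid/wpa-psk stanza would not join):

        import hashlib

        def wpa_psk(ssid, passphrase):
            # WPA/WPA2-PSK: PBKDF2(HMAC-SHA1, passphrase, ssid, 4096 iterations, 32 bytes)
            raw = hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
            return raw.hex()

        # Should print the same 64-character hex key that wpa_passphrase produced.
        print(wpa_psk("ghostrider", "34bddf67c2"))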

    Read the article

  • Wireless problems on HP

    - by Sat93
    I'm not able to enable wireless using the hardware switch on my HP ProBook 4430s. Because of this, the Enable Wireless option is greyed out and I cannot enable it. The greyed-out option can be seen in the screenshot below. The results of iwconfig on my system are as follows:

        lo        no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:off/any
                  Mode:Managed  Access Point: Not-Associated  Tx-Power=off
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off
        eth0      no wireless extensions.

    I also tried the following:

        sudo ifconfig wlan0 up

    but I got this error:

        SIOCSIFFLAGS: Operation not possible due to RF-kill

    The result of sudo rfkill list all on my system is:

        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes
        1: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        2: hp-bluetooth: Bluetooth
            Soft blocked: no
            Hard blocked: no
        3: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no

    How do I fix this problem? Thank you!
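    The rfkill output is the telling part: phy0 is hard blocked, meaning the block comes from the hardware switch or its firmware hook, and rfkill unblock can only clear soft blocks. The same information the rfkill tool prints is exposed under /sys/class/rfkill, and a small Python sketch can read it directly (assuming a kernel that provides the name, type, soft and hard attributes there):

        import glob, os

        # List rfkill devices and their block state, roughly what `rfkill list all` shows.
        for dev in sorted(glob.glob("/sys/class/rfkill/rfkill*")):
            def read(attr):
                with open(os.path.join(dev, attr)) as f:
                    return f.read().strip()
            soft = "yes" if read("soft") == "1" else "no"
            hard = "yes" if read("hard") == "1" else "no"
            print("%s: %s (%s)  soft blocked: %s  hard blocked: %s"
                  % (os.path.basename(dev), read("name"), read("type"), soft, hard))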

    Read the article

  • 404s on password protected content

    - by tjb1982
    I'm new to WordPress and to SEO generally, but we've been running into problems with our site that don't make sense to me. The problem is that our editor likes to schedule posts and/or mark them private until she is ready to make them public, but somehow Google is crawling these posts and getting 404s (because they are password protected). How does Google know they exist in the first place? I checked the sitemap.xml file and don't see a record of the post. One of the offending posts was marked public but is scheduled for a future date. Could that have something to do with it? I've tried to Google the answer, and I came up with a good amount of reassurance that this won't hurt the site, but I'm still wondering how it's happening in the first place. It's hard because I don't know exactly what the editor's workflow is. Is it possible she's posting publicly first and then revising it to be private only after it's too late? Does anyone know how Google finds WordPress URLs it shouldn't have access to?

    Read the article

  • Saving multiple attachments in phpmyadmin [closed]

    - by Madiha
    I am sending multiple attachments with an email message, and I am saving the email message and email address in the database (MySQL, which I manage through phpMyAdmin). Now I want to save the data for the multiple attachments as well, i.e., the contents, extension, size, and name of each file. How can I do that? I am currently getting the size of a file with the following JavaScript:

        var size = this.files[0].size;

    I am new to PHP, so please point me to any easy tutorials or other help you can suggest. I also want all the attachments (maximum 5) to be saved in one cell (e.g., FileContents) in the database, and likewise the extensions and sizes of all attachments collected into one cell each (e.g., Extensions). Can anybody help?

    Read the article

  • Installing Broadcom Wireless Drivers

    - by Fer1805
    I'm having serious problems installing the Broadcom drivers on Ubuntu. It worked perfectly on my previous version, but now it seems impossible. What are the steps to install the Broadcom wireless drivers for a BCM43xx card? I'm a user with no advanced knowledge of Linux, so I would need clear explanations of how to make, compile, etc.

    lspci -vnn | grep Network showed:

        Broadcom Corporation BCM4322 802.11a/b/g/n Wireless LAN Controller [14e4:432b]

    iwconfig showed:

        lo        no wireless extensions.
        eth0      no wireless extensions.

    Read the article

  • what extension for uploading drawings from gimp to facebook

    - by joel
    Today I upgraded to Ubuntu 13.10 from 13.04, and it looks good. I didn't test the GIMP image editor on 13.04, but I just tested the 13.10 version by making a painting in GIMP, and when I tried to upload the file to Facebook it told me it is an invalid image. I have an older laptop with Ubuntu 12.04 that has the same GIMP installed. On 12.04 I have plenty of file extensions available for saving the images I make in GIMP, and I can upload them to Facebook. I noticed that the 13.10 version offers only a few extensions for saving a file from GIMP. Which extension should I use to save a file that Facebook will accept on this 13.10 version?

    Read the article

  • How to make XAMPP virtual hosts accessible to VM's and other computers on LAN?

    - by martin's
    XAMPP running on a Vista 64 Ultimate dev machine (don't think it matters).

    Machine / browser configuration:
    - Safari, Firefox, Chrome and IE9 on the dev machine
    - IE7 and IE8 on separate XP Pro VMs (VMware on the dev machine)
    - IE10 and Chrome on a Windows 8 VM (VMware on the dev machine)
    - Safari, Firefox and Chrome running on an iMac (same network as dev)
    - Safari, Firefox and Chrome running on a couple of Mac Pros (same network as dev)
    - IE7, IE8, IE9 running on other PCs on the same network as the dev machine

    Development configuration:
    - Multiple virtual hosts for different projects
    - .local fake TLD for development
    - No firewall restrictions on the dev machine for Apache
    - Some sites have .htaccess mapping www to non-www
    - Port 80 is open in the dev machine's firewall

    Problem:
    - The XAMPP local home page (http://192.168.1.98/xampp/) can be accessed from everywhere, real or virtual, by IP
    - All .local sites can be accessed from the browsers on the dev machine
    - All .local sites can be accessed from the browsers in the XP VMs
    - Some .local sites cannot be accessed from IE10 or Chrome on the W8 VM
    - Sites that cannot be accessed from the W8 VM have a minimal .htaccess file
    - No .local sites can be accessed from ANY machine (PC or Mac) on the LAN

    hosts on the dev machine (relevant excerpt):

        127.0.0.1       site1.local
        127.0.0.1       site2.local
        127.0.0.1       site3.local
        127.0.0.1       site4.local
        127.0.0.1       site5.local
        127.0.0.1       site6.local
        127.0.0.1       site7.local
        127.0.0.1       site8.local
        127.0.0.1       site9.local
        192.168.1.98    site1.local
        192.168.1.98    site2.local
        192.168.1.98    site3.local
        192.168.1.98    site4.local
        192.168.1.98    site5.local
        192.168.1.98    site6.local
        192.168.1.98    site7.local
        192.168.1.98    site8.local
        192.168.1.98    site9.local

    httpd-vhosts.conf on the dev machine (relevant excerpt):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            ServerAlias localhost *.localhost.*
            DocumentRoot D:/xampp/htdocs
        </VirtualHost>

        # ======================================== site1.local
        <VirtualHost *:80>
            ServerName site1.local
            ServerAlias site1.local *.site1.local
            DocumentRoot D:/xampp-sites/site1/public_html
            ErrorLog D:/xampp-sites/site1/logs/access.log
            CustomLog D:/xampp-sites/site1/logs/error.log combined
            <Directory D:/xampp-sites/site1>
                Options Indexes FollowSymLinks
                AllowOverride All
                Require all granted
            </Directory>
        </VirtualHost>

    NOTE: The above <VirtualHost *:80> block is repeated for each of the nine virtual hosts in the file, no sense in posting it here.

    hosts on all VMs and physical machines on the network (relevant excerpt):

        127.0.0.1       localhost
        ::1             localhost
        192.168.1.98    site1.local
        192.168.1.98    site2.local
        192.168.1.98    site3.local
        192.168.1.98    site4.local
        192.168.1.98    site5.local
        192.168.1.98    site6.local
        192.168.1.98    site7.local
        192.168.1.98    site8.local
        192.168.1.98    site9.local

    None of the VMs have any firewall blocks on HTTP traffic. They can reach any site on the real Internet. The same is true of the real machines on the network. The biggest puzzle perhaps is that the W8 VM actually DOES reach some of the virtual hosts. It does NOT reach site2, site6 and site9, all of which have this minimal .htaccess file:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]
        </IfModule>

    Adding this file to any of the virtual hosts that do work on the W8 VM will break the site (only for the W8 VM, not the XP VMs) and require a cache flush on the W8 VM before it will see the site again after the file is deleted.

    Regardless of whether a .htaccess file exists or not, no machine on the same LAN can access anything other than the XAMPP home page via IP, even with hosts files on all machines. I can ping any virtual host from any machine on the network and get a response from the correct IP address. I can't see anything in our Netgear router that might prevent one machine from reaching the other. Besides, once the local hosts file resolves to an IP address, that's all that goes out onto the local network. I've gone through an extensive number of posts, both on SO and via Google searches, and I can't say that I have found anything definitive anywhere.

    Read the article

  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question, but I thought peer review of the problem might stimulate some ideas on the topic. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems strange, since our target audience for the site is UK based and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector that doesn't experience this issue, which makes it seem even stranger. Later in the day we get traffic from the UK as well as a fair amount of international traffic, so I'm at a loss to figure this one out. The site is fairly well optimised, including:
    - sitemap.xml
    - proper caching policies across the board
    - Google Merchant
    - Dublin Core microdata
    - HTML5
    - pretty URLs
    - meta and content are reviewed as an ongoing concern
    - decent sitelinks for direct queries through Google on the site name
    - a decent amount of inbound links
    - FB, Twitter, Google +1
    - Google Maps listing [verified]
    The site has been selling for ~4 months and is getting ~250 users per day. So I'm not entirely sure how to explain the mid-day dip in our figures. Any ideas at all would be useful. Cheers all!

    Read the article

  • Problem with the hosts file under windows 7

    - by martani_net
    I updated some entries in the hosts file ("C:\WINDOWS\System32\drivers\etc\hosts") to make google.com, for example, point to 127.0.0.1:

        # Additionally, comments (such as these) may be inserted on individual
        # lines or following the machine name denoted by a '#' symbol.
        #
        # For example:
        #
        #      102.54.94.97     rhino.acme.com          # source server
        #       38.25.63.10     x.acme.com              # x client host

        127.0.0.1       localhost
        ::1             localhost
        127.0.0.1       google.com

    This works fine under Windows Vista, but not under Windows 7: when I type google.com, it goes straight to Google's website. For info, I am not using a proxy server. I think there are some temporary DNS settings that must be flushed, but I don't know how. Does anyone know how to fix this? Thank you.
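    For what it's worth, the usual first step on Windows is flushing the resolver cache (ipconfig /flushdns from an elevated command prompt) and restarting the browser, since both the OS and the browser cache lookups. To see what the operating system itself resolves, independent of any browser cache, a tiny Python check run on the same machine is one option:

        import socket

        # The OS resolver consults the hosts file first; if the entry is being
        # honoured this should print 127.0.0.1 rather than a public Google IP.
        print(socket.gethostbyname("google.com"))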

    Read the article

  • Is my current htaccess setting hurting SEO?

    - by user656002
    I have a site that I redirect to https. I do this to leverage a wildcard SSL certificate for my password-protected pages. Everything seems to work fine in testing; for example, whether you type in http or www, you always get redirected to the https URL. That said, I have about 200-300 external backlinks, many of them high quality, yet Google Webmaster Tools (along with SEOmoz) shows I have just 4... Huh? I'm embarrassed to say I just discovered this. This has led me to hypothesize that maybe my settings in .htaccess are messed up, so Google isn't recognizing a link because it's recorded on another site as http instead of https. Maybe? At any rate, here is my simple .htaccess setting for the 301 from www to non-www (the https redirect must be done inside the virtual host file, I think). I don't have anything in the .htaccess file for https:

        RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Like I said, everything works fine for the redirect over https, so I'd rather not screw up what works. On the other hand, something is very wrong with Google finding all my backlinks, so I need to fix something. I'm just wondering whether Google isn't picking up my backlinks from other websites that record me as http because I'm at https. Maybe Google doesn't care and it's some other issue. Am I barking up the right tree? If so, any quick fixes? Thanks as always!
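    One low-tech way to see exactly what a crawler encounters is to follow the redirect chain hop by hop. A small Python sketch, assuming the requests package and with example.com standing in for the real domain:

        import requests  # assumes the requests package

        # Follow the redirect chain the way a crawler would and print each hop.
        resp = requests.get("http://www.example.com/", allow_redirects=True, timeout=10)
        for hop in resp.history:
            print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
        print("final:", resp.status_code, resp.url)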

    Read the article

  • Office 2013: details of the tablet version for Windows RT, an edition that will have a few limitations, but not too many

    Office 2013 for Windows RT is reportedly limited in functionality: Microsoft is said to have dropped support for macros, extensions and VBA. Microsoft had announced that the ARM tablets running Windows RT would include the Office 2013 suite by default. According to unofficial sources, it appears the company has decided that this version of Office will be stripped of a number of features. According to The Verge, features such as macros, third-party extensions, VBA support and a small number of other capabilities have been removed. As with the Metro version of Internet Explorer (in which plugins are not allowed), Micro...

    Read the article

  • What kind of redirect (301 or 302) for an email links tracker?

    - by MaxiWheat
    We are developing an email-sending application ("à la" Mailchimp). Hyperlinks inserted by our users in the emails they want to send are replaced by a tracking URL on our application (https://ourdomain.com/trackingurl?blablabla), which then redirects the email reader to the original URL our users included in their emails. This allows us to record statistics about link clicks. Until now we used 301 for those redirections, but we noticed that Google began indexing pages on our application which are in fact redirects to other domains. (The title and snippet in the Google results are from the other domain, but the link in green is from our application.) We took action by adding those URLs to our robots.txt, but Google seems to take forever (months!) to remove them from its index, and removing them by hand in Webmaster Tools would take a lot of time since there are a lot of them. I would like to know which kind of HTTP redirect (301 or 302) is best suited for this kind of operation. Do you think switching to 302 redirects could improve this situation, since we don't really want Google to index redirected links from our clients' emails?
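    For illustration, a tracking redirect of this kind is essentially a handler that records the click and answers with a Location header. A minimal Python sketch using only the standard library (the id parameter and the LINKS mapping are invented for the example); the 302 presents the tracking URL as a temporary stop rather than the canonical address of the content:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        # Hypothetical in-memory mapping of tracking tokens to the original URLs.
        LINKS = {"abc123": "https://example.org/some-landing-page"}

        class TrackingRedirectHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                token = parse_qs(urlparse(self.path).query).get("id", [""])[0]
                target = LINKS.get(token)
                if target is None:
                    self.send_error(404)
                    return
                # ...record the click here (counter, database row, log line)...
                self.send_response(302)                 # temporary redirect
                self.send_header("Location", target)
                self.end_headers()

        HTTPServer(("", 8000), TrackingRedirectHandler).serve_forever()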

    Read the article

  • Can't get packages after installing Faience

    - by ccrama
    I installed the Faience theme (sudo apt-get install faience) and it installed fine. Then I tried installing another package and it said this:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         faience : Depends: faenza but it is not going to be installed
         gnome-shell-extensions-user-theme : Depends: gnome-shell-extensions-common but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Please help!

    Read the article

  • Where does Firefox store certificates and how to delete one?

    - by majid4466
    Hi all, the root cause of my problem is not known to me; whatever it is, I experience frequent DNS failures. When it happens I cannot browse to my Gmail inbox. I use two DNS settings: one is the public DNS server offered by OpenDNS, and the other is Google's free DNS server. When this happens I switch from the active setting to the other one and the problem goes away. But there is a side effect: when browsing to Gmail fails to load, after switching the DNS I receive an error saying the security certificate the site uses is only valid for OpenDNS. This is my wild guess at what is going on:
    1. OpenDNS fails to resolve mail.google.com to its IP.
    2. My ISP sends me a page showing search results for 'mail.google.com'.
    3. Since I have received some sort of page instead of a timeout, the browser mistakenly binds the certificate it has cached for 'mail.google.com' to the new domain. This search page is not served over https, so no exception is thrown by the wrong binding.
    4. After switching the DNS, the domain is correctly resolved to the Gmail server's IP, and since this is over https the handshake is triggered.
    5. Now, because of the wrong binding, which passed quietly as no handshake was involved, I receive the error saying the certificate used by 'mail.google.com' is only good for OpenDNS.
    I don't know much about DNS, and less about https and the process of establishing a secure connection. How correct is my explanation? How can I delete the wrong association and/or the certificate? Thanks for listening.
    P.S. The problem goes away by itself, but sometimes it takes several hours before Gmail works again.
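    To take the browser out of the equation when one of these failures happens, it can help to query the two resolvers directly and compare their answers. A small Python sketch, assuming the dnspython (2.x) package:

        import dns.resolver  # assumes the dnspython package (2.x API)

        def lookup(server, name="mail.google.com"):
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            try:
                return [rr.to_text() for rr in r.resolve(name, "A")]
            except Exception as exc:
                return "lookup failed: %s" % exc

        # Query OpenDNS and Google Public DNS directly, bypassing the OS setting.
        print("OpenDNS:", lookup("208.67.222.222"))
        print("Google :", lookup("8.8.8.8"))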

    Read the article

  • Polymorphism problem: How to check type of derived class?

    - by malymato
    Hi, this is my first question here :) I know that I should not check for object type but instead use dynamic_cast, but that would not solve my problem. I have a class called Extension and interfaces called IExtendable and IInitializable, IUpdatable, ILoadable, IDrawable (the last four are basically the same). If an Extension implements the IExtendable interface, it can extend itself with different Extension objects. The problem is that I want to allow an Extension which implements IExtendable to extend itself only with Extensions that implement the same interfaces as the original Extension. You probably don't understand that mess, so I'll try to explain it with code:

        class IExtendable
        {
        public:
            IExtendable(void);
            void AddExtension(Extension*);
            void RemoveExtensionByID(unsigned int);
            vector<Extension*>* GetExtensionPtr(){return &extensions;};
        private:
            vector<Extension*> extensions;
        };

        class IUpdatable
        {
        public:
            IUpdatable(void);
            ~IUpdatable(void);
            virtual void Update();
        };

        class Extension
        {
        public:
            Extension(void);
            virtual ~Extension(void);
            void Enable(){enabled=true;};
            void Disable(){enabled=false;};
            unsigned int GetIndex(){return ID;};
        private:
            bool enabled;
            unsigned int ID;
            static unsigned int _indexID;
        };

    Now imagine the case that I create an Extension like this:

        class MyExtension : public Extension, public IExtendable, public IUpdatable, public IDrawable
        {
        public:
            MyExtension(void);
            virtual ~MyExtension(void);
            virtual void AddExtension(Extension*);
            virtual void Update();
            virtual void Draw();
        };

    And I want to allow this class to extend itself only with Extensions that implement the same interfaces (or fewer). For example, I want it to be able to take an Extension which implements IUpdatable, or both IUpdatable and IDrawable, but e.g. not an Extension which implements ILoadable. I want to do this because when e.g. Update() is called on some Extension which implements IExtendable and IUpdatable, it will also be called on the Extensions which extend this Extension. So when I'm adding some Extension to an Extension which implements IExtendable and some of IUpdatable, ILoadable, etc., I'm forced to check whether the Extension that is going to be added implements these interfaces too. So in IExtendable::AddExtension(Extension*) I would need to do something like this:

        void IExtendable::AddExtension(Extension* pEx)
        {
            bool ok = true;
            // check whether this extension can take pEx
            // do this with every interface
            if ((*pEx is IUpdatable) && (*this is_not IUpdatable))
                ok = false;
            if (ok)
                this->extensions.push_back(pEx);
        }

    But how? Any ideas what would be the best solution? I don't want to use dynamic_cast and see if it returns null... thanks

    Read the article

  • evaluating cost/benefits of using extension methods in C# => 3.0

    - by BillW
    Hi, in what circumstances (usage scenarios) would you choose to write an extension rather than sub-classing an object?

    Full disclosure: I am not an MS employee; I do not know Mitsu Furota personally; I do know the author of the open-source Componax library mentioned here, but I have no business dealings with him whatsoever; I am not creating, or planning to create, any commercial product using extensions. In sum, this post is from pure intellectual curiosity related to my trying to (continually) become aware of "best practices."

    I find the idea of extension methods "cool," and obviously you can do "far-out" things with them, as in the many examples in Mitsu Furota's (MS) blog posts (link text). A personal friend wrote the open-source Componax library (link text), and there are some remarkable facilities in there; but he is in complete command of his small company, with total control over code guidelines, and every line of code "passes through his hands." While this is speculation on my part, I think/guess other issues might come into play in a medium-to-large software team situation regarding the use of extensions.

    Looking at MS's guidelines (link text), you find:

        In general, you will probably be calling extension methods far more often than implementing your own. ... In general, we recommend that you implement extension methods sparingly and only when you have to. Whenever possible, client code that must extend an existing type should do so by creating a new type derived from the existing type. For more information, see Inheritance (C# Programming Guide). ... When the compiler encounters a method invocation, it first looks for a match in the type's instance methods. If no match is found, it will search for any extension methods that are defined for the type, and bind to the first extension method that it finds.

    And at MS's (link text):

        Extension methods present no specific security vulnerabilities. They can never be used to impersonate existing methods on a type, because all name collisions are resolved in favor of the instance or static method defined by the type itself. Extension methods cannot access any private data in the extended class.

    Factors that seem obvious to me would include:
    - I assume you would not write an extension unless you expected it to be used very generally and very frequently. On the other hand, couldn't you say the same thing about sub-classing?
    - Knowing we can compile them into a separate dll, add the compiled dll, reference it, and then use the extensions is "cool," but does that "balance out" the cost inherent in the compiler first having to check whether instance methods are defined, as described above? Or the cost, in case of a "name clash," of using the static invocation form to make sure your extension is invoked rather than the instance definition?
    - How frequent use of extensions would affect run-time performance or memory use: I have no idea.

    So, I'd appreciate your thoughts, or knowing about how/when you do, or don't, use extensions compared to sub-classing. Thanks, Bill

    Read the article

  • Computed width with decimal values in Firefox, but without decimals in Webkit

    - by jävi
    Hello one more time! I have a strange problem working with HTML and CSS in different browsers: Firefox 3.6 and WebKit browsers (Chrome & Safari). My HTML looks like this:

        <div class="ln-letters">
            <a href="#" class="all">ALL</a>
            <a href="#" class="a">A</a>
            <a href="#" class="b">B</a>
            <a href="#" class="c">C</a>
        </div>

    And my CSS is:

        .ln-letters a {
            font-family: 'Lucida Grande';
            font-size: 14px;
            display: block;
            float: left;
            padding: 0px 7px;
            border-left: 1px solid silver;
            border-right: none;
            text-decoration: none;
        }

    So as you can guess, each anchor gets a different width depending on its inner text; for example, the first element, with the text 'ALL', will be wider than the others. Now the problem is that in Firefox (using Firebug) I can see that the computed width for the first element is 26.5667px, while in Chrome (using Chrome's developer tools) the computed width for the same element is exactly 27px. Therefore the div.ln-letters ends up with different widths in each browser, and that is causing me some trouble. Question is: is there any workaround to stop Firefox computing decimal values? Or the opposite: to force Chrome to compute decimal values? Thank you in advance!

    Read the article

  • Why is python decode replacing more than the invalid bytes from an encoded string?

    - by dangra
    Trying to decode an invalidly encoded utf-8 html page gives different results in Python, Firefox and Chrome. The invalid encoded fragment from the test page looks like 'PREFIX\xe3\xabSUFFIX':

        >>> fragment = 'PREFIX\xe3\xabSUFFIX'
        >>> fragment.decode('utf-8', 'strict')
        ...
        UnicodeDecodeError: 'utf8' codec can't decode bytes in position 6-8: invalid data

    What follows is a summary of the replacement policies used to handle decoding errors by Python, Firefox and Chrome. Note how the three differ, and especially how the Python builtin removes the valid S (plus the invalid sequence of bytes).

    By Python: the builtin replace error handler replaces the invalid \xe3\xab plus the S from SUFFIX with U+FFFD.

        >>> fragment.decode('utf-8', 'replace')
        u'PREFIX\ufffdUFFIX'
        >>> print _
        PREFIX?UFFIX

    A Python implementation of the builtin replace error handler looks like:

        >>> python_replace = lambda exc: (u'\ufffd', exc.end)

    As expected, trying this gives the same result as the builtin:

        >>> codecs.register_error('python_replace', python_replace)
        >>> fragment.decode('utf-8', 'python_replace')
        u'PREFIX\ufffdUFFIX'
        >>> print _
        PREFIX?UFFIX

    By Firefox: Firefox replaces each invalid byte with U+FFFD.

        >>> firefox_replace = lambda exc: (u'\ufffd', exc.start+1)
        >>> codecs.register_error('firefox_replace', firefox_replace)
        >>> fragment.decode('utf-8', 'firefox_replace')
        u'PREFIX\ufffd\ufffdSUFFIX'
        >>> print _
        PREFIX??SUFFIX

    By Chrome: Chrome replaces each invalid sequence of bytes with U+FFFD.

        >>> chrome_replace = lambda exc: (u'\ufffd', exc.end-1)
        >>> codecs.register_error('chrome_replace', chrome_replace)
        >>> fragment.decode('utf-8', 'chrome_replace')
        u'PREFIX\ufffdSUFFIX'
        >>> print _
        PREFIX?SUFFIX

    The main question is why the builtin replace error handler for str.decode is removing the S from SUFFIX. Also, is there any official Unicode-recommended way of handling decoding replacements?
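    A side note for anyone reading this against Python 3: the transcript above is Python 2. In recent Python 3 versions the builtin UTF-8 decoder is expected to emit one U+FFFD per maximal invalid subpart (keeping the S, i.e. the behaviour listed under Chrome), which also matches the Unicode Standard's recommended practice for U+FFFD substitution. A quick sketch to compare the builtin handler with the per-byte policy described above:

        import codecs

        fragment = b'PREFIX\xe3\xabSUFFIX'

        # One U+FFFD per offending byte (the "Firefox" policy above).
        codecs.register_error('per_byte', lambda exc: ('\ufffd', exc.start + 1))

        print(fragment.decode('utf-8', 'replace'))    # builtin handler
        print(fragment.decode('utf-8', 'per_byte'))   # one marker per byte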

    Read the article

  • How to stop my Firefox extension from interfering with other extensions?

    - by ccppjava
    Hi, I have tried very hard to make my extension as simple as possible. It now does not contain any skin/CSS; it just has a 'statusbar' in one single 'overlay'. The issue is that when it is installed, it hides the top three icons of the 'All-in-One Toolbar' extension in my Firefox 3.6.3. On two other machines which do not have 'All-in-One Toolbar', it hides all the icons of the web development toolbar!

    chrome.manifest:

        content stackoverflow content/
        content stackoverflow content/ contentaccessible=yes
        overlay chrome://browser/content/browser.xul chrome://stackoverflow/content/browser.xul
        locale stackoverflow en-US locale/en-US/

    browser.xul:

        <overlay id="dch-browser-overlay"
                 xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
            <script type="application/x-javascript"
                    src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"/>
            <script src="stackoverflow.js" />
            <statusbar id="status-bar">
                <statusbarpanel id="stackoverflow-status-bar-icon"
                                class="statusbarpanel-iconic"
                                src="chrome://stackoverflow/content/icon_small.png"
                                tooltiptext="&runstackoverflow;"
                                onclick="stackoverflow.run()" />
            </statusbar>
        </overlay>

    I have tried very hard to simplify the extension to find the reason, but failed. Any suggestions/ideas would be welcome. Thanks.

    Read the article

  • Extracting a .app from a zip file in Python, using ZipFile

    - by Yakattak
    I'm trying to extract new revisions of Chromium.app from their snapshots, and I can download the file fine, but when it comes to extracting it, ZipFile either extracts the chrome-mac folder within as a file, says that directories don't exist, etc. I am very new to Python, so these errors make little sense to me. Here is what I have so far:

        import urllib2
        response = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/LATEST')
        latestRev = response.read()
        print latestRev

        # we have the revision, now we need to download the zip and extract it
        latestZip = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/%i/chrome-mac.zip' % (int(latestRev)),
                                    '~/Desktop/ChromiumUpdate/%i-update' % (int(latestRev)))

        # declare some vars that hold paths
        workingDir = '/Users/slehan/Desktop/ChromiumUpdate/'
        chromiumZipPath = '%s%i-update.zip' % (workingDir, (int(latestRev)))
        chromiumAppPath = 'chrome-mac/'  # the path of the chromium executable within the zip file
        chromiumAppExtracted = '%s/Chromium.app' % (workingDir)  # path of the extracted executable

        output = open(chromiumZipPath, 'w')  # delete any current file there
        output.write(latestZip.read())
        output.close()

        # we have the .zip, now we need to extract the Chromium.app file;
        # it's in ziproot/chrome-mac/Chromium.app
        import zipfile, os
        zippedFile = open(chromiumZipPath)
        zippedChromium = zipfile.ZipFile(zippedFile, 'r')
        zippedChromium.extract(chromiumAppPath, workingDir)
        #print zippedChromium.namelist()
        zippedChromium.close()

    Any ideas?
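    A likely sticking point is that ZipFile.extract() takes one member name exactly as it appears in the archive (a directory entry just creates that directory; it does not recurse), so asking for 'chrome-mac/' never pulls out Chromium.app. A sketch that extracts the whole chrome-mac/ subtree instead (the revision number in the path is made up); note that zipfile does not restore the execute bits or symlinks a .app bundle relies on, so shelling out to the system unzip may still be the more practical route:

        import zipfile

        zip_path = "/Users/slehan/Desktop/ChromiumUpdate/12345-update.zip"  # hypothetical revision
        dest_dir = "/Users/slehan/Desktop/ChromiumUpdate/"

        with zipfile.ZipFile(zip_path) as zf:
            # extract() handles one member at a time, so collect every entry
            # under chrome-mac/ and hand the list to extractall().
            members = [name for name in zf.namelist() if name.startswith("chrome-mac/")]
            zf.extractall(dest_dir, members=members)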

    Read the article

  • Problem: How to display a Wordpress RSS feed in a browser that doesn't have a built in RSS reader?

    - by StephenMeehan
    If I can, I'd rather not use a service like FeedBurner.

    My setup: I've set up an RSS feed link on a self-hosted WordPress website. Clicking the RSS link in Safari shows the feed, because Safari has a built-in RSS reader. Great. Unfortunately, clicking the same RSS link in Chrome displays the raw XML feed. I know why this happens: Chrome doesn't have a built-in RSS reader. I assume this will also be the case in older versions of Internet Explorer.

    Possible solution? I've noticed http://www.bbc.co.uk/news has a nice solution: click the RSS feed (top right of the page) in an RSS-enabled browser (Safari) and it uses the built-in RSS reader to display the feed; click the same RSS feed link in Chrome (which has no built-in RSS reader) and it displays the feed using what looks like a custom page.

    Is there a way to check if a browser has a built-in RSS reader? How would I provide alternative content (like the BBC site does) to a browser that doesn't have an RSS reader installed? Any help on this would be brilliant; thanks for taking the time to read this. Stephen

    Read the article
