Search Results

Search found 2201 results on 89 pages for 'webpage'.

Page 53/89 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • Suddenly internet is not accessible

    - by user189708
    I am going crazy here. One day everything was working fine; I turned the PC off and went to sleep. The next day I turned it on and could not access the internet from any browser. The situation is: I cannot open any webpage from a browser (tried Firefox and Epiphany) and cannot receive emails in Thunderbird. BUT if I run Firefox from the console as sudo, I can use it as usual. I can access Skype and pretty much any other network stuff (like installing software with apt-get etc.), and if I use the Astrill VPN software I can access webpages even without running as sudo. I haven't installed any software or anything like that for several days, so I have no clue what could cause this. Just by the way, the other Windows PC in our home has no issue.

    Here is what I have tried to fix this:
    - restarting my PC, router and modem, multiple times
    - changing the permissions on my Firefox profile
    - completely re-installing Firefox and starting with a blank profile, thus no add-ons
    - changing /etc/resolv.conf to the IP of my router (it was 127.0.1.1)
    - changing my hostname (from tomino-NB to tominoNB)

    I think I might try even more stuff. None of it works. Can someone please try to help me? Thank you.

    UPDATE 1: I have tried removing resolv.conf - it didn't help. Also the "ping" and "dig" commands cannot resolve hosts.

    UPDATE 2: I have tried to edit the nameservers in resolv.conf, but still no effect. I can ping the router as well as an outside IP, so it is definitely just some DNS issue. Is it possible that something is rewriting the path to resolv.conf and using a different file?

    UPDATE 3: I have just restarted the PC and everything works now... resolv.conf went back to nameserver 127.0.1.1. I have no clue what happened that makes it work again...
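
    A quick way to confirm the pattern from the updates (name resolution fails while raw IP connectivity works) is a rough sketch like the one below, assuming Python 3 is available on the affected machine; the router and external addresses are placeholders, not values from the question.

        import socket

        HOST = "askubuntu.com"        # any public hostname
        ROUTER_IP = "192.168.1.1"     # placeholder - your router's address
        EXTERNAL_IP = "8.8.8.8"       # a public DNS server, reachable on TCP port 53

        # 1. Name resolution: this is what the browsers, ping and dig are failing at.
        try:
            print("resolved", HOST, "->", socket.gethostbyname(HOST))
        except socket.gaierror as exc:
            print("DNS lookup failed:", exc)

        # 2. Raw TCP connectivity, bypassing DNS entirely.
        for ip, port in [(ROUTER_IP, 80), (EXTERNAL_IP, 53)]:
            try:
                with socket.create_connection((ip, port), timeout=5):
                    print("TCP connection to %s:%d OK" % (ip, port))
            except OSError as exc:
                print("TCP connection to %s:%d failed: %s" % (ip, port, exc))

    If the lookup fails while both TCP probes succeed, the problem is purely the local resolver configuration (resolv.conf or a local caching resolver), which matches the symptoms described above.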

    Read the article

  • Is djvubundle available in Ubuntu?

    - by Tim
    The official webpage says:

        Assembling DjVu Images into Multipage Documents

        The batch compressors distributed as part of the DjVuText and DjVuLayered packages can directly produce a multipage DjVu file when fed multiple input files. The files produced are smaller than if the pages are compressed separately, because the compressor can extract and share redundant information across multiple pages. Individually compressed DjVu pages can be assembled into multipage documents using the free package DjVuMulti.

        To assemble a bunch of DjVu images into a single BUNDLED document, simply type:

            djvubundle page1.djvu page2.djvu ... pageN.djvu document.djvu

        To assemble a bunch of DjVu images into an INDIRECT document, type:

            djvujoin page1.djvu page2.djvu ... pageN.djvu documentdir/index.djvu

        where documentdir must be an existing directory into which all the individual page files will be copied.

        To disassemble a BUNDLED document into an INDIRECT one, simply say:

            djvujoin document.djvu documentdir/indexfile.djvu

        To convert a multipage document from one of the old 2.0 multipage formats, do:

            djvureindex olddocument newdocument

        The programs djvujoin and djvubundle supersede the 2.0 programs djvuindex and djvumerge.

    I couldn't find djvujoin or djvubundle for Ubuntu, and djvulibre doesn't have them either. Am I missing something? Thanks.

    Read the article

  • LTS 12.04.1 will not resolve domain.local websites

    - by user108502
    I have done a brand new installation of Ubuntu Server (v12.10) with BIND configured with a DNS zone of gdos.local and Apache configured for said domain. From a brand new installation of Ubuntu Desktop LTS I try to connect to www.gdos.local and all I get is: "Server not found. Firefox can't find the server at www.gdos.local. Check the address for typing errors such as ww.example.com instead of www.example.com." However, if I change the domain to gdos.tmp and type in www.gdos.tmp, I get the internal website. If I change it to mybusiness.local, I get the same error message. If I use a Microsoft OS, this works fine: all three domains resolve to a webpage. I have scoured the internet for the past week for DNS issues but have not come up with a solution. I have followed instructions ranging from removing dnsmasq to editing files like resolv.conf (in some very strange places) and I still have no joy in getting the .local domain extension to work. I can safely say the issue is not with the server but with the desktops, because if the issue were server related the Microsoft OSes would not resolve it either. I have done several installs of the desktop in an effort to make sure that I did not break anything while trying to fix this. Please can anyone point me to a workable solution for fixing the .local domain extension? Thank you, Mark Hollander

    Read the article

  • HTTP Protocol

    I have worked with the HTTP protocol for about ten years now, and I have found it to be incredibly useful for transferring data, especially for remote systems, regardless of the network environment. Prior to the existence of web services, developers used HTTP to screen-scrape data off of web pages in order to interact with remote systems, and then process the data as they needed. I used the HttpWebRequest and HttpWebResponse classes to screen-scrape data from various sites that had information I needed when no web service was available. This allowed me to call just about any webpage and grab all of the content on the page. Below is a piece of a web spider that I built about 5-7 years ago. The spider uses the HTTP protocol to request webpages and then parses the data that is returned. At the time of writing the spider I wanted to create a searchable index of sites I frequented.

        // C# 2.0 Framework
        // Creating a request for a specific webpage
        HttpWebRequest webreq = (HttpWebRequest)WebRequest.Create(_Url);

        // Storing the response of the web request
        HttpWebResponse webresp = (HttpWebResponse)webreq.GetResponse();
        StreamReader loResponseStream = new StreamReader(webresp.GetResponseStream());
        _Content = loResponseStream.ReadToEnd();

        // Adjust the encoding of the response
        string charset = "";
        EncodeString(ref _Content, ref charset);
        loResponseStream.Close();

        // Parse data from the response
        _Content = _Content.Replace("\n", "");
        _Head = GetTagByName("Head", _Content);
        _Title = GetTagByName("title", _Content);
        _Body = GetTagByName("body", _Content);

    Read the article

  • AsyncBridge? Async on .NET 4.0 using VS11

    - by Alex.Davies
    I've just found something quite cool. It's a code snippet that lets you use the real VS 11 C#5 compiler to write code that uses the async and await keywords, but to target .NET 4.0. It was published by Daniel Grunwald (from SharpDevelop).That means I can stop using the Async CTP for VS2010, which is not at all supported anymore, and a pain to install if you have windows updates turned on. Obviously I couldn't ask all my users to install .NET 4.5 beta, but .NET Demon is a VS 2010 extension, so we already have .NET 4.0. At the time of writing, VS11 is in beta still, but hopefully it's stable enough for my team to use!I would have written the code myself, but I had the wrong impression that the C# 5 beta compiler only looked in mscorlib for the helper classes it needs to implement async methods. Turns out you can provide them yourself. You can get the code here: https://gist.github.com/1961087You just add it to your project, and the compiler will apparently pick it up and use it to implement async/await. I'm at my parents' place for Easter without access to a machine with VS 11 to try it out. Let me know whether you get it to work!This reminds me of LINQBridge, which let us use C# 3 LINQ, but only require .NET 2. We should stick up a webpage to explain, with a nice easy dll, put it in nuget, and call it AsyncBridge.If you were really enthusiastic, you could re-implement the skeleton of the Task Parallel Library against .NET 2 to use async/await without even requiring .NET 4. Our usage stats suggest that practically everyone that uses Red Gate tools already has .NET 4 installed though, so I don't think I'll go to the effort.

    Read the article

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com.

        $ curl -i http://essayme.co.uk
        HTTP/1.1 301 Moved Permanently
        Cache-Control: max-age=900
        Content-Type: text/html
        Location: https://google.com
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Sat, 07 Jun 2014 11:14:16 GMT
        Content-Length: 0
        Age: 0
        Connection: keep-alive

    However, if I go to https://essayme.co.uk it just freezes and times out.

        $ curl -i https://essayme.co.uk
        curl: (7) Failed connect to essayme.co.uk:443; Operation timed out

    What is happening in the second case? (and, if possible, how can I get the redirect to work for https?)

    Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on this domain (hence why I'm trying to debug the problem). Unfortunately I can't experiment with the live domain (as it's live) and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary).

    The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate.
    - Visiting https://www.mywebsite.com displayed the webpage as expected.
    - I had set up forwarding (like in the question above) from the naked domain (mywebsite.com) to https://www.mywebsite.com
    - Visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected.
    - However, visiting https://mywebsite.com would freeze and time out (as in the question above).

    I also tried forwarding it to http://www.otherwebsite.com as an experiment (i.e. forwarding to another site that does not use SSL), but the result was the same:
    - Visiting http://mywebsite.com redirected to http://www.otherwebsite.com as expected.
    - Visiting https://mywebsite.com would freeze and time out again.

    So I set up essayme.co.uk as an experiment to try and understand why it doesn't work.
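
    The timeout itself is the clue: an HTTP-level redirect can only be sent after a TCP connection and TLS handshake succeed on port 443, which requires something listening there with a certificate for the domain. A minimal Python 3 sketch of that check (the hostname is the test domain from the question, nothing else here is taken from it):

        import socket
        import ssl

        host = "essayme.co.uk"

        try:
            # Step 1: plain TCP - does anything answer on port 443 at all?
            with socket.create_connection((host, 443), timeout=10) as sock:
                # Step 2: TLS handshake - is a certificate being served?
                ctx = ssl.create_default_context()
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    print("TLS handshake OK; certificate subject:", tls.getpeercert().get("subject"))
        except socket.timeout:
            print("TCP connect timed out - nothing is answering on port 443 for this host")
        except ssl.SSLError as exc:
            print("TCP connect OK, but the TLS handshake failed:", exc)
        except OSError as exc:
            print("Connection failed:", exc)

    A timeout at step 1 means the forwarding service simply does not serve HTTPS for the domain, so no redirect can ever be returned over https:// without a certificate and a listener for it.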

    Read the article

  • Oracle 'In Touch' PartnerCast (July 1, 2014) - Be prepared for a year of growth

    - by Hartmut Wiese
    Dear Partner, We would like to invite you to join David Callaghan, Senior Vice President Oracle EMEA Alliances and Channels, and his studio guests for the next broadcast of the Oracle ‘In Touch’ PartnerCast on Tuesday 1st July 2014 from 10:30am UK / 11:30am CET. In this cast, David’s studio guests and his regional reporters will be looking at your priorities as EMEA partners and how best to grow with Oracle. We also look forward to the broadcast covering the following topics:
    - Highlights of FY14
    - Strategic themes for FY15
    - HCM, CRM and ERP
    - Oracle on Oracle

    Exclusive for ‘In Touch’: David Callaghan questions Rich Geraffo, Senior Vice President, Global Alliances & Channels, on how the FY15 partner global kick off relates to EMEA. Plus, David gives you the chance to hear from some of the newly appointed Worldwide A&C leadership team as he discusses with Bruce Chumley, VP Oracle Channel Distribution Sales, and Troy Richardson, VP Oracle Strategic Alliances, their core focus, their strategy for growth and what they intend to bring to the table in their new roles. With lots of studio guests joining David, why not get in touch on Twitter using the hashtag #OracleInTouch or by emailing [email protected] to get your questions featured in the cast!

    To find out more information and to watch previous episodes on-demand, please visit our webpage here. Best regards, Oracle EMEA Alliances & Channels

    Read the article

  • How to prevent duplication of content on a page with too many filters?

    - by Vikas Gulati
    I have a webpage where a user can search for items based on around 6 filters. Currently I have the page implemented with one filter as the base filter (part of the URL that gets indexed) and the other filters in the form of hash URLs (which won't get indexed). This way the duplication is less. Something like this:

        example.com/filter1value-items#by-filter3-filter3value-filter2-filter2value

    As you can see, only one filter is within the reach of the search engine while the rest are hashed. This way I could have 6 pages. Now the problem is I expect users to use two filters at times while searching as well. As per my analysis using the Google Keyword Analyzer, there is a fair number of users that might use two filters in conjunction while searching. So how should I go about it? Having all the filters as part of the URL would simply explode the number of pages, and sticking to the current way wouldn't let me target those users. I was thinking of going with at most 2 base filters and the rest as part of the hash URL. But the only thing stopping me is that it would lead to duplication of content as per Google Webmaster Tools' suggestions on URL structure.

    Read the article

  • Ubuntu connects to wireless but doesn't work!

    - by UbuntuUser990
    I just installed Ubuntu 14.04 LTS on my laptop. It connects to my wireless network and shows that it is connected, but when I try to load a webpage or do anything like download Steam for Dota 2, it doesn't work. When I try to connect my Facebook account to Ubuntu, it doesn't work either. I want to be able to use Ubuntu with working internet access. What do I do? When I run ifconfig, this shows up:

        eth0      Link encap:Ethernet  HWaddr 88:ae:1d:60:56:eb
                  inet addr:192.168.1.13  Bcast:192.168.1.255  Mask:255.255.255.0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:16

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:24 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1856 (1.8 KB)  TX bytes:1856 (1.8 KB)

    When I type ping askubuntu.com, this shows up:

        ping: unknown host askubuntu.com

    Read the article

  • Windows Embedded Compact 7

    - by Valter Minute
    This will be the official name of the new release of Windows CE. Windows Embedded Compact 7 is available as a public CTP and it already supports a wide range of CPUs and both the device emulator and VirtualPC emulated environments. So I’ll have to learn a new (and longer) name for my favorite OS… but I (and all my two readers!) will be able to test it as soon as the download from the connect web site completes (I'm sorry for my readers, but you'll have to download it by yourselves). Here’s a link for the download (it's free but you’ll have to register on connect with a valid LiveId): https://connect.microsoft.com/windowsembeddedce Remember that this is still a beta (or “Community Technology Preview” if you speak marketing language), so it’s better not to install it on your main development PC (or, at least, back up everything before installation), and the features and performance you’ll get from this beta may not be the same as in the final release of the OS. You can discover the new features of Windows Embedded Compact on the new “official” webpage on the Microsoft website: http://www.microsoft.com/windowsembedded/en-us/products/windowsce/compact7.mspx or on Olivier’s blog: http://blogs.msdn.com/b/obloch/archive/2010/06/01/windows-embedded-compact-7-announced-and-public-ctp-available.aspx I hope to be able to post some interesting content about Windows Embedded Compact 7 soon (and maybe shorten its name to CE7 in my blog posts, once I’ve made sure that neither of my two readers is working for Microsoft's marketing department…). Technorati Tags: "Windows Embedded Compact 7"

    Read the article

  • Suggestions for a Live chat software on websites for customer support?

    - by Munish Goyal
    Recommendations needed. We want to get in touch with customers via live chat. Requirements:
    - chat window customisable to blend in with the website theme (colors etc.)
    - preferably the window should be within the webpage, not only a pop-out/popup
    - ease of use for the customer; minimally intrusive
    - should have triggers/alerts to the backend side; for example, if a user is unable to fill in the signup form or something, we should be able to offer help and the chat window should show to the user automatically
    What is the cost?

    UPDATE: After R&D we narrowed it down to Comm100 and LivePerson, and we will go with Comm100. LivePerson is the best commercial solution, but Comm100 is a good free solution, so we will go with Comm100 as a starting point. However, it sometimes gives exception alerts in the dashboard. Can it be configured for chat invitations triggered on certain conditions? Currently only a time-based trigger is available. Are there any other major differences between these two?

    Read the article

  • Strategy for hosting 700+ domains, each with a static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to:

    1. Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...

    2. It's easy to manage CNAMEs on the registrar side, but is there a way to quickly associate CNAMEs with new websites, maybe gh-pages-style (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. gh-pages isn't a good solution for what I'm doing though, unfortunately.

    I'm also a front-end developer, specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?
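
    On the automation side, one hedged sketch of the idea: if every site lives in a folder named after its domain, a small script can generate one Apache virtual host per folder, so adding a site is just adding a directory and re-running the script. Everything below is an assumption for illustration - the /var/www/sites/<domain>/ layout, the output path, and the use of Apache-style vhosts are not taken from the question.

        import os

        SITES_ROOT = "/var/www/sites"    # hypothetical layout: one folder per domain
        OUTPUT = "/etc/apache2/sites-available/portfolio.conf"    # placeholder path

        VHOST_TEMPLATE = """<VirtualHost *:80>
            ServerName {domain}
            ServerAlias www.{domain}
            DocumentRoot {root}
        </VirtualHost>
        """

        def build_config():
            blocks = []
            for domain in sorted(os.listdir(SITES_ROOT)):
                root = os.path.join(SITES_ROOT, domain)
                if os.path.isdir(root):    # every directory name is treated as a domain
                    blocks.append(VHOST_TEMPLATE.format(domain=domain, root=root))
            return "\n".join(blocks)

        if __name__ == "__main__":
            with open(OUTPUT, "w") as fh:
                fh.write(build_config())
            print("wrote", OUTPUT, "- reload Apache to pick up the new sites")

    The DNS half would still need the registrar's API or a catch-all record; the point of the sketch is only that the per-site work can shrink to creating a folder.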

    Read the article

  • Which of these URL scenarios is best for big link menus? [seo /user friendly urls]

    - by Sam
    Hi folks, a question about URLs... A good friend of mine and I are exploring the possibilities of three scenarios for a website where each webpage has a menu system with about 130 links:

    SCENARIO 1 - the page's menu system has SHORT non-descriptive hyperlinks as well as a SHORT canonical:
        <a href="design">dutch design</a>
        the page's canonical URL points to e.g.: "design"

    SCENARIO 2 - the page's menu system has SHORT non-descriptive hyperlinks with LONG canonical URLs:
        <a href="design">dutch design</a>
        the page's canonical URL points to: dutch-design-crazy-yes-but-always-honest

    SCENARIO 3 - the page's menu system has LONG descriptive hyperlinks with LONG canonical URLs:
        <a href="dutch-design-crazy-yes-but-always-honest">dutch design</a>
        the page's canonical URL points to: dutch-design-crazy-yes-but-always-honest

    Currently we have scenario 2... should we move to scenario 3? All three work fine and point via mod_rewrite to the same page, which is fetched behind the scenes. Now, my question is which of these is better in terms of:
    - user-friendliness (page loading times, full URL visible in the URL bar or not)
    - SEO friendliness (proper indexing due to the URLs containing descriptive, relevant tags)
    - other concerns we forgot, like possible penalties for so many words in link hrefs?

    Thanks very much for your suggestions: much appreciated!

    Read the article

  • Apache virtual hosts - Resources on website not loaded when accessed from a hostname other than localhost

    - by Christian Stadegaart
    Running virtual hosts on Mac OS X 10.6.8 running Apache 2.2.22. /etc/hosts is as follows:

        127.0.0.1       localhost 3dweergave studio-12.fritz.box
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Virtual hosts configuration:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/3dweergave"
            ServerName 3dweergave
            ErrorLog "logs/3dweergave-error_log"
            CustomLog "logs/3dweergave-access_log" common
            <Directory "/opt/local/www/3dweergave">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName main
        </VirtualHost>

    This will output the following settings:

        *:80 is a NameVirtualHost
            default server 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
            port 80 namevhost 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
            port 80 namevhost main (/opt/local/apache2/conf/extra/httpd-vhosts.conf:34)

    I made 3dweergave the default server by putting it first in the list. This will cause all undefined virtual hosts' names to load 3dweergave, and thus http://localhost will point to 3dweergave. Of course, normally the first in the list is the virtual host main and localhost would point to main, but for testing purposes I switched them.

    When I navigate to http://localhost, my CakePHP default homepage shows as expected: Screenshot 1

    But when I navigate to http://3dweergave, my CakePHP default homepage doesn't show as expected. It looks like the relative links to resources are not accepted by the server: Screenshot 2

    For example, the CSS isn't loaded. When I open the source and click on the link, it opens the CSS file in the browser without errors. But when I run FireBug while loading the webpage, it seems that the CSS file isn't retrieved. (<link rel="stylesheet" type="text/css" href="/css/cake.generic.css" />)

    How can I fix this unwanted behaviour?

    Read the article

  • Designs for outputting to a spreadsheet

    - by Austin Moore
    I'm working on a project where we are tasked to gather and output various data to a spreadsheet. We are having tons of problems with the file that holds the code to write the spreadsheet. The cell that each piece of data belongs to is hardcoded, so any time you need to add anything to the middle of the spreadsheet, you have to increment the location of all the fields after it in the code. There are random blank rows to add padding between sections, and subsections within the sections, so there's no real pattern that we can replicate. Essentially, any time we have to add or change anything in the spreadsheet it requires many long and tedious hours. The code is all in this one large file, hacked together over time in Perl. I've come up with a few OO solutions, but I'm not too familiar with OO programming in Perl and all my attempts at it haven't been great, so I've shied away from it so far. I've suggested we handle this section of the program with a more OO-friendly language, but apparently we can't. I've also suggested that we scrap the entire spreadsheet idea and just move to a webpage, but we can't do that either. We've been working on this project for a few months, and every time we have to change that file, we all dread it. I'm thinking it's time to start some refactoring. However, I don't even know what could make this file easier to work with. The way the output is formatted makes it so that it has to be somewhat hardcoded. I'm wondering if anyone has insight on any design patterns or techniques they have used to tackle a similar problem. I'm open to any ideas. Perl-specific answers are welcome, but I am also interested in language-agnostic solutions.
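
    One language-agnostic direction, shown here as a rough Python sketch only because the idea matters more than the language: describe the spreadsheet as an ordered list of sections, each with named fields and its own padding, and derive the row numbers from that description instead of hardcoding them. Inserting a field then only touches the section definition. The section names and padding values below are invented for illustration.

        # Layout described as data: sections in order, each with named fields and
        # trailing blank rows. Row numbers are derived, never hardcoded.
        LAYOUT = [
            {"name": "summary",  "fields": ["title", "owner", "date"],        "padding": 2},
            {"name": "revenue",  "fields": ["q1", "q2", "q3", "q4", "total"], "padding": 1},
            {"name": "expenses", "fields": ["staff", "hardware", "total"],    "padding": 0},
        ]

        def build_row_map(layout):
            """Map (section, field) -> 1-based row number, honouring the padding rows."""
            rows, current = {}, 1
            for section in layout:
                for field in section["fields"]:
                    rows[(section["name"], field)] = current
                    current += 1
                current += section["padding"]    # blank separator rows
            return rows

        rows = build_row_map(LAYOUT)
        print(rows[("revenue", "total")])    # moves automatically if a field is inserted above it

    The same shape translates directly to a Perl hash/array structure, so the existing writer could keep its output format while losing the hardcoded coordinates.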

    Read the article

  • Source code not matching uploaded HTML file

    - by benhowdle89
    I'm not sure if this is the right place to ask, but I'm having a hugely frustrating problem with Coda and my website (I'm not sure which one is causing the issue). I'm using Coda to make changes to my website; Coda uses built-in FTP to save changes to your web page, so when you hit Save, it uploads the new file. I've been using Coda for months and never had a problem until now. I am making changes in the HTML of my index.php and hitting Save. It successfully uploads the file, but no changes are reflected in the source code in ANY browser. I even logged into cPanel on my website, i.e. www.example.com:2082, and looked at the file - the changes have been made successfully. But in the actual webpage's source code in the browser, no changes?? I have tried adding which made no difference. Interestingly, when I make changes to style.css the changes are instant. I have emptied the cache on all of my browsers but I'm still having the issue. Does this sound like a Coda problem, or has anyone heard of such a thing?

    Read the article

  • Strategy for hosting 700+ domain names, each with a static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript webpage for each domain. Is there a system/strategy/workflow that will allow me to:

    1. Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...

    2. It's easy to manage CNAMEs on the registrar side, but is there a way to quickly associate CNAMEs with new websites (on the hosting side), maybe using something like the gh-pages approach (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. gh-pages isn't a good solution for what I'm doing though, unfortunately.

    I'm also a front-end developer, specializing in rapid web development and "front-end build systems", so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?

    Read the article

  • Web application interacts bi-directionally with server program?

    - by Roelof Berkepeis
    I want to write a web application to play chess against the engine Crafty. I'm not new to PHP and JavaScript, but I must learn how to interact with a server process: how can a web application and/or (jQuery) ajax interact bi-directionally with a (Linux) program running on the server? At this moment I am developing on an (Apache) localhost. Crafty is installed on my Ubuntu PC. This well-known chess engine has no GUI; it runs in a terminal via the command $ /usr/games/crafty, and so you can play chess against it and even see its calculations. I can make Crafty run from PHP using the functions proc_open() or exec(), and most documentation I found states that the output stream should be a file... But I think I don't want such a setup, because then the webpage would have to constantly poll that file (e.g. by ajax) to see if new data was appended, right? How can Crafty talk to the web page directly, saying "I have calculated another variation" or "I have decided on a move" etc., then display this info on the web page and let the user give some counter move, just like in the terminal? Isn't it possible to use some session / stream / listener? I have no clue at all; can anybody point me in the right direction?
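
    In PHP this is normally done with proc_open() and pipes rather than a file; purely as a language-neutral illustration of the pattern, here is a minimal Python 3 sketch that keeps the engine running as a long-lived child process and talks to it over stdin/stdout. The path to the crafty binary is the one from the question; the command sent to the engine and the way it announces its move are assumptions.

        import subprocess

        # Start the engine once and keep it running; communicate over pipes, not files.
        engine = subprocess.Popen(
            ["/usr/games/crafty"],    # path from the question
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
            bufsize=1,                # line-buffered
        )

        def send(command):
            """Write one command to the engine's stdin (e.g. a move)."""
            engine.stdin.write(command + "\n")
            engine.stdin.flush()

        send("e4")    # placeholder move

        # Read the engine's replies as they arrive - this is the 'listener' part.
        for line in engine.stdout:
            print("engine:", line.rstrip())
            if "move" in line.lower():    # assumption about how the engine reports its move
                break

        engine.terminate()

    In a web application the same idea usually sits behind a small server-side process plus WebSockets or long polling, so the browser receives the engine's lines as they are produced instead of polling a file.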

    Read the article

  • Brightness going up to 100% on loading certain websites in Chrome

    - by picheto
    I'm using Google Chrome version 21.0.1180.89 on Ubuntu 12.04 and my laptop is a Sony VAIO VPCCW15FL (spec sheet). My video driver is the proprietary "NVIDIA accelerated graphics driver (post-release updates) (version-current updates)". After installing Ubuntu, I discovered that neither the brightness control buttons (hardware) nor the brightness slider (software) worked, and found out I could get the hardware buttons to work by installing the nvidiabl.deb package and the oBacklight script. I'm using nvidiabl-dkms 0.77 and oBacklight 0.3.8. Still, the slider in the Ubuntu "Settings" does not work, but I don't care. There is an annoying thing happening when loading certain pages in Google Chrome: the brightness goes up to 100% when loading the webpage or when leaving it (closing the tab or typing a different URL in the omnibox). However, the "brightness tooltip" (the default brightness notification) remembers the position it was set to, so if I adjust the brightness with the hardware buttons, the level gets adjusted relative to the value it was set to before "going 100%". I disabled the Flash PPAPI plugin but left the NPAPI plugin enabled, and the problem went away for pages with Flash content. Still, the same thing happens when viewing HTML5 video, or when loading, for example, the Chrome Web Store or using the Scratchpad extension. I suppose it has to do with the rendering of certain elements using the GPU, but this is just a guess. This brightness thing does not happen when using Firefox 15.0 or any other application I have used yet. Does anybody know why this may be happening and what I could do to fix it without changing browsers? Thanks a lot.

    Read the article

  • Is there a way to save an MS Word document as HTML w/o the MS proprietary stuff?

    - by sequoia mcdowell
    So normally I wouldn't use this feature ("Save as Web Page"), but I have large documents from clients that they just want put on their site as HTML, and formatting it all by hand seems like a waste of time. I have tried "Save as Web Page" in Word 2007, but it produces all sorts of bad stuff. To wit:

        <b style='mso-bidi-font-weight:normal'>
        <span style="mso-spacerun: yes">

    as well as a large block of XML formatting info:

        <!--[if gte mso 9]><xml>
         <o:DocumentProperties>
          <o:Subject> </o:Subject>
          <o:Author> </o:Author>
          <o:Keywords> </o:Keywords>
        ...

    As I said, formatting it all by hand seems like a waste of time, but the way MS exports currently just has too much cruft. Is there a way to export an MS Word doc as HTML without all this?
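
    If the built-in export is the only option, one rough after-the-fact approach is to strip the Office-specific markup from the exported file. This is only a sketch, assuming Python 3 and a handful of regular expressions; the patterns and filenames are illustrative rather than exhaustive, and messy documents may need a proper HTML parser or a dedicated cleanup tool instead.

        import re

        def strip_word_cruft(html):
            """Remove common Word 'Save as Web Page' artifacts from an HTML string."""
            # Conditional comments like <!--[if gte mso 9]> ... <![endif]-->
            html = re.sub(r"<!--\[if [^\]]*\]>.*?<!\[endif\]-->", "", html, flags=re.DOTALL)
            # Office namespace tags such as <o:DocumentProperties> ... </o:Keywords>
            html = re.sub(r"</?[a-z]+:[^>]*>", "", html, flags=re.IGNORECASE)
            # mso-* declarations inside style attributes
            html = re.sub(r"mso-[^:;\"']+:[^;\"']+;?", "", html)
            # Style attributes left completely empty by the previous step
            html = re.sub(r"\s*style=([\"'])\s*\1", "", html)
            return html

        with open("exported.htm", encoding="utf-8", errors="ignore") as fh:   # placeholder filename
            cleaned = strip_word_cruft(fh.read())
        with open("cleaned.html", "w", encoding="utf-8") as fh:
            fh.write(cleaned)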

    Read the article

  • Blogger Blog Takes Ages to Load after Custom Domain Redirection

    - by abhisek
    I recently bought a custom domain for a Blogger blog (technabled.com) that I have had for some time now. I followed the instructions in Blogger's documentation: I added the A records and CNAME records with my DNS provider. But now some strange problems are cropping up. If I connect to my broadband network and then ping technabled.com, it times out. Then, if I visit the webpage, which takes almost a minute and a half to load, and ping technabled.com afterwards, it shows the expected result. This is not just me; I asked some of the regular readers, who reported the same issue. As a result of this, I am losing a lot of visits. What is stranger is that subsequent visits to the blog are faster. I have checked with a few online services to test the performance. WebPageTest seems to say the same thing: http://www.webpagetest.org/result/110117_1N_7PE/ (please see the First View / Repeat View times). Also, the PageSpeed score is not that bad, so I am ruling out other possibilities. I am at a loss as to what I should do to find a solution. Help is much appreciated. :)

    Read the article

  • Website migration is not working for all computers

    - by Shadowizoo
    We have 2 servers on the same network, Server-A and Server-B. On Server-A (Windows Server 2003) we have IIS 5.2, and our website was hosted on it until a few months ago (about 7-8 months). We bought a new server, Server-B (Windows Server 2008) with IIS 7.5, and copied our old website to this new machine. On our router, we forward port 80 to Server-B. Server-A is still on because we need to access some old data through our old website; I would like to access it via its internal IP (192.168.1.xxx/mywebsite). On my Windows 7 computer, if I type www.example.com or example.com (without www.), I am redirected to Server-B and I can see our new interface. On some Windows Vista computers, example.com (without www.) redirects to Server-B, but if I type www.example.com, I still land on Server-A. In our website code (on Server-B) we sometimes redirect with a "www.", so this causes errors, because we end up requesting a page that exists on Server-B but not on Server-A, and because www.example.com still resolves to Server-A on those machines. I compared 2 computers with Vista Home on them and the Internet Options look the same. I cannot figure out why this is happening.

    Read the article

  • What are the common techniques to handle user-generated HTML modified differently by different browsers?

    - by Jakie
    I am developing a website updater. The front end uses HTML, CSS and JavaScript, and the backend uses Python. The way it works is that <p/>, <b/> and some other HTML elements can be updated by the user. To enable this, I load the webpage and, with jQuery, convert all those elements to <textarea/> elements. Once the content of the textarea is changed, I apply the change to the original elements and send it to a Python script to store the new content. The problem is that I'm finding that different browsers change the original HTML. How do you get around this issue? What Python libraries do you use? What techniques or application designs do you use to avoid or overcome this issue?

    The problems I found are:
    - IE removes the quotes around class and id attributes. For example, <img class='abc'/> becomes <img class=abc/>.
    - Firefox removes the backslash from the line breaks: <br \> becomes <br>.
    - Some websites have very specific display technicalities, so the insertion of a simple "\n" (which IE does) can affect the display of a website. Example: changing <img class='headingpic' /><div id="maincontent"> to <img class='headingpic'/>\n <div id="maincontent"> inserts a vertical gap in IE.

    The things I have unsuccessfully tried to overcome these issues:
    - Using either jQuery or Python to remove all >\n< occurrences, <br> etc. But this fails because I get different patterns in IE, sometimes a ·\n, sometimes a \n···.
    - In Python, parse the new HTML, extract the new text/content, and insert it into the old HTML so the elements and format never change, just the content. This is very difficult and seems to be overkill.
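
    One way to sidestep browser-specific serialization quirks is to never store the browser's markup verbatim: parse whatever comes back and re-serialize it in one canonical form before writing it into the stored HTML. A minimal sketch with Python's standard-library html.parser, assuming the edited fragments contain only simple inline markup; anything heavier would usually reach for a full parsing library.

        from html.parser import HTMLParser

        class Normalizer(HTMLParser):
            """Re-serialize a fragment with consistent quoting, ignoring browser quirks."""
            def __init__(self):
                super().__init__()
                self.out = []

            def handle_starttag(self, tag, attrs):
                # Always emit double-quoted attributes, whatever the browser sent back.
                rendered = "".join(' %s="%s"' % (k, v if v is not None else "") for k, v in attrs)
                self.out.append("<%s%s>" % (tag, rendered))

            def handle_endtag(self, tag):
                self.out.append("</%s>" % tag)

            def handle_data(self, data):
                self.out.append(data)

        def normalize(fragment):
            parser = Normalizer()
            parser.feed(fragment)
            parser.close()
            return "".join(parser.out)

        # An IE-style unquoted attribute comes out the same as the quoted original:
        print(normalize("<img class=abc>"))     # <img class="abc">
        print(normalize("<img class='abc'>"))   # <img class="abc">

    Because the stored markup is always produced by the same serializer, whitespace and quoting differences introduced by IE or Firefox never reach the saved page.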

    Read the article

  • Google Analytics Social Tracking implementation. Is Google's example correct?

    - by s_a
    The current Google Analytics help page on social tracking (developers.google.com/analytics/devguides/collection/gajs/gaTrackingSocial?hl=es-419) links to this page with an example of the implementation: http://analytics-api-samples.googlecode.com/svn/trunk/src/tracking/javascript/v5/social/facebook_js_async.html I've followed the example carefully, yet social interactions are not registered. This is the webpage with the non-working setup: http://bit.ly/1dA00dY (obscured domain as per Google's Webmaster Central recommendations for their product forums).

    This is the structure of the page. In the head:
    - the ga async code copied from the Analytics page
    - a script tag linking to ga_social_tracking.js, stored in the same domain
    - the Twitter JS loading tag

    In the body:
    - the fb-root div
    - the Facebook async loading JS, including the _ga.trackFacebook(); call
    - the social buttons afterwards, like so: a Like button (with the proper URL) and a Tweet button (with the proper handle)

    That's it. As far as I can tell, I have implemented it exactly like in the example, but likes and tweets aren't registered. I have also altered ga_social_tracking.js to register the social interactions as events, adding the code below. It doesn't work either. What could be wrong? Thanks!

    Code added to ga_social_tracking.js:

        var url = document.URL;
        var category = 'Social Media';

        /* Facebook */
        FB.Event.subscribe('edge.create', function(href, widget) {
            _gaq.push(['_trackEvent', category, 'Facebook', url]);
        });

        /* Twitter */
        twttr.events.bind('tweet', function(event) {
            _gaq.push(['_trackEvent', category, 'Twitter', url]);
        });

    Read the article

  • Web hosting company basically forces me to use their domain name [closed]

    - by Jinx
    I've recently stumbled upon an unusual problem with one of the hosting companies, called giga-international.com. I've ordered a com.hr domain from a Croatian domain name registration company, and my client insisted on using this hosting provider as a couple of his friends are already hosted with them. I thought something was fishy when the first result on Google for Giga International was this little forum rant instead of their webpage. When I was checking their services they listed many features etc.: space available, bandwidth etc. I just wanted to check how much RAM I get for my PHP scripts, so I emailed them, and they told me that was a company secret. Seriously? Anyway, since my client still insisted on hosting with them, I've bought their Webspace package. During registration I had to choose a free domain name because I couldn't advance the registration without it. Nowhere was it said, not even in the general terms and conditions, that I wouldn't be able to change that domain name - at least not for double the price of a domain name per year. They said I can either move my domain name over to them (and pay them for domain registration), or pay them 1 Euro per month for managing a DNS entry. On every previous hosting solution I was able to manage my domain names just by pointing my domain to their name servers, and this is something completely new and absurd for me. They also said that the usual approach is not possible because of security and hardware limitations. I'd like to know what you guys think about this case, and whether and where I should report it. In short: they forced me to register a free domain name which doesn't suit my needs in order to register for their webspace package, and refuse to change the domain name for my account until I either transfer my domain to them or pay them for DNS management, which costs double the price of the domain name per year.

    Read the article
