Search Results

Search found 4185 results on 168 pages for 'webpage screenshot'.


  • Is it possible to hide the cursor in a webpage using CSS or Javascript?

    - by yeyeyerman
    I want to hide the cursor when showing a webpage that is meant to display information in a building hall. It doesn't have to be interactive at all. I tried the cursor property with a transparent cursor image, but I couldn't make it work. Does anybody know if this can be done? I suppose this could be seen as a security threat, since a user wouldn't know where he is clicking, so I'm not very optimistic... Thank you!
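
    A minimal sketch of the usual approach: modern engines accept the cursor: none keyword, and older ones can fall back to a transparent cursor image (the transparent.cur asset here is a placeholder you would have to supply):

        /* hide the pointer everywhere on the page */
        body {
            cursor: none;
        }

        /* fallback for engines without 'none': a 1x1 transparent .cur,
           with a required generic keyword after the url() */
        body.legacy {
            cursor: url('transparent.cur'), default;
        }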

    Read the article

  • PHP: get the contents of a remote webpage that requires authentication?

    - by powerboy
    I receive many emails whose contents follow the same pattern, e.g. Name: XXX Phone: XXX Bio: XXX ...... I want to write a PHP script to parse these emails and save the data into a database. I tried to use file_get_contents() to get the contents of these emails. The problem is that accessing my emails requires authentication. Even though I have signed into my email account on the server (localhost actually), file_get_contents() still returns the webpage that prompts me to sign in. So how do I access the contents of my emails with a PHP script? EDIT: it is a gmail account.
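
    Since it is a Gmail account, fetching the mail over IMAP is usually easier than scraping the web interface. A rough sketch using PHP's imap extension (credentials and the search filter are placeholders, and IMAP access must be enabled in the Gmail settings):

        <?php
        // requires the php imap extension
        $mbox = imap_open('{imap.gmail.com:993/imap/ssl}INBOX',
                          'user@gmail.com', 'password');

        // hypothetical filter for the form emails
        $ids = imap_search($mbox, 'SUBJECT "Contact form"');

        foreach ($ids ?: array() as $id) {
            $body = imap_fetchbody($mbox, $id, '1');  // plain-text part
            // pull out the "Name: XXX" style fields with a regex
            if (preg_match('/^Name:\s*(.+)$/m', $body, $m)) {
                $name = trim($m[1]);
                // ... insert into the database ...
            }
        }
        imap_close($mbox);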

    Read the article

  • All browsers refusing to load a specific image on a webpage?

    - by Johnson
    Out of nowhere today, all 3 of my browsers (FF/Chrome/IE, OS = Win7 x64) are refusing to load the homepage of interfacelift.com correctly. It works fine on other PCs in the house (on the same network), so it is definitely related to this one PC. The browser won't load the main image on the page correctly (even though the source code looks good), but if I point the browser at the exact location of that image, it displays fine. So I can get the HTML index (which locates the resource) and I can get to the resource. So why the heck isn't it displaying properly on the index page? It's almost as if the HTML rendering engine has gone bad, in all 3 browsers at once. I've browsed to a bunch of other sites (including sites heavy on JS, with HTML much more complex than the one in question here) and am seeing nothing funny. The only thing wonky I've done with my PC in the past several hours was replacing the system file Magnifier.exe with a copy of cmd.exe while playing around with some of the ideas mentioned in this guide. However, I've since restored the files to their previous state, and I don't see how Magnifier would be related to this even if I hadn't. Any ideas? I'm stumped! EDIT: Here is what the broken page looks like in Chrome. And here is the image loaded correctly by itself.

    Read the article

  • What tools can be used to download all images in a webpage?

    - by bobo
    I would like to download all images in a web page. The tool should be smart enough to examine the CSS and JavaScript files in the page source to look for the images. Ideally, it should also replicate the folder hierarchy, saving the images in the correct folders. For example, the web page may have images for menu items stored in images/menu/ while background images are stored in images/bg/. Is there such a tool that you know of? (Preferably on Windows, but Linux is still OK.) Many thanks to you all.
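
    One possible starting point is wget, which can follow url() references inside CSS (since version 1.12) and mirrors the server's folder hierarchy by default; the URL below is a placeholder:

        # -p  fetch page requisites: images, stylesheets, scripts
        # -k  convert links so the local copy is browsable
        # -E  append .html where the server omits extensions
        # -H  span hosts, in case assets live on a CDN
        wget -p -k -E -H http://example.com/page.html

    Note that wget does not execute JavaScript, so images referenced only from scripts will still be missed.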

    Read the article

  • Is there a real code-free webpage design tool?

    - by Sefler
    Recently I was doing web design. I found that the current web design tools (like Expression Web, Dreamweaver) are terribly coupled with code. Though I can manage HTML, CSS and many other things, I found those tools not free enough when it came to design. What I want is a totally code-free design tool with which I can draw the layout, paste pictures, add text and so on. It doesn't need functionality to convert the design into code, because I can do that myself. That is to say, I need the software to create a blueprint for me. I'm currently using Photoshop for this; however, it is too clumsy at displaying the layout (it can't show the width and many other attributes; I have to draw them myself). Can you find one for me? Thanks in advance.

    Read the article

  • Why does my webpage look different when I connect using different routers? Do routers cache files?

    - by Ayyash
    Here is the case: I work on a site from the office and from home. I recently updated the stylesheets and logged into the live site from the office (using the same laptop I use all the time), and everything looks okay. I come home and use my home internet connection to connect to the site with the SAME laptop, and the styles are not updated! The thing is: this happens in ALL browsers, after emptying the cache many times, even after a month of work, and even in a browser that has never opened the site before (as if my router had a cache of its own). Another thing: only one particular styles.css file seems to be stale. Extra info: I use the same IP for my home wireless router as the one defined in the office, the usual 192.168.0.1.
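
    Consumer routers don't normally cache HTTP responses, but an ISP-side transparent proxy can. One way to see where the stale copy comes from is to compare the response headers for that one file from each connection (the URL is a placeholder):

        # inspect caching-related headers for the stylesheet
        curl -I http://example.com/styles.css
        # An Age, Via or X-Cache header in the response usually points
        # at an intermediate proxy cache between you and the server.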

    Read the article

  • Accessing a webpage folder with .htaccess in it via Apache WebDAV?

    - by pingo
    I have set up WebDAV access so that an external user can upload the content of his web page to his folder on my server, which Apache serves to the web. This way he can update his web page via WebDAV. The problem is that the user requires a .htaccess file, and of course the .htaccess breaks WebDAV, probably because it overrides settings (new files can no longer be uploaded via WebDAV once the .htaccess below exists). I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to the problem. The idea was to define an alias to the web page folder, enable WebDAV there, and set AllowOverride to None so that the .htaccess would have no effect. Of course, I then found out that the AllowOverride directive is not valid inside <Location>. The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like the web page to be accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? If possible, I would also like a solution that lets the .htaccess stay, so that the user keeps the power to set up access rules for his web page.
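
    One hedged way to keep the .htaccess for normal web traffic while ignoring it for uploads is to serve WebDAV from a dedicated vhost: a <Directory> section defined inside a <VirtualHost> applies only to requests handled by that vhost, and AllowOverride is valid there. A sketch using the paths from the question (the port is an assumption, and this is untested):

        Listen 8080
        <VirtualHost *:8080>
            DocumentRoot "d:/wamp/www/somewebsite"
            <Directory "d:/wamp/www/somewebsite">
                Dav On
                AllowOverride None      # .htaccess ignored for DAV requests only
                Order Allow,Deny
                Allow from all
                AuthType Digest
                AuthName DAV-upload
                AuthUserFile "D:/wamp/passtore/user.passwd"
                AuthDigestProvider file
                Require valid-user
            </Directory>
        </VirtualHost>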

    Read the article

  • Why isn't the background image showing up on my webpage?

    - by William
    Okay, so I'm trying to set up a webpage with a div wrapping two other divs; the wrapper div has a background, and the other two are transparent. How come this isn't working? Here is the CSS:

        .posttext {
            float: left;
            width: 70%;
            text-align: left;
            padding: 5px;
            background-color: !important #transparent;
        }

        .postavi {
            float: left;
            width: 100px;
            height: 100%;
            text-align: left;
            background-color: #transparent;
            padding: 5px;
        }

        .postwrapper {
            background-image: url('images/post_bg.png');
            background-position: left top;
            background-repeat: repeat-y;
        }

    and here is the HTML:

        <div class="postwrapper">
            <div class="postavi">
                <img src="http://prime.programming-designs.com/test_forum/images/avatars/hacker.png" alt="hacker"/>
            </div>
            <div class="posttext">
                <p style="color: #ff0066">You will have bad luck today.</p>lol
            </div>
        </div>
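
    Two likely problems, offered as a hedged diagnosis: transparent is a keyword, not a hex color, so #transparent (and !important placed before the value) makes those declarations invalid; and since both children are floated, the wrapper collapses to zero height, leaving its background nothing to paint onto. A possible fix:

        .posttext {
            background-color: transparent;   /* keyword, no '#'; '!important' goes after the value */
        }

        .postwrapper {
            background: url('images/post_bg.png') left top repeat-y;
            overflow: hidden;                /* makes the wrapper contain its floated children */
        }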

    Read the article

  • C# - WebBrowser control seems to cache screenshots

    - by Justin
    Hey, I'm using the WebBrowser control in an ASP.NET MVC 2 app (don't judge, I'm doing it in an admin section only to be used by me); here's the code:

        public static class Screenshot
        {
            private static string _url;
            private static int _width;
            private static byte[] _bytes;

            public static byte[] Get(string url)
            {
                // This method gets a screenshot of the webpage
                // rendered at its full size (height and width)
                return Get(url, 50);
            }

            public static byte[] Get(string url, int width)
            {
                //set properties.
                _url = url;
                _width = width;

                //start screen scraper.
                var webBrowseThread = new Thread(new ThreadStart(TakeScreenshot));
                webBrowseThread.SetApartmentState(ApartmentState.STA);
                webBrowseThread.Start();

                //check every second if it got the screenshot yet.
                //i know, the thread sleep is terrible, but it's the secure section, don't judge...
                int numChecks = 20;
                for (int k = 0; k < numChecks; k++)
                {
                    Thread.Sleep(1000);
                    if (_bytes != null)
                    {
                        return _bytes;
                    }
                }
                return null;
            }

            private static void TakeScreenshot()
            {
                try
                {
                    //load the webpage into a WebBrowser control.
                    using (WebBrowser wb = new WebBrowser())
                    {
                        wb.ScrollBarsEnabled = false;
                        wb.ScriptErrorsSuppressed = true;
                        wb.Navigate(_url);
                        while (wb.ReadyState != WebBrowserReadyState.Complete)
                        {
                            Application.DoEvents();
                        }

                        //set the size of the WebBrowser control.
                        //take Screenshot of the web page's full width.
                        wb.Width = wb.Document.Body.ScrollRectangle.Width;
                        //take Screenshot of the web page's full height.
                        wb.Height = wb.Document.Body.ScrollRectangle.Height;

                        //get a Bitmap representation of the webpage as it's rendered in the WebBrowser control.
                        var bitmap = new Bitmap(wb.Width, wb.Height);
                        wb.DrawToBitmap(bitmap, new Rectangle(0, 0, wb.Width, wb.Height));

                        //resize.
                        var height = _width * (bitmap.Height / bitmap.Width);
                        var thumbnail = bitmap.GetThumbnailImage(_width, height, null, IntPtr.Zero);

                        //convert to byte array.
                        var ms = new MemoryStream();
                        thumbnail.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                        _bytes = ms.ToArray();
                    }
                }
                catch (Exception exc)
                {
                    //TODO: why did screenshot fail?
                    string message = exc.Message;
                }
            }
        }

    This works fine for the first screenshot that I take, but if I try to take subsequent screenshots of different URLs, it saves a screenshot of the first URL for the new URL, or sometimes it saves the screenshot from 3 or 4 URLs ago. I'm creating a new instance of WebBrowser for each screenshot and disposing of it properly with the "using" block; any idea why it's behaving this way? Thanks, Justin
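
    A hedged observation rather than a verified fix: the static _bytes field is never reset between calls, so a second call to Get() can find the previous screenshot's bytes still sitting there and return them before the new browser thread finishes. Clearing the state up front should rule that out:

        public static byte[] Get(string url, int width)
        {
            _bytes = null;   // clear the previous result so the polling loop
                             // can't immediately return a stale screenshot
            _url = url;
            _width = width;
            // ... start the STA thread and poll for _bytes as before ...
        }

    Since all the fields are static, two overlapping requests will still trample each other; instance fields (or passing the URL into the thread) would be safer.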

    Read the article

  • Why is a link generated in YUI JavaScript failing to render in Rails?

    - by pmneve
    Using YAHOO.widget.treeview to generate a table with three levels of data: module, submodule, and detail. If there is an image associated with a detail row, the JavaScript generates a link:

        "<td><a href=\"/screenshot/show/" + rowData.id + "\">Screenshot</a></td>"

    that is appended to the HTML for the row. The URL is generated correctly and the link appears. When clicked, nothing happens except that the word 'Done' appears in the browser status bar. I am calling the very same URL from another page that does not use JavaScript, and the screenshot page appears as expected. Here is the controller:

        class ScreenshotController < ApplicationController
          def show
            if @detail.screen_path.length > 1
              @imagePath = "#{RAILS_ROOT}" +
                           "/private/#{Company.find(@detail.company_id).subdir}/" +
                           "#{Project.find(@detail.project_id).subdir}/screenshot/" +
                           "#{@detail.screen_path}"
              send_file(@imagePath, :type => 'image/jpeg', :disposition => 'inline')
            end
          end
        end

    A sample URL: http://localhost:3004/screenshot/show/20854 This code from show.html.erb belonging to the detail model works:

        <%= link_to 'View', :controller => 'screenshot', :id => @detail.id, :action => 'show' %>

    Any ideas???

    Read the article

  • In an HLSL pixel shader, why is SV_POSITION different from other semantics?

    - by tina nyaa
    In my HLSL pixel shader, SV_POSITION seems to have different values from any other semantic I use. I don't understand why this is. Can you please explain it? For example, I am using a triangle with the following coordinates:

        (0.0f, 0.5f)
        (0.5f, -0.5f)
        (-0.5f, -0.5f)

    The w and z values are 0 and 1, respectively. Here is the shader:

        struct VS_IN
        {
            float4 pos : POSITION;
        };

        struct PS_IN
        {
            float4 pos : SV_POSITION;
            float4 k : LOLIMASEMANTIC;
        };

        PS_IN VS( VS_IN input )
        {
            PS_IN output = (PS_IN)0;
            output.pos = input.pos;
            output.k = input.pos;
            return output;
        }

        float4 PS( PS_IN input ) : SV_Target
        {
            // screenshot 1
            return input.pos;
            // screenshot 2
            return input.k;
        }

        technique10 Render
        {
            pass P0
            {
                SetGeometryShader( 0 );
                SetVertexShader( CompileShader( vs_4_0, VS() ) );
                SetPixelShader( CompileShader( ps_4_0, PS() ) );
            }
        }

    Screenshot 1: http://i.stack.imgur.com/rutGU.png Screenshot 2: http://i.stack.imgur.com/NStug.png (Sorry, I'm not allowed to post images until I have a lot of 'reputation'.) When I use the first return statement (the result is the first screenshot), the one that uses the SV_POSITION semantic, the result is completely unexpected and yellow, whereas any other semantic produces the expected result. Why is this?

    Read the article

  • Moving screenshots from one folder to another instantly

    - by Frank
    I am hosting a gaming site, and once in a while the server automatically creates a screenshot in the server's folder. Unfortunately, where these screenshots go is NOT configurable in the server settings; it always dumps the PNG file in the server config folder. My folder structure is, for example: /home/Game/Server1/. Now, what I would like to achieve is that once the server creates a screenshot in one of these server folders (I have multiple), the operating system moves the screenshot IMMEDIATELY (it is ALWAYS a *.png file) to the webserver folder, for example /var/www/Server1/filename.png, so that players can see the screenshot on the website. Anyone any idea how I can tackle this problem the smartest way? Please note that my ideal situation would be the PNG file being moved immediately after creation. Thanks for your help. Frank
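
    On Linux, inotify gives near-instant notification of new files. A sketch using inotifywait from the inotify-tools package (paths are taken from the question; watching close_write rather than create avoids moving half-written files):

        #!/bin/sh
        # move each finished .png from the game server folder to the web root
        inotifywait -m -e close_write --format '%f' /home/Game/Server1 \
        | while read f; do
            case "$f" in
                *.png) mv "/home/Game/Server1/$f" "/var/www/Server1/$f" ;;
            esac
        done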

    Read the article

  • C# Threading Background Process - Programming - How to?

    - by Magic
    Hello... I have been given the horrible task of doing this:

        1. Launch the website
        2. Take a screenshot
        3. Fill in the form details, click on Next
        4. Take a screenshot
        ...
        Rinse. Repeat.

    Now, with various combinations, this comes to 300 screenshots, and I have to do this for 4 different browsers: Chrome, Firefox, IE 6 and IE 7. I cannot use tools that capture and store screenshots, such as SnagIt. I need to take a screenshot and copy it into a Word document, then take the second screenshot and copy that into the Word document. I thought I would write a tiny utility to help me do this. Here is the requirement spec I put up for it: an executable which, once launched, sits in the system tray; while it is active, every Print Scrn key press writes the captured screen to a Word document at a defined path (either a default or a user-defined one); the document is saved periodically. Now, my question is: if I am going to develop this in C# (a WinForms application), how do I go about it? I can do a fair bit of C# programming and I am willing to learn, but I am not able to locate references on how to run a background process and have it capture the Print Scrn command while it runs. Can you folks point me to the right material? Theoretical references should suffice, but practical references would be even better. Thanks!
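
    For the global Print Screen capture, one option is the Win32 RegisterHotKey API (a NotifyIcon would give the tray presence). A hedged WinForms sketch, untested; the clipboard-to-Word step is left as a comment because the Word interop details depend on your setup:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        public class HotKeyForm : Form
        {
            [DllImport("user32.dll")]
            static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

            [DllImport("user32.dll")]
            static extern bool UnregisterHotKey(IntPtr hWnd, int id);

            const int WM_HOTKEY = 0x0312;
            const uint VK_SNAPSHOT = 0x2C;   // the Print Scrn key

            public HotKeyForm()
            {
                // no modifier: fire on a bare Print Scrn press
                RegisterHotKey(Handle, 1, 0, VK_SNAPSHOT);
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_HOTKEY)
                {
                    // Print Scrn was pressed: grab the clipboard image here and
                    // append it to the Word document (e.g. via the
                    // Microsoft.Office.Interop.Word assembly), then save.
                }
                base.WndProc(ref m);
            }

            protected override void Dispose(bool disposing)
            {
                UnregisterHotKey(Handle, 1);
                base.Dispose(disposing);
            }
        }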

    Read the article

  • Ubuntu server upgrade 11.04 to 11.10 fails

    - by DLosc
    I'm actually trying to upgrade my Ubuntu 11.04 server to 12.04, but I have to go through 11.10 first, right? Well, do-release-upgrade is failing miserably. Here are some representative screenshots: Screenshot 1 Screenshot 2 Screenshot 3 Screenshot 4 And finally... Yeah. I found this question, which appears to have similar errors, but I've tried all of the suggestions given there and nothing has changed. I tried running apt-get dist-upgrade; it churned for a while and eventually came back with "Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?" I ran apt-get update and got much the same kinds of error messages as from do-release-upgrade. Any ideas/suggestions/solutions? Should I try downloading and upgrading from the CD instead? I'm glad to provide any further information I've forgotten.
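
    Generic first-aid steps that are often suggested for a wedged upgrade, offered as a hedged checklist rather than a verified fix for these particular errors:

        sudo apt-get clean          # discard possibly corrupt downloaded packages
        sudo apt-get update         # refresh the package lists
        sudo apt-get -f install     # let apt try to repair broken dependencies
        sudo dpkg --configure -a    # finish any half-configured packages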

    Read the article

  • HTML: How to create a DIV with only a vertical scroll-bar to show long paragraphs on a webpage?

    - by Awan
    I want to show a terms and conditions note on my website. I don't want to use a text field, and I also don't want to use the whole page. I just want to display my text in a selected area, with only a vertical scroll-bar to go down and read all the text. Currently I am using this code:

        <div style="width:10;height:10;overflow:scroll">
            text text text text text text text text text text text text
            text text text text text text text text text text text text
            text text text text text text text text text text text text
            text text text text text text text text text text text text
            text text text text text text text text text text text text
            text text text text text text text text text text text text
        </div>

    It is not fixing the width and height; the div spreads until all the text appears. Second, it shows a horizontal scroll-bar, and I don't want one. Any idea? Thanks
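
    A hedged diagnosis: 10 without a unit is not a valid CSS length, so the width and height declarations are dropped and the div grows to fit its content. Giving the box real dimensions and using the overflow-x/overflow-y pair keeps the scrolling vertical only (the pixel values here are arbitrary):

        <div style="width: 400px; height: 150px; overflow-y: scroll; overflow-x: hidden;">
            text text text ...
        </div>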

    Read the article

  • How to simulate a fake MouseOver on a Flash applet in a webpage?

    - by Mason Wheeler
    I listen to internet radio at http://player.play.it/player/player.htm and it works pretty well, except for one minor issue. The Flash applet that runs the radio player has a timer on it, where if you don't move the mouse over the player every once in a while, it decides you're idle and shuts off the stream, even if you're not actually idle, but just working on something else with the radio player running in the background. Is there any way I can send a fake MouseOver message to this applet to keep it from cutting me off in the middle of a song, maybe with a GreaseMonkey script? I'm using Firefox.
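
    A Greasemonkey script can synthesize mouse events, though whether the Flash applet honors them is uncertain: plugins often watch native OS input rather than DOM events. A sketch, assuming the player is the page's first object/embed element (that selector is an assumption):

        // ==UserScript==
        // @name     fake-mouseover
        // @include  http://player.play.it/player/player.htm*
        // ==/UserScript==

        var player = document.querySelector('object, embed');

        setInterval(function () {
            if (!player) return;
            // synthesize a mouseover roughly every 30 seconds
            var evt = document.createEvent('MouseEvents');
            evt.initMouseEvent('mouseover', true, true, window,
                               0, 0, 0, 0, 0, false, false, false, false, 0, null);
            player.dispatchEvent(evt);
        }, 30000);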

    Read the article

  • How can a Firefox extension inject a local css file into a webpage?

    - by Evgeny Shadchnev
    I'm writing a Firefox extension that needs to inject a CSS file into webpages. The CSS file is bundled with the extension, so I can access it using a chrome URL: chrome://extensionid/content/skin/style.css. I'm trying to inject the CSS like this when the page is loaded:

        var fileref = document.createElement("link");
        fileref.setAttribute("rel", "stylesheet");
        fileref.setAttribute("type", "text/css");
        fileref.setAttribute("href", "chrome://extensionid/content/skin/style.css");
        document.getElementsByTagName("head")[0].appendChild(fileref);

    However, the CSS isn't loaded, and Firebug shows a 'Filtered chrome url' message instead of the file content when I inspect the link element I created. If I load this CSS file from an external server, everything's fine. Is there a way to load a CSS file bundled with the extension?
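
    The 'Filtered chrome url' message is Firefox blocking web content from loading chrome:// resources. Two approaches that may help, sketched under the assumption of a Firefox 3+ extension: flag the package as content-accessible in chrome.manifest, or register the sheet through nsIStyleSheetService so it never touches the page DOM at all.

        # chrome.manifest: allow web pages to load this package's files
        content  extensionid  content/  contentaccessible=yes

        // alternative, run from extension (chrome-privileged) code:
        var sss = Components.classes["@mozilla.org/content/style-sheet-service;1"]
                            .getService(Components.interfaces.nsIStyleSheetService);
        var ios = Components.classes["@mozilla.org/network/io-service;1"]
                            .getService(Components.interfaces.nsIIOService);
        var uri = ios.newURI("chrome://extensionid/content/skin/style.css", null, null);
        sss.loadAndRegisterSheet(uri, sss.USER_SHEET);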

    Read the article

  • How do I output a webpage that contains MathML to PDF?

    - by samiz
    My web application displays MathML embedded in HTML using the MathPlayer plugin. I need to output to PDF. I have PDF components (Dynamic PDF, ABCpdf), but they don't know how to parse the MathML, of course. Is there a library that can help me translate the MathML to an image or something that I can feed to the PDF components on the fly in the web application?

    Read the article

  • Stack trace in a webpage error in ASP is not in my code?

    - by MarceloRamires
    I'm new to web development in ASP, and I'm experiencing a problem where I try to access a certain page through a link and I get an error. The first part says it's an exception, then come tips on debugging, and then the stack trace. The thing is, this code isn't in my application. I've had errors like this before, and the piece of code that appeared usually helped me a lot.

    Read the article

  • How does one target all divs of any webpage but differentiate them in javascript?

    - by Chaz
    So I am trying to create an extension in Chrome (a prototype for a project that I am doing) that targets all of the <div> tags of any web page and hides them, or rather doesn't display them, until the user clicks the mouse (further explained below). So typing a URL into the browser yields a white page. The person clicks, and the first <div> appears (probably the masthead or menu). The user clicks again, and the second <div> appears. I have gotten to the point where I can hide or show all <div>s (the obvious easy part), but I am not sure how to go about targeting each one, since every website uses different ids for them while still using the <div> tag. This is what I need the most help with. This is part of a grander operation called the Web Crank: a physical crank that controls the speed at which a web page loads. Each time you make one full rotation of the crank, one section (the first <div>) of the web page loads. The faster you go, the quicker the page loads. I hope this is clear enough. I am a newbie when it comes to this, but I have done some minor coding in the past, and it's not such a big deal. Thanks for your help!
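
    The ids aren't needed: a content script can collect every div in document order and reveal the next one on each click (or each crank rotation). A rough sketch, assuming a Chrome content script injected at document_end; note that a nested div only becomes visible once its ancestors are, and visibility (rather than display) keeps the page layout stable while elements are hidden:

        // hide every <div> up front, remembering document order
        var divs = Array.prototype.slice.call(document.getElementsByTagName('div'));
        divs.forEach(function (d) { d.style.visibility = 'hidden'; });

        var next = 0;
        document.addEventListener('click', function () {
            // each click reveals the next div
            if (next < divs.length) {
                divs[next++].style.visibility = 'visible';
            }
        }, true);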

    Read the article
