Search Results

Search found 16189 results on 648 pages for 'document conversion'.


  • Converting string to a simple type

    - by zespri
    The .NET Framework contains a great class named Convert that allows conversion between the simple types, the DateTime type and the String type. The class also supports conversion of types implementing the IConvertible interface. It has been part of the framework since the very first version of .NET, and a few things in that first version were not done quite right. For example, the .Parse methods on the simple types would throw an exception if the string couldn't be parsed, and there was no way to check in advance whether the exception would be thrown. A later version of the .NET Framework removed this deficiency by introducing the TryParse methods. The Convert class dates back to the time of the old Parse methods, so the ChangeType method on this class is implemented in the old style: if the conversion can't be performed, an exception is thrown. Take a look at the following code:

        public static T ConvertString<T>(string s, T @default)
        {
            try
            {
                return (T)Convert.ChangeType(s, typeof(T), CultureInfo.InvariantCulture);
            }
            catch (Exception)
            {
                return @default;
            }
        }

    This code basically does what I want. However, I would very much like to avoid the ugly try/catch here. I'm sure that, similar to TryParse, there is a modern way of rewriting this code without the catch-all. Could you suggest one?
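
    For comparison, a minimal sketch of a catch-free variant built on TypeConverter; the helper name TryConvertString is made up for this example, and the validity check is not a complete guarantee for every convertible type:

        // Sketch only: TypeDescriptor.GetConverter covers the built-in simple types,
        // but CanConvertFrom/IsValid checks are not exhaustive, so this is not a
        // drop-in replacement for every IConvertible implementation.
        public static bool TryConvertString<T>(string s, out T value)
        {
            var converter = System.ComponentModel.TypeDescriptor.GetConverter(typeof(T));
            if (s != null && converter.CanConvertFrom(typeof(string)) && converter.IsValid(s))
            {
                value = (T)converter.ConvertFromInvariantString(s);
                return true;
            }
            value = default(T);
            return false;
        }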

    Read the article

  • Converting Future Composer FC14 tracker music to MIDI data

    - by okw
    I have an old .FC music file from the Amiga/C64 days. It was made using Future Composer. The first four bytes of the file read as FC14 in ASCII; I'm pretty sure that's the version number. I need to dump the channels and their notes into a standard MIDI file in order to play it through my MIDI devices. Is there a way to do this with existing programs? If not, are there any specifications available on the format of these files? I do not require the samples, and I am aware that they will be lost during the conversion process.

    Read the article

  • In SSIS Convert European Currency Format to United States Currency Format

    - by Rob
    I have an interesting problem. I have an SSIS package that processes account data. We are now processing files from Europe. These files are in a CSV format using text qualifiers. As an example of the problem: in the United States the currency format is 123456.99 (we purposely leave the thousands separator out). The files sent from Europe are coming in with two formats: one is 123456,99 and the other is 123.456,00. SSIS is attempting to parse the text file and place the value into a NUMERIC(20,2) field. This causes a parsing error in SSIS even with the text qualifiers. If I change the field to CURRENCY, it throws a conversion error instead. I would like SSIS to deal with this directly without requiring the data to be in the United States format. Has anyone had this problem? Any help will be greatly appreciated. Rob
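
    One possible direction, as a sketch only rather than SSIS-specific guidance: a Script Component (or any .NET layer in the package) could normalize the European strings before the numeric conversion. The class and method names are made up here, and it assumes only the two observed formats occur in the column:

        using System;
        using System.Globalization;

        static class AmountParsing
        {
            // Sketch: parse "123456,99" or "123.456,00" by treating the input as a
            // German-style ("de-DE") number, where ',' is the decimal separator and
            // '.' is the grouping separator; the result can then feed a NUMERIC(20,2) column.
            public static decimal ParseEuropeanAmount(string raw)
            {
                var culture = CultureInfo.GetCultureInfo("de-DE");
                return decimal.Parse(raw.Trim(), NumberStyles.Number, culture);
            }
        }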

    Read the article

  • How to convert series of MP3 to a M4B in a batch

    - by Artem Tikhomirov
    Hello. I have a batch of MP3-based audiobooks. Some of them are divided into files according to the book's own structure: chapters and so on. Some of them were just divided into equal-length parts. I've bought an iPhone, and I want to convert them all to the M4B format. How could I convert them in a batch? That is, how could I set up a process once for each book and then, after a couple of weeks, receive a fully converted library? The only program capable of such a conversion I've found was Audiobook Builder for the Mac, but it is pretty slow and does not support batching at all. Solutions for any platform, please.

    Read the article

  • .vob to h.264 MP4 Files - Worth The Effort?

    - by harper89
    When I was converting to a digital format a while back I chose .VOB because there is no quality loss. Recently, however, I have been told about the H.264 compression method. Time is not an issue here; I don't mind waiting for conversions. I also understand that any sort of lossy compression will reduce quality. To test, I converted a 4 GB .VOB to .mp4 using H.264 in HandBrake, and the quality loss was very hard to notice. From what I have understood through research:

        Space = .mp4 (H.264)
        Quality = .VOB
        Playback = both equally supported?

    But these concerns have yet to be answered: my comparison was done on a computer monitor; would the quality loss be substantially noticeable if I bought a 50-inch TV in the future? Is this type of file widely supported? (I don't want to run into incompatible players.) What other issues could a conversion like this cause in the future?

    Read the article

  • How do I convert an animated GIF to a YouTube friendly video format?

    - by Dave Webb
    My son has made some animations with Pivot Stickfigure Animator which we'd like to upload to YouTube. The problem is Pivot saves as animated GIFs, which I can't upload to YouTube. The Wikipedia article recommends using Windows Movie Maker to convert GIF to WMV, but unfortunately I'm using Windows 7, for which you can get the new Windows Live Movie Maker, which doesn't seem to support GIFs. I Googled and found an article which said to use Beneton Movie GIF to convert animated GIF to AVI, but this seemed to rely on a third-party application which wasn't installed and so failed. Installing the missing application - pjBmp2Avi - by hand and adding it to the path still didn't allow Beneton to do the conversion. I hoped FFmpeg might do the trick, but it only outputs to animated GIFs; it won't read from them. Further Googling found lots of applications with 30-day trials and so on, but I was hoping for something free. So, any suggestions on how I can convert an animated GIF to a movie file on Windows using free (as in beer) software?

    Read the article

  • Video converters don't work anymore after reinstalling Windows

    - by tassiekev
    A few days ago, I decided to reinstall Windows 7 as my HD partition seemed to be nearly full and things were slowing down. I'd been using Handbrake almost exclusively to convert TV recordings and used Freemake on occasion. Following the reinstall, I can't get either to work: Handbrake says it's encoding for about 2 seconds and then says it's finished, but there are no converted files of any size. Freemake just says 'Conversion Error' and won't go any further. As an experiment I tried two programs that I don't normally use, VideoReDo & Any Video Converter. Both worked fine. Anyone got any clues?

    Read the article

  • PDF to HTML - batch converter - most reliable and accurate free AND paid for software?

    - by Rob
    I'm looking for either a free or paid-for (about $50/£40) batch PDF-to-HTML converter that can convert several PDF files at once. It needs to be able to handle vector and bitmap images within the file, outputting both as JPEGs referenced by the HTML pages. I've tried the paid-for iorigsoft PDF to HTML converter - the problem is it seems to hang or just go idle, and the files it actually converts have broken links: the wrong name is used for the constituent chapter HTML pages. I also tried an application from intrapdf.com, but this crashes near the beginning of the conversion, consistently. I've looked at open-source tools, but they look equally flaky or support only old PDF versions. I need it on Windows 7 32-bit Home. Thoughts?

    Read the article

  • text to image conversion with JSON response

    - by ruhit
    I have made an application to convert text into an image and it was working fine. Now I am using JSON in the conversion; it also works, except for two fields only, and I don't know why. My code is given below. Please help me: is there a better way?

    img.html: input form with text, font size, color, font, height and width fields (markup not shown).

    img.php:

        <?php
        require_once 'JSON/JSON.php';
        header('Content-type: application/json');
        header("Content-type: image/png");
        $text = $_REQUEST['text'];
        $text = json_encode($text);
        $path = "C:\wamp\www\image";
        $height = $_REQUEST['height'];
        $width = $_REQUEST['width'];
        define("WIDTH", $width);
        json_encode(WIDTH);
        define("HEIGHT", $height);
        json_encode(HEIGHT);
        $img = imagecreate(WIDTH, HEIGHT);
        imagesavealpha($img, true);
        $trans_colour = imagecolorallocatealpha($img, 0, 0, 0, 127);
        imagefill($img, 0, 0, $trans_colour);
        $getcolor = $_REQUEST['color'];
        switch ($getcolor) {
            case 'red':
                $red = imagecolorallocate($img, 223, 14, 91);
                $color = json_encode($red);
                break;
            case 'white':
                $white = imagecolorallocate($img, 255, 255, 255);
                $color = json_encode($white);
                break;
            case 'black':
                $black = imagecolorallocate($img, 0, 0, 0);
                $color = json_encode($black);
                break;
            case 'grey':
                $grey = imagecolorallocate($img, 128, 128, 128);
                $color = json_encode($grey);
                break;
            // default:
            //     break;
        }
        // $background_color = imagecolorallocate($img, 25, 25, 25);
        $font = $_REQUEST['font'];
        // $font = json_encode($font);
        $fontsize = $_REQUEST['size'];
        // $fontsize = json_encode($fontsize);
        imagettftext($img, $fontsize, 0, 20, 20, $color, $font, $text);
        // Create image
        imagepng($img);
        imagepng($img, "$path/img.png");
        // destroy image
        ImageDestroy($img);
        // header('Content-type: image/png');
        ?>

    Thanks in advance.

    Read the article

  • Any advantage to the script version of Google Adwords' conversion tracking code?

    - by ripper234
    Google AdWords has an HTML snippet to track conversions:

        <script type="text/javascript">
        /* <![CDATA[ */
        var google_conversion_id = 12345;
        var google_conversion_language = "en";
        var google_conversion_format = "3";
        var google_conversion_color = "ffffff";
        var google_conversion_label = "someopaqueid";
        var google_conversion_value = 0;
        /* ]]> */
        </script>
        <script type="text/javascript"
                src="http://www.googleadservices.com/pagead/conversion.js">
        </script>
        <noscript>
        <div style="display:inline;">
        <img height="1" width="1" style="border-style:none;" alt=""
             src="http://www.googleadservices.com/pagead/conversion/12345/?label=opaque&amp;guid=ON&amp;script=0"/>
        </div>
        </noscript>

    It is composed of two parts: for clients supporting JavaScript, an inline script that sets variables plus the loading of a reporting script; for other clients, an image tag. As far as I can see, the image tag has some advantages: it works on all browsers, it is asynchronous, and it's shorter to have only this version compared to having both it and the JS version. Any reason not to drop the <noscript> tag and just use the image conversion snippet directly?

    Read the article

  • jQuery document.ready + ASP.NET ContentPlaceHolder causes Visual Studio IntelliSense problems

    - by Konstantin
    Hi! I want to execute JavaScript when the document is ready, without much syntax overhead. The idea is to use Site.Master and a ContentPlaceHolder:

        <script type="text/javascript">
            $(document).ready(function () {
                <asp:ContentPlaceHolder ID="OnReadyScript" runat="server" />
            });
        </script>

    and in the inherited pages just write plain code:

        <asp:Content ID="Content3" ContentPlaceHolderID="OnReadyScript" runat="server">
            $("#Login").focus();
        </asp:Content>

    It works fine, but Visual Studio complains and gives warnings. The warning in the master page is "Expected expression" at the <asp:ContentPlaceHolder line. In the inherited pages the warning is "Could not find 'OnReadyScript' in the current master page or pages". I tried using Writer.Write in the master page to render the script tag and wrap the code:

        <% Writer.Write(@"<script type=""text/javascript"">$(document).ready(function () {"); %>
        <asp:ContentPlaceHolder ID="OnReadyScrit" runat="server" />
        <% Writer.Write(@"});"); %>

    but page rendering terminates after the opening script tag is rendered. The HTML basically ends with

        <script type="text/javascript">

    How can I make it work?
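
    One commonly suggested alternative, shown here only as a sketch (it sidesteps the placeholder rather than fixing the IntelliSense warnings themselves): emit the per-page snippet from the content page's code-behind, so the markup stays valid for the designer. The key name and the assumption that jQuery is referenced earlier in the page are mine:

        // C# code-behind of the content page: register the on-ready snippet;
        // ASP.NET renders it just before the closing form tag.
        protected void Page_Load(object sender, EventArgs e)
        {
            ClientScript.RegisterStartupScript(
                GetType(),
                "focusLogin",                                                  // unique key, name assumed
                "$(document).ready(function () { $('#Login').focus(); });",
                true);                                                         // wrap in <script> tags
        }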

    Read the article

  • Using Word COM objects in .NET, InlineShapes not copied from template to document

    - by Keith
    Using .NET and the Word interop I am programmatically creating a new Word document from a template (.dot) file. There are a few ways to do this, but I've chosen to use the AttachedTemplate property, as such:

        Dim oWord As New Word.Application()
        oWord.Visible = False
        Dim oDocuments As Word.Documents = oWord.Documents
        Dim oDoc As Word.Document = oDocuments.Add()
        oDoc.AttachedTemplate = sTemplatePath
        oDoc.UpdateStyles()

    (I'm choosing the AttachedTemplate means of doing this over the Documents.Add() method because of a memory leak issue I discovered when using Documents.Add() to open from templates.) This works fine EXCEPT when there is an image (represented as an InlineShape) in the template footer. In that case the image does not appear in the resulting document. Specifically, the image should appear in the oDoc.Sections.Item(1).Footers.Item(WdHeaderFooterIndex.wdHeaderFooterPrimary).Range.InlineShapes collection, but it does not. This is not a problem when using Documents.Add(); however, as I said, that method is not an option for me. Is there an extra step I have to take to get the images from the template? I already discovered that when using AttachedTemplate I have to explicitly call UpdateStyles() (as you can see in my code snippet) to apply the template styles to the document, whereas that is done automatically when using Documents.Add(). Or maybe there's some crazy workaround? Your help is much appreciated! :)
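
    One possible workaround, as an untested sketch rather than a confirmed fix: open the template itself and copy its footer content (including the InlineShape) into the new document. It reuses oWord, oDoc and sTemplatePath from the snippet above:

        ' Untested sketch: pull the footer content straight from the template.
        ' If FormattedText turns out to drop the inline shape, Range.Copy()/Paste()
        ' on the same two ranges is an alternative (at the cost of using the clipboard).
        Dim oTemplateDoc As Word.Document = oWord.Documents.Open(sTemplatePath, [ReadOnly]:=True, Visible:=False)
        Dim oSourceFooter As Word.Range = oTemplateDoc.Sections.Item(1).Footers.Item(WdHeaderFooterIndex.wdHeaderFooterPrimary).Range
        Dim oTargetFooter As Word.Range = oDoc.Sections.Item(1).Footers.Item(WdHeaderFooterIndex.wdHeaderFooterPrimary).Range
        oTargetFooter.FormattedText = oSourceFooter.FormattedText
        oTemplateDoc.Close(SaveChanges:=False)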

    Read the article

  • Programming tips for writing document editors?

    - by Tesserex
    I'm asking this because I'm in the process of writing two such editors for my Mega Man engine, one a tileset editor, and another a level editor. When I say document editor, I mean the superset application type for things like image editors and text editors. All of these share things like toolbars, menu options, and in the case of image editors, and my apps, tool panes. We all know there's tons of advice out there for interface design in these apps, but I'm wondering about programming advice. Specifically, I'm doubting my code designs with the following things: Many menu options toggle various behaviors. What's the proper way to reliably tie the checked state of the option with the status of the behavior? Sometimes it's more complicated, like options being disabled when there's no document loaded. More and more consensus seems to be against using MDI, but how should I control tool panes? For example, I can't figure out how to get the panels to minimize and maximize along with the main window, like Photoshop does. When tool panels are responsible for a particular part of the document, who actually owns that thing? The main window, or the panel class? How do you do communication between the tool panels and the main window? Currently mine is all event based but it seems like there could be a better way. This seems to be a common class of gui application, but I've never seen specific pointers on code design for them. Could you please offer whatever advice or experience you have for writing them?

    Read the article

  • Newly created document library and columns created using web services are not visible in SharePoint

    - by Royson
    Hi, for creating the columns I worked from this code, and for creating the document library:

        Lists listService = new Lists();
        listService.PreAuthenticate = true;
        listService.Credentials = new NetworkCredential(username, password, domain);
        String url = "http://YourServer/SiteName/";
        listService.Url = url + @"/_vti_bin/lists.asmx";
        XmlNode ndList = listService.AddList(NewListName, "Description", 101);

    Both are working successfully. But the problem I am facing is that the new columns and the document library are not visible. I tried comparing the field values of both visible and non-visible types. The difference I found is that the visible one (created manually) doesn't contain a Version value, whereas the one I am creating has it. Can you help me out with this? EDIT: I checked the contents of the ndList node; the list is created and it is visible in my UI. But in SharePoint it should be listed under the 'Documents' tab, where the default 'Shared Documents' library is shown. If I click on 'Documents' then we can also see all the libraries created by this code. By "visible" I mean the library is displayed under the 'Documents' tab.

    Read the article

  • Not indenting the first paragraph of a LaTeX document

    - by Andrew
    In the standard LaTeX article class (and probably others as well), paragraph indentation follows standard American publishing norms of not indenting the first paragraph after a \section{} or \subsection{}. I've redefined \maketitle in a LaTeX document and put the actual title left-aligned as the last line, fairly close to the actual text (kind of like this):

        Author
        Date

        Title
        Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

        Section title
        Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

    Since the title is left-aligned and so close to the text, I'd like the first paragraph of the document to not be indented, just like with the headings:

        ...
        Title
        Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor...
        ...

    I've attempted to use \@afterindentfalse, which is what the section commands use, inside my renewed commands, but it doesn't work.

        \makeatletter
        \def\noindentation{\let\@afterindentfalse}

        \newcommand{\mytitle}[1]{%
          \vskip 2em
          {\bf\sffamily\LARGE #1}
          \noindentation}

        \renewcommand{\@maketitle}{
          \begin{flushleft}{
            % Author
            \@author \par
            % Date
            \@date \par
            % Title
            \mytitle{\@title}
          }
          \end{flushleft}
        }
        \makeatother

    By default the first paragraph in the article class is indented, so this question is applicable whether or not I renew \maketitle. So, what's the best way to automatically not indent the first paragraph of the document? Thanks!
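
    For what it's worth, a minimal sketch of the idiom the sectioning commands themselves rely on: set \@afterindentfalse and then call \@afterheading, which installs an \everypar hook that removes the indentation box from the next paragraph. This is untested against the full \@maketitle redefinition above:

        \makeatletter
        \newcommand{\mytitle}[1]{%
          \vskip 2em
          {\bf\sffamily\LARGE #1}\par
          \@afterindentfalse\@afterheading}
        \makeatother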

    Read the article

  • How to use a JavaScript class from within document ready

    - by Richard
    Hi, I have this countdown script wrapped as an object located in a separate file. When I want to set up a counter, the timeout function in the countdown class cannot find the object that I set up within the document ready handler. I sort of get that everything set up inside a document ready handler is confined to that scope; however, it is possible to call functions from within other document ready handlers. Does anyone have a solution for how I could set up multiple counters/objects? Or do those basic JavaScript classes have to become plugins? This is some sample code showing how the class begins:

        function countdown(obj) {
            this.obj = obj;
            this.Div = "clock";
            this.BackColor = "white";
            this.ForeColor = "black";
            this.TargetDate = "12/31/2020 5:00 AM";
            this.DisplayFormat = "%%D%% Days, %%H%% Hours, %%M%% Minutes, %%S%% Seconds.";
            this.CountActive = true;
            this.DisplayStr;
            this.Calcage = cd_Calcage;
            this.CountBack = cd_CountBack;
            this.Setup = cd_Setup;
        }

    Thanks, Richard

    Read the article

  • Create an XML document in a Java applet

    - by zproxy
    If I try to create a new XML document in a Java applet with this code (see http://java.sun.com/j2se/1.4.2/docs/api/javax/xml/parsers/DocumentBuilderFactory.html#newInstance()):

        DocumentBuilderFactory.newInstance();

    I get this error:

        Java Plug-in 1.6.0_19
        Using JRE version 1.6.0_19-b04 Java HotSpot(TM) Client VM
        javax.xml.parsers.FactoryConfigurationError: Provider <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> not found
            at javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)

    I do not care about DTDs. Why is it looking for one? How am I supposed to create an XML document in a Java applet? How can I make it work? The enclosing HTML document looks like this:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
        <head>
            <title>Loading...</title>
        </head>

    Can someone comment on this thread? The problem was with the entity resolver, which points to the w3.org web site. Access to the reference DTDs on that site has been restricted for application use. The solution was to implement my own entity resolver. Related:

        http://forums.sun.com/thread.jspa?threadID=515055
        http://stackoverflow.com/questions/1016286/org-apache-xerces-jaxp-saxparserfactoryimpl-not-found-when-importing-gears-api-in
        http://java.itags.org/java-desktop/4839/

    Read the article

  • Use continue or Checked Exceptions when checking and processing objects

    - by Johan Pelgrim
    I'm processing, let's say, a list of Document objects. Before I record the processing of a document as successful I first want to check a couple of things. Let's say the file referring to the document should be present and something in the document should be present. Just two simple checks for the example, but think of 8 more checks before I have successfully processed my document. What would your preference be?

        for (Document document : documents) {
            if (!fileIsPresent(document)) {
                doSomethingWithThisResult("File is not present");
                continue;
            }
            if (!isSomethingInTheDocumentPresent(document)) {
                doSomethingWithThisResult("Something is not in the document");
                continue;
            }
            doSomethingWithTheSucces();
        }

    Or

        for (Document document : documents) {
            try {
                fileIsPresent(document);
                isSomethingInTheDocumentPresent(document);
                doSomethingWithTheSucces();
            } catch (ProcessingException e) {
                doSomethingWithTheExceptionalCase(e.getMessage());
            }
        }

        public boolean fileIsPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("File is not present");
        }

        public boolean isSomethingInTheDocumentPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("Something is not in the document");
        }

    Which is more readable? Which is best? Is there an even better approach (maybe using a design pattern of some sort)? As far as readability goes, my preference is currently the exception variant... What is yours?

    Read the article

  • Liferay Document Management System Workflow

    - by Rajkumar
    I am creating a DMS in Liferay. So far I can upload documents into the Liferay document library, and I can see the documents in the Documents and Media portlet. The problem is that although the status for the document is in the pending state, the workflow is not started. Below is my code. Can anyone please help? It is very urgent.

        Folder folder = null;
        // getting folder
        try {
            folder = DLAppLocalServiceUtil.getFolder(10181, 0, folderName);
            System.out.println("getting folder");
        } catch (NoSuchFolderException e) {
            // creating folder
            System.out.println("creating folder");
            try {
                folder = DLAppLocalServiceUtil.addFolder(userId, 10181, 0, folderName, description, serviceContext);
            } catch (PortalException e3) {
                // TODO Auto-generated catch block
                e3.printStackTrace();
            } catch (SystemException e3) {
                // TODO Auto-generated catch block
                e3.printStackTrace();
            }
        } catch (PortalException e4) {
            // TODO Auto-generated catch block
            e4.printStackTrace();
        } catch (SystemException e4) {
            // TODO Auto-generated catch block
            e4.printStackTrace();
        }

        // adding file
        try {
            System.out.println("New File");
            fileEntry = DLAppLocalServiceUtil.addFileEntry(userId, 10181, folder.getFolderId(), sourceFileName,
                mimeType, title, "testing description", "changeLog", sampleChapter, serviceContext);

            Map<String, Serializable> workflowContext = new HashMap<String, Serializable>();
            workflowContext.put("event", DLSyncConstants.EVENT_CHECK_IN);
            DLFileEntryLocalServiceUtil.updateStatus(userId, fileEntry.getFileVersion().getFileVersionId(),
                WorkflowConstants.ACTION_PUBLISH, workflowContext, serviceContext);

            System.out.println("after entry" + fileEntry.getFileEntryId());
        } catch (DuplicateFileException e) {
        } catch (PortalException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        } catch (SystemException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }
        } catch (PortalException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (SystemException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        }
        return fileEntry.getFileEntryId();
        }

    I have even used

        WorkflowHandlerRegistryUtil.startWorkflowInstance(companyId, userId, fileEntry.getClass().getName(),
            fileEntry.getClassPK, fileEntry, serviceContext);

    but I still have the same problem.

    Read the article

  • Does the "Supporting Multiple Screens" document contradict itself?

    - by Neil Traft
    In the Supporting Multiple Screens document in the Android Dev Guide, some example screen configurations are given. One of them states that the small-ldpi designation is given to QVGA (240x320) screens with a physical size of 2.6"-3.0". According to this DPI calculator, a 2.8" QVGA display equates to 143 dpi. However, further down the page the document explicitly states that all screens over 140 dpi are considered "medium" density. So which is it, ldpi or mdpi? Is this a mistake? Does anyone know what the HTC Tattoo or similar device actually reports? I don't have access to any devices like this. Also, with the recent publishing of this document, I'm glad to see we finally have an explicit statement of the exact DPI ranges of the three density categories. But why haven't we been given the same for the small, medium, and large screen size categories? I'd like to know the exact ranges for all these. Thanks in advance for your help!
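
    For reference, the 143 dpi figure follows from dividing the diagonal pixel count by the diagonal size; a quick back-of-the-envelope check, assuming the 2.8-inch diagonal quoted above:

        dpi = sqrt(240^2 + 320^2) / 2.8 = 400 / 2.8 ≈ 142.9

    So the example sits just above the 140 dpi boundary the same document draws, which is why the two statements look contradictory.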

    Read the article

  • How to stream a PDF document from a servlet?

    - by Kumar
    Hi, I am creating a PDF document using JasperReports and I need to stream that PDF document from a servlet. Can anyone help me find where I made a mistake? This is the code snippet which I am using in my application:

        ServletOutputStream servletOutputStream = response.getOutputStream();
        String fileName = "test.pdf";
        response.setContentType("application/pdf");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
        response.setHeader("Cache-Control", "no-cache");
        try {
            Map parameters = new HashMap();
            parameters.put("SUBREPORT_DIR", JasperReportFilepath);
            parameters.put("TestId", testID);
            JasperPrint jprint = JasperFillManager.fillReport(filePath, parameters, conn);
            byte[] output = JasperExportManager.exportReportToPdf(jprint);
            System.out.println("Size====>" + output.length);
            servletOutputStream.write(output);
            servletOutputStream.flush();
            servletOutputStream.close();
            System.out.println("===============>Streaming perfectly");
        } catch (Exception e) {
            System.out.println("===============>+JasperException" + e.getMessage());
        }

    I do not get any error message either. Everything seems to run fine, but the document is not streamed. Please help me sort out the problem.

    Read the article

  • How to make an AJAX call immediately on document loading

    - by Ankur
    I want to execute an AJAX call as soon as the document is loaded. What I am doing is loading a string that contains data that I will use for an autocomplete feature. This is what I have done, but it is not calling the servlet. (I have removed the calls to the various JS scripts to make it clearer.) I have done several similar AJAX calls in my code, but they are usually triggered by a click event. I am not sure of the syntax for doing it as soon as the document loads, but I thought this would be it (it's not):

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
        <script type="text/javascript">
            $(document).ready(function(){
                $.ajax({
                    type: "GET",
                    url: "AutoComplete",
                    dataType: 'json',
                    data: queryString,
                    success: function(data) {
                        var dataArray = data;
                        alert(dataArray);
                    }
                });
                $("#example").autocomplete(dataArray);
            });
        </script>
        <title></title>
        </head>
        <body>
        API Reference:
        <form><input id="example"> (try "C" or "E")</form>
        </body>
        </html>

    Read the article

  • SVG text parameter changing on conversion to image uri : random dy on tspan element

    - by Kitex
    Sorry that I could not put together a jsfiddle, because it's a JSF application hosted locally and the code depends on data from that JSF application, although I have arranged part of it as a snippet here. Everything is correct in Firefox, but when I open it in Chrome something happens: the text on the Raphael paper gets scattered across the paper; it's not where it's meant to be. This happens when I convert the SVG to an image and then generate the SVG again. Everything works fine in Firefox. There is a change in the dy of the tspan element:

        dy=3.09499999
        dy=432.0949999999999

    Why is there this change in dy although x and y are the same? SVG correct: the fiddle is here. SVG incorrect: the fiddle is here.

        function printMap(){
            var svg = $('#map').html().replace(/>\s+/g, ">").replace(/\s+</g, "<"); // strips off all spaces between tags
            canvg('cvs', svg, { ignoreMouse: true, ignoreAnimation: true });
            var canvas = document.getElementById('cvs');
            var img = canvas.toDataURL("image/png");
            $("#resImg").attr("src", img);
            $("#resImg").css("display", 'block');
            //$("resImg").css("display",'none');
            $("#map").css("display", 'none');
            // location.href = img;
        }

    Before: the text is above the object. After: the text is scattered.

    Read the article
