Search Results

Search found 5055 results on 203 pages for 'pdf converter'.

Page 75/203

  • Firefox fails to detect content type set by PHP

    - by Ying
    Hi all, I want a PHP page to 'display' a PDF. Here is the code: <?php header("Content-type: application/pdf"); readfile('Reportage - Berlin.pdf'); //tried echo(readfile(...)) as well ?> Not very complicated, I think, but somehow Firefox can't detect that this is a PDF. It works in Safari, but in Firefox I get a prompt to download the file, so I end up with something like a pdftest.php file. I know I'm getting my file, because if I rename the extension to pdf I can open it. This seems too simple! Am I missing something? (One possible fix is sketched after this entry.)

    Read the article
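
    Below is a minimal PHP sketch of one way to approach the question above. It assumes the usual culprits for this symptom — stray output before header(), or a missing Content-Disposition/Content-Length — rather than a problem with the Content-Type line itself, so it may or may not cure this particular case; the filename is the asker's.

        <?php
        // Sketch only: serve an existing PDF inline with explicit headers.
        // Any whitespace or BOM emitted before header() breaks this, so the
        // opening <?php must be the very first bytes of the script.
        $file = 'Reportage - Berlin.pdf';   // filename taken from the question

        header('Content-Type: application/pdf');
        header('Content-Disposition: inline; filename="report.pdf"');
        header('Content-Length: ' . filesize($file));

        readfile($file);   // readfile() writes straight to the output; no echo needed
        exit;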

  • Open a new tab in a browser with the response to an ASP request

    - by user89691
    It's a bit complicated, this one... Let's say I have a listing of PDF files displayed in the user's browser. Each filename is a link pointing not to the file, but to an ASP page, say <a href="viewfile.asp?file=somefile.pdf">somefile.pdf</a>. I want viewfile.asp to fetch the file (I've done that bit OK), but I then want the file to be loaded by the browser as if the user had opened the PDF file directly. And I want it to open in a new tab or browser window. Here's the (simplified) viewfile.asp: <% var FileID = Request.querystring ("file") ; var ResponseBody = MyGETRequest (SomeURL + FileID) ; if (MyHTTPResult == 200) { if (ExtractFileExt (FileID).toLowerCase = "pdf") { ?????? // return file contents in new browser tab } .... %>

    Read the article

  • Using Regex groups in bash

    - by AlexeyMK
    Greetings, I've got a directory with a list of PDFs in it: file1.pdf, file2.pdf, morestuff.pdf ... etc. I want to convert these PDFs to PNGs, i.e. file1.png, file2.png, morestuff.png ... etc. The basic command is convert from to, but I'm having trouble getting convert to keep the same base file name. The obvious 'I wish it worked this way' is convert *.pdf *.png, but clearly that doesn't work. My thought process is that I should use regular expression grouping here, to say something like convert (*).pdf %1.png, but that clearly isn't the right syntax. I'm wondering what the correct syntax is, and whether there's a better approach (that doesn't require jumping into Perl or Python) that I'm ignoring. Thanks!

    Read the article

  • Why does the following upload if condition not work?

    - by Ole Media
    So I have an upload script, and I want to check the file type that is being uploaded. I only want pdf, doc, docx and text files. So I have: $goodExtensions = array('.doc','.docx','.txt','.pdf', '.PDF'); $name = $_FILES['uploadFile']['name']; $extension = substr($name, strpos($name,'.'), strlen($name)-1); if(!in_array($extension,$goodExtensions) || (($_FILES['uploadFile']['type'] != "applicatioin/msword") || ($_FILES['uploadFile']['type'] != "application/pdf"))){ $error['uploadFile'] = "File not allowed. Only .doc, .docx, .txt and pdf"; } Why am I getting the error even when testing with correct documents? (A corrected check is sketched after this entry.)

    Read the article
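
    A hedged sketch of a corrected check for the question above: the original condition always fails because the two MIME-type tests are joined with || (the type can never equal both strings, so one of the != tests is always true), and "applicatioin/msword" is itself misspelled. Variable names follow the question; the extra MIME strings are illustrative only, since what browsers actually send varies.

        <?php
        // Sketch only: accept .doc, .docx, .txt and .pdf uploads.
        $goodExtensions = array('.doc', '.docx', '.txt', '.pdf');
        $goodTypes = array(
            'application/msword',
            'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
            'text/plain',
            'application/pdf',
        );

        $name      = $_FILES['uploadFile']['name'];
        $type      = $_FILES['uploadFile']['type'];      // client-supplied, so not fully trustworthy
        $extension = strtolower(strrchr($name, '.'));    // ".pdf", ".docx", ... taken from the last dot

        if (!in_array($extension, $goodExtensions) || !in_array($type, $goodTypes)) {
            $error['uploadFile'] = "File not allowed. Only .doc, .docx, .txt and pdf";
        }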

  • Reload Document into Google Docs Viewer (Clear Cache)

    - by Adam
    Google Docs Viewer (http://docs.google.com/viewer) creates a cache of a document after the first viewing. To see what I mean, try the following: Upload file.pdf to your server (e.g., http://example.com). Visit http://docs.google.com/viewer?url=http://example.com/file.pdf Upload a new file to replace file.pdf (but use the same name). Revisit http://docs.google.com/viewer?url=http://example.com/file.pdf. Google Docs Viewer still shows the old file.pdf. Does anyone know how to correct this? (I have already tried clearing the browser cache, switching browsers, and logging in with a different Google account to view the link.) (A possible workaround is sketched after this entry.)

    Read the article
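
    There is no documented way to purge the viewer's cache in the question above, but a commonly suggested workaround is to make the target URL unique per revision so the viewer treats the replaced file as a new document. A hedged PHP sketch follows; the ?v= parameter is purely illustrative, and whether the viewer's cache honours it is not guaranteed.

        <?php
        // Sketch only: build a viewer link whose target URL changes whenever the
        // PDF changes, so a cached copy keyed on the old URL is not reused.
        // Assumes file.pdf is a plain static file and that an extra query
        // parameter does not affect how the server serves it.
        $localPath = '/var/www/html/file.pdf';          // local path (illustrative)
        $publicUrl = 'http://example.com/file.pdf';     // public URL from the question

        $version = filemtime($localPath);               // changes when the file is replaced
        $target  = $publicUrl . '?v=' . $version;

        $viewer = 'http://docs.google.com/viewer?url=' . urlencode($target);

        echo '<a href="' . htmlspecialchars($viewer) . '">View PDF</a>';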

  • Get license file from a folder in C# project

    - by daft
    I have a license file that I need to access at runtime in order to create PDF files. After I have created the in-memory PDF, I need to call a method on that PDF to set the license, like this: pdf.SetLicense("pathToLicenseFileHere"); The license file is located in the same project as the .cs file that creates the PDF, but in a separate folder. I cannot get this simple thing to behave correctly, which makes me a bit sad, since it really shouldn't be that hard. :( I try to set the path like this: string path = @"\Resources\File.lic"; But it just isn't working out for me.

    Read the article

  • Workaround: XNA 4 importing only part of 3d model from FBX

    - by Vitus
    Recently I found a problem with importing 3D models from FBX files: they are sometimes only partially imported. That is, when you draw a 3D model loaded from an FBX file and processed by the content pipeline, you get only some of its meshes. “Sometimes” means the error occurs only for certain files. The results of my research are below. For example, I have a 10 MB binary FBX file with a model; when I load it, the resulting Model instance contains only part of the meshes. Because models from other files import normally, I think it's a “bad format” file. When you add an FBX file to your XNA Content project and build it, the imported file is processed by the XNA Fbx Importer & Processor. On MSDN I found that FbxImporter is designed to work with the 2006.11 version of the FBX format. My file is in FBX 2012 format. OK, so I need to convert it to the 2006 format, which can be done using Autodesk FBX Converter 2012.1. I tried converting it to other versions of the FBX format, but without success. I also tried importing my FBX file into 3D MAX, and it imported correctly. I then exported the model from 3D MAX, which generated another FBX that I added to my XNA project. After that I got the full model, and it rendered well! So the internal data structure of the FBX file matters more for correct XNA import than its version. Unfortunately, Autodesk FBX is not an open file format. If you want to work with FBX, you should use the Autodesk FBX SDK; this way you can manually read the content of an FBX file and use it any way you like. I then tried converting my source FBX file to DAE (Collada), and the resulting DAE file back to FBX, using FBX Converter (FBX -> DAE -> FBX). The resulting FBX file imports normally.   Conclusion: correct operation of the XNA FbxImporter doesn't depend on the version (2006, 2011, etc.) or form (binary, ASCII) of the FBX file; the internal FBX data structure matters much more. To make an FBX "readable" for the XNA importer you can use a double conversion like FBX -> Collada -> FBX, or use the FBX SDK to load the data from the FBX manually. P.S. Autodesk FBX Converter 2012 is more than a simple converter. It provides tools such as: FBX Explorer, which shows the structure of an FBX file; FBX Viewer, which renders the content of an FBX and provides basic interaction like moving and zooming the model; and FBX Take Manager, which allows working with embedded animations.

    Read the article

  • When I run the Android SDK from the terminal, it shows an error. How do I fix it?

    - by diflame
    Exception in thread "main" java.lang.UnsatisfiedLinkError: no swt-gtk-3550 or swt-gtk in swt.library.path, java.library.path or the jar file at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source) at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source) at org.eclipse.swt.internal.C.<clinit>(Unknown Source) at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source) at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source) at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source) at com.android.sdkmanager.Main.showSdkManagerWindow(Main.java:328) at com.android.sdkmanager.Main.doAction(Main.java:316) at com.android.sdkmanager.Main.run(Main.java:118) at com.android.sdkmanager.Main.main(Main.java:101)

    Read the article

  • Should Item Grouping/Filter be in the ViewModel or View layer?

    - by ronag
    I'm in a situation where I have a list of items that need to be displayed depending on their properties. What I'm unsure of is where the best place is to put the filtering/grouping logic for the viewmodel state. Currently I have it in my view using converters, but I'm unsure whether I should have the logic in the viewmodel instead. e.g. ViewModel Layer: class ItemViewModel { DateTime LastAccessed { get; set; } bool IsActive { get; set; } } class ContainerViewModel { ObservableCollection<Item> Items {get; set;} } View Layer: <TextView Text="Active Items"/> <List ItemsSource={Binding Items, Converter=GroupActiveItemsByDay}/> <TextView Text="Active Items"/> <List ItemsSource={Binding Items, Converter=GroupInActiveItemsByDay}/> or should I build it like this? ViewModel Layer: class ContainerViewModel { ObservableCollection<IGrouping<string, Item>> ActiveItemsByGroup {get; set;} ObservableCollection<IGrouping<string, Item>> InActiveItemsByGroup {get; set;} } View Layer: <TextView Text="Active Items"/> <List ItemsSource={Binding ActiveItemsGroupByDate }/> <TextView Text="Active Items"/> <List ItemsSource={Binding InActiveItemsGroupByDate }/> Or maybe something in between? ViewModel Layer: class ContainerViewModel { ObservableCollection<IGrouping<string, Item>> ActiveItems {get; set;} ObservableCollection<IGrouping<string, Item>> InActiveItems {get; set;} } View Layer: <TextView Text="Active Items"/> <List ItemsSource={Binding ActiveItems, Converter=GroupByDate }/> <TextView Text="Active Items"/> <List ItemsSource={Binding InActiveItems, Converter=GroupByDate }/> I guess my question is what good practice is in terms of what logic to put into the ViewModel and what logic to put into the Binding in the View, as they seem to overlap a bit?

    Read the article

  • Abcpdf throwing System.ExecutionEngineException

    - by Tom Tresansky
    I have the binary content of several PDF files stored in a collection of byte arrays. My goal is to concatenate them into a single .pdf file using abcpdf, then stream that newly created file to the Response object on a page of an ASP.NET website. I had been doing it like this: BEGIN LOOP ... 'Create a new Doc Dim doc As Doc = New Doc 'Read the binary of the current PDF doc.Read(bytes) 'Append to the master merged PDF doc _mergedPDFDoc.Append(Doc) END LOOP Which was working fine 95% of the time. Every now and then, however, creating a new Doc object would throw a System.ExecutionEngineException and crash the CLR. It didn't seem to be related to a large number of PDFs (it sometimes happened with only 2), or to large PDFs; it seemed almost completely random. This is a known bug in abcpdf, described (not very well) here: Item 6.24. I came across a helpful SO post which suggested using a Using block for the abcpdf Doc object. So now I'm doing this: Using doc As New Doc 'Read the binary of the current PDF doc.Read(bytes) 'Append to the master merged PDF doc _mergedPDFDoc.Append(doc) End Using And I haven't seen the problem occur again yet, and I have been pounding on a test version as hard as I can trying to get it to recur. Has anyone had any similar experience with this error? Did this fix it?

    Read the article

  • function to get the file name of an URL

    - by user262325
    Hello everyone, I have some source code to get the file name from a URL, for example http://www.google.com/a.pdf, from which I hope to get a.pdf. The only way I have to join two NSStrings is 'appendString', which only adds a string at the right side, so I planned to check each char one by one from the right side of the string 'http://www.google.com/a.pdf', stop the checking when it reaches the char '/', return the string fdp.a, and after that change fdp.a to a.pdf. The source code is below: -(NSMutableString *) getSubStringAfterH : originalString:(NSString *)s0 { NSInteger i,l; l=[s0 length]; NSMutableString *h=[[NSMutableString alloc] init]; NSMutableString *ttt=[[NSMutableString alloc] init ]; for(i=l-1;i>=0;i--) //check each char one by one from the right side of string 'http://www.google.com/a.pdf', when it reach at the char '/', stop { ttt=[s0 substringWithRange:NSMakeRange(i, 1)]; if([ttt isEqualToString:@"/"]) { break; } else { [h appendString:ttt]; } } [ttt release]; //below are to change the sequence of char in h // txt.edcba -> abcde.txt NSMutableString *h1=[[[NSMutableString alloc] initWithFormat:@""] autorelease]; for (i=[h length]-1;i>=0;i--) { NSMutableString *t1=[[NSMutableString alloc] init ]; t1=[h substringWithRange:NSMakeRange(i, 1)]; [h1 appendString:t1]; [t1 release]; } [h release]; return h1; } h1 returns the correct string a.pdf, but after it returns to the code where it was called, a while later the system reports 'double free * set a breakpoint in malloc_error_break to debug'. I checked for a long time and found that if I removed the line ttt=[s0 substringWithRange:NSMakeRange(i, 1)]; everything is OK (of course getSubStringAfterH then cannot return the correct result I expected), and no error is reported. I have been trying to fix the bug for a few hours, but still have no clue. Any comments welcome. Thanks, interdev

    Read the article

  • R: dev.copy2pdf, multiple graphic devices to a single file, how to append to file?

    - by Timtico
    Hi everybody, I have a script that makes barplots, opens a new window when 6 barplots have been written to the screen, and keeps opening new graphics devices whenever necessary. Depending on the input, this leaves me with a potentially large number of opened windows (graphics devices) which I would like to write to a single PDF file. Given my Perl background, I decided to iterate over the different graphics devices, printing them out one by one. I would like to keep appending to a single PDF file, but I do not know how to do this, or if it is even possible. I would like to avoid looping in R. :) The code I use: for (i in 1:length(dev.list())) { dev.set(which = dev.list()[i]) dev.copy2pdf(device = quartz, file = "/Users/Tim/Desktop/R/Filename.pdf") } However, this is not working, as it overwrites the file each time. Is there an append function in R, like there is in Perl, which would allow me to keep adding pages to the existing PDF file? Or is there a way to capture the information in a graphics window into an object, keep adding new graphics devices to this object, and finally print the whole thing to a file? Other possible solutions I thought about: writing different PDF files and combining them after creation (perhaps even possible in R, with the right libraries installed?), or copying the information from all the different windows to one big graphics device and then printing that to a PDF file.

    Read the article

  • MultipartFormDataContent: Access to path xx is denied

    - by Florian Schaal
    So I'm trying to upload a PDF file to a REST API. For some reason the application can't get access to the files on my PC. The code I'm using to upload: public void Upload(string token, string FileName, string FileLocation, string Name, int TypeId, int AddressId, string CompanyName, string StreetNr, string Zip, string City, string CountryCode, string CustomFieldName, string CustomFieldValue) { var client = new HttpClient(); client.BaseAddress = _API.baseAddress; //upload a new form client.DefaultRequestHeaders.Date = DateTime.Now; client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(token); using (var multiPartContent = new MultipartFormDataContent()) { //get the bytes from a file byte[] pdfData; using (var pdf = new FileStream(@FileLocation, FileMode.Open))//Here I get the error. { pdfData = new byte[pdf.Length]; pdf.Read(pdfData, 0, (int)pdf.Length); } var fileContent = new ByteArrayContent(pdfData); fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment") { FileName = FileName + ".pdf" }; //add the bytes to the multipart message multiPartContent.Add(fileContent); //make a json message var json = new FormRest { Name = Name, TypeId = TypeId, AddressId = AddressId, CompanyName = CompanyName, StreetNr = StreetNr, Zip = Zip, City = City, CountryCode = CountryCode, CustomFields = new List<CustomFieldRest> { new CustomFieldRest {Name = CustomFieldName, Value = CustomFieldValue} } }; var Content = new JsonContent(json); //add the json message to the multipart message multiPartContent.Add(Content); var result = client.PostAsync("forms", multiPartContent).Result; } } }

    Read the article

  • What is the best way to handle autorotation with multiple subviews?

    - by thangnguyen
    I am learning and programming an application. I am doing this project based on what I learned from the book Beginning iPhone 3 Development. I have two main questions here. I would like to create a multi-utility application, so I need multiple views. I have a main view controller which controls switching between views. In this example I have two views, A and B. I have two view controllers, A and B, which handle all of the events on these two views, and two nib files, viewA.xib and viewB.xib. One of the utilities is reading PDFs, so I created another class, called PDFView, which handles the PDF file and can load a PDF page. From Interface Builder, I set the class identity of the view in viewB.xib to the PDFView class. The result is that I can switch between view A and view B, and view B displays the content of the PDF page. I am not sure if my solution is right or wrong, but now I don't know how to handle autorotation. The rotation activates view controller B, but PDFView handles how the PDF is displayed on the view. Could you please tell me how I should handle this the right way? Second question: should I create the subview automatically? In case I need to do the swipe-page animation, how can I do that? I think I need to load another subview so I can do the animation when swapping views, but I think this solution wastes resources. I could just load another page of the PDF, but in that case I don't know how to use animation. Please tell me how I should solve this. I highly appreciate your time reading and answering my question. Thang Nguyen

    Read the article

  • Drill through table does not show correct count when used with a dimension having parent child hierarchy

    - by Arun Singhal
    Hi all, I have a dimension with a parent-child hierarchy, as shown in the code block. The issue I am facing is that if I have a filter on the parent-child dimension, the drill-through table does not show the filtered data; instead it shows all the data for that dimension. Here is an example. <Dimension type="StandardDimension" name="page_type_d" caption="Page Type"> <Hierarchy name="page_type_h" hasAll="true" allMemberName="all_page_types" allMemberCaption="All Page Types" primaryKey="id"> <Table name="npg_page_type_view" alias="pt"> </Table> <Level name="Page Type" column="id" nameColumn="display_name" parentColumn="parent_id" nullParentValue="0" type="Integer" uniqueMembers="true" levelType="Regular" hideMemberIf="Never" caption="Page Type"> <Closure parentColumn="parent_id" childColumn="page_type_id"> <Table name="dim_page_types_closure"> </Table> </Closure> </Level> </Hierarchy> Now suppose I have 4 rows in the npg_page_type_view table, with columns (id, display_name, parent_id): (19, HTML, 100), (20, PDF, 100), (21, XML, 0), (100, Total, 0). And suppose my fact table has the following records, (id, count): (19, 2), (20, 3), (21, 1). My analysis view is then: Total (HTML and PDF) - 5, HTML - 2, PDF - 3, XML - 1. Now if I add a filter (say Total) on this analysis view using the OLAP cube, my analysis view shows the following: Total (HTML and PDF) - 5. Up to this point everything works fine. But if I click on the 5 (to view the drill-through table), it shows me data for all page types, i.e. HTML, PDF and XML, whereas per the filter it should show only HTML and PDF. Is this an existing issue or am I doing something wrong here? Please help me.

    Read the article

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF, etc.), audio (e.g. MP3, etc.) or video (e.g. AVI, etc.). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table: DataSource Id (PK) Filename For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages." An example for audio would be "bit rate." An example for video would be "duration." Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way: DataSourceAttribute Id (PK) DataSourceId (FK) Name Value Thus, I would have records like these: DataSource->Id = 1 DataSource->Filename = 'mydoc.pdf' DataSource->Id = 2 DataSource->Filename = 'mysong.mp3' DataSource->Id = 3 DataSource->Filename = 'myvideo.avi' DataSourceAttribute->Id = 1 DataSourceAttribute->DataSourceId = 1 DataSourceAttribute->Name = 'TotalPages' DataSourceAttribute->Value = '10' DataSourceAttribute->Id = 2 DataSourceAttribute->DataSourceId = 2 DataSourceAttribute->Name = 'BitRate' DataSourceAttribute->Value '16' DataSourceAttribute->Id = 3 DataSourceAttribute->DataSourceId = 3 DataSourceAttribute->Name = 'Duration' DataSourceAttribute->Value = '1:32' My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages: Filename, TotalPages 'mydoc.pdf', '10' 'myotherdoc.pdf', '23' ... The JOINs needed to produce the above result are just too costly. How should I address this problem?

    Read the article

  • P values in wilcox.test gone mad :(

    - by Error404
    I have some code that isn't doing what it should do. I am testing the P value for a wilcox.test over a huge set of data. The code I am using is the following: library(MASS) data1 <- read.csv("file1path.csv",header=T,sep=",") data2 <- read.csv("file2path.csv",header=T,sep=",") data3 <- read.csv("file3path.csv",header=T,sep=",") data4 <- read.csv("file4path.csv",header=T,sep=",") data1$K <- with(data1,{"N"}) data2$K <- with(data2,{"E"}) data3$K <- with(data3,{"M"}) data4$K <- with(data4,{"U"}) new=rbind(data1,data2,data3,data4) i=3 for (o in 1:4800){ x1 <- data1[,i] x2 <- data2[,i] x3 <- data3[,i] x4 <- data4[,i] wt12 <- wilcox.test(x1,x2, na.omit=TRUE) wt13 <- wilcox.test(x1,x3, na.omit=TRUE) wt14 <- wilcox.test(x1,x4, na.omit=TRUE) if (wt12$p.value=="NaN"){ print("This is wrong") } else if (wt12$p.value < 0.05){ print(wt12$p.value) mypath=file.path("C:", "all1-less-05", (paste("graph-data1-data2",names(data1[i]), ".pdf", sep="-"))) pdf(file=mypath) mytitle = paste("graph",names(data1[i])) boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4")) dev.off() } if (wt13$p.value=="NaN"){ print("This is wrong") } else if (wt13$p.value < 0.05){ print(wt13$p.value) mypath=file.path("C:", "all2-less-05", (paste("graph-data1-data3",names(data1[i]), ".pdf", sep="-"))) pdf(file=mypath) mytitle = paste("graph",names(data1[i])) boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4")) dev.off() } if (wt14$p.value=="NaN"){ print("This is wrong") } else if (wt14$p.value < 0.05){ print(wt14$p.value) mypath=file.path("C:", "all3-less-05", (paste("graph-data1-data4",names(data1[i]), ".pdf", sep="-"))) pdf(file=mypath) mytitle = paste("graph",names(data1[i])) boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4")) dev.off() } i=i+1 } I am having 2 problems with this long command: 1) Without specifying a certain P value, the code gives me around 14,000 graphs; when specifying a P value less than 0.05, the number of graphs generated goes down to about 9,000. The first problem is that some P values are more than 0.05 and are still showing up! 2) I designed the program to print "This is wrong" when the value of P is "NaN", yet I am still getting results of "NaN". Here's a screenshot of the results. Do you know what mistake I made in this command to get these errors? Thanks in advance

    Read the article

  • Mysql Database Question about Large Columns

    - by murat
    Hi, I have a table that has 100,000 rows, and soon it will double. The size of the database is currently 5 GB, and most of it goes to one particular column, which is a text column for PDF files. We expect to have a 20-30 GB, or maybe 50 GB, database after a couple of months, and this system will be used frequently. I have a couple of questions regarding this setup. 1) We are using InnoDB on every table, including the users table etc. Is it better to use MyISAM on this table, where we store the text version of the PDF files (from a memory usage/performance perspective)? 2) We use Sphinx for searching; however, the data must be retrieved for highlighting. Highlighting is done via the Sphinx API, but we still need to retrieve 10 rows in order to send them to Sphinx again. These 10 rows may take 50 MB of memory, which is quite large. So I am planning to split these PDF files into chunks of 5 pages in the database, so these 100,000 rows will become around 3-4 million rows, and a couple of months later, instead of having 300,000-350,000 rows, we'll have 10 million rows storing the text version of these PDF files. However, we will retrieve fewer pages, so instead of retrieving 400 pages to send to Sphinx for highlighting, we can retrieve 5 pages, and that will have a big impact on performance. Currently, when we search for a term and retrieve PDF files that have more than 100 pages, the execution time is 0.3-0.35 seconds; however, if we retrieve PDF files that have fewer than 5 pages, the execution time drops to 0.06 seconds, and it also uses less memory. Do you think this is a good trade-off? We will have millions of rows instead of 100k-200k rows, but it will save memory and improve performance. Is this a good approach to the problem, and do you have any other ideas for how to overcome it? The text version of the data is used only for indexing and highlighting, so we are very flexible. Thanks,

    Read the article

  • Having trouble uploading a file

    - by neo skosana
    Hi, I am having trouble uploading a file. First of all I have a class: class upload { private $name; private $document; public function __construct($nme,$doc) { $this->setName($nme); $this->setDocument($doc); } public function setName($nme) { $this->name = $nme; } public function setDocument($doc) { $this->document = $doc; } public function fileNotPdf() { /* Was the file a PDF? */ if ($this->document['type'] != "application/pdf") { return true; } else { return false; } } public function fileNotUploaded() { /* Make sure that the file was POSTed. */ if (!(is_uploaded_file($this->document['tmp_name']))) { return true; } else { return false; } } public function fileNotMoved($repositry) { /* move uploaded file to final destination. */ $result = move_uploaded_file($this->document['tmp_name'], "$repositry/$this->name.pdf"); if($result) { return false; } else { return true; } } } Now for my main page: $docName = $_POST['name']; $page = $_FILES['doc']; if($_POST['submit']) { /* Set a few constants */ $filerepository = "np"; $uploadObj = new upload($docName, $page); if($uploadObj->fileNotUploaded()) { promptUser("There was a problem uploading the file.",""); } elseif($uploadObj->fileNotPdf()) { promptUser("File must be in pdf format.",""); } elseif($uploadObj->fileNotMoved($filerepository)) { promptUser("File could not be uploaded to final destination.",""); } else { promptUser("File has been successfully uploaded.",""); } } The errors that I get are: Warning: move_uploaded_file(about.pdf)[function.move-uploaded-file]: failed to open stream: No such file or directory in... Warning: move_uploaded_file()[function.move-uploaded-file]: Unable to move 'c:\xampp\tmp\php13.tmp' to 'about.pdf' in... File could not be uploaded to final destination. (A sketch of a more robust destination path follows this entry.)

    Read the article
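
    In the errors above the destination ends up being just 'about.pdf', i.e. a path relative to whatever working directory the script happens to run in. A hedged sketch of a helper that anchors the destination to an absolute directory and creates it first; using __DIR__ as the base, and the standalone-function form rather than the asker's class method, are assumptions for illustration.

        <?php
        // Sketch only: move an uploaded file into an absolute destination directory,
        // creating the directory if necessary. $document mirrors the $_FILES entry
        // passed into the asker's class, and $name is the desired base name.
        function moveUploadedPdf(array $document, $name, $repository)
        {
            $dir = __DIR__ . '/' . trim($repository, '/');

            // Make sure the target folder exists before attempting the move.
            if (!is_dir($dir) && !mkdir($dir, 0755, true)) {
                return false;
            }

            $destination = $dir . '/' . $name . '.pdf';

            return move_uploaded_file($document['tmp_name'], $destination);
        }

        // Inside the asker's class, fileNotMoved($repositry) would be roughly:
        // return !moveUploadedPdf($this->document, $this->name, $repositry);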

  • Finding "Stuff" In OUM

    - by Dave Burke
    One of the first questions people ask when they start using the Oracle Unified Method (OUM) is “how do I find X?” Well, of course, no one is really looking for “X”!! But typically an OUM user might know the Task ID, or part of the Task Name, or maybe they just want to find out if there is any content within OUM that is related to a couple of key words they have in mind. Here are three quick tips I give people: 1. Open up one of the OUM Views, then click “Expand All”, and then use your browser’s search function to locate a key word. For example, in Google Chrome or Internet Explorer: <CTRL> F, then type in a key word, e.g. Architecture. This is a fast and easy option to use, but it only searches the current OUM page. 2. Use the PDF view of OUM. Open up one of the OUM Views, and then click the PDF View button located at the top of the View. Depending on your browser’s settings, the PDF file will either open up in a new window or be saved to your local machine. In either case, once the PDF file is open, you can use the built-in PDF search commands to search for key words across a large portion of the OUM Method Pack. This is a great option for searching the entire Full Method View of OUM, including linked HTML pages; however, the search will not include linked documents, e.g. Word, Excel. 3. Use your operating system's file index to search for key words. This is my favorite option, and one I use virtually every day. I happen to use Windows Search, but you could also use Google Desktop Search, or Finder on a Mac. All you need to do (on a Windows machine) is make sure your local OUM folder structure is included in the Windows index. Go to Control Panel, select Indexing Options, and ensure your OUM folder is included in the index, e.g. C:/METHOD/OM40/OUM_5.6. Once your OUM folders are indexed, just open up Windows Search (or Google Desktop Search) and type in your key words, e.g. Unit Testing. The reason I use this option the most is that the search takes place across the entire content of the indexed folders, including linked files. Happy searching!

    Read the article

  • Write binary stream to browser using PHP

    - by Dave Jarvis
    Background: Trying to stream a PDF report written using iReport through PHP to the browser. The general problem is: how do you write binary data to the browser using PHP? Working Code: The following code does the job, but (for many reasons) it is not as efficient as it should be (the code writes a file and then sends the file contents to the browser). // Load the MySQL database driver. // java( 'java.lang.Class' )->forName( 'com.mysql.jdbc.Driver' ); // Attempt a database connection. // $conn = java( 'java.sql.DriverManager' )->getConnection( "jdbc:mysql://localhost:3306/climate?user=$user&password=$password" ); // Extract parameters. // $params = new java('java.util.HashMap'); $params->put('DistrictCode', '101'); $params->put('StationCode', '0066'); $params->put('CategoryCode', '010'); // Use the fill manager to produce the report. // $fm = java('net.sf.jasperreports.engine.JasperFillManager'); $pm = $fm->fillReport($report, $params, $conn); header('Cache-Control: no-cache private'); header('Content-Description: File Transfer'); header('Content-Disposition: attachment, filename=climate-report.pdf'); header('Content-Type: application/pdf'); header('Content-Transfer-Encoding: binary'); header('Content-Length: ' . strlen( $result ) ); $path = realpath( "." ) . "/output.pdf"; $em = java('net.sf.jasperreports.engine.JasperExportManager'); $result = $em->exportReportToPdfFile($pm,$path); readfile( $path ); $conn->close(); Non-working Code: To remove the slight redundancy (i.e., write directly to the browser), the following code looks like it should work, but it does not: $em = java('net.sf.jasperreports.engine.JasperExportManager'); $result = $em->exportReportToPdf($pm); header('Content-Length: ' . strlen( $result ) ); echo $result; Content is sent to the browser, but the file is corrupt (it begins with the PDF header) and cannot be read by any PDF reader. Question: How can I take out the middle step of writing to the file and write directly to the browser so that the PDF is not corrupted? Thank you! (A sketch of one possible direct-output approach follows this entry.)

    Read the article
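
    A hedged sketch of the direct-to-browser version asked about above. It assumes the PHP/Java Bridge is in use (consistent with the java() calls in the question), whose java_values() helper converts the Java byte[] returned by exportReportToPdf() into a PHP value; if a different bridge is in play the conversion step will differ, and if java_values() hands back an array rather than a string it would need to be joined before echoing.

        <?php
        // Sketch only: stream the PDF without the intermediate file.
        $em = java('net.sf.jasperreports.engine.JasperExportManager');

        // exportReportToPdf() returns a Java byte[]; convert it to a native PHP
        // value before measuring its length or echoing it, otherwise the bridge's
        // proxy object gets stringified and the PDF arrives corrupted.
        $pdfBytes = java_values($em->exportReportToPdf($pm));

        header('Cache-Control: no-cache, private');
        header('Content-Type: application/pdf');
        header('Content-Disposition: attachment; filename="climate-report.pdf"');
        header('Content-Length: ' . strlen($pdfBytes));

        echo $pdfBytes;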

  • How can I convert this to a factory/abstract factory?

    - by Amitd
    I'm using MigraDoc to create a PDF document. I have business entities similar to those used in MigraDoc. public class Page{ public List<PageContent> Content { get; set; } } public abstract class PageContent { public int Width { get; set; } public int Height { get; set; } public Margin Margin { get; set; } } public class Paragraph : PageContent{ public string Text { get; set; } } public class Table : PageContent{ public int Rows { get; set; } public int Columns { get; set; } //.... more } In my business logic, there are rendering classes for each type: public interface IPdfRenderer<T> { T Render(MigraDoc.DocumentObjectModel.Section s); } class ParagraphRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Paragraph> { BusinessEntities.PDF.Paragraph paragraph; public ParagraphRenderer(BusinessEntities.PDF.Paragraph p) { paragraph = p; } public MigraDoc.DocumentObjectModel.Paragraph Render(MigraDoc.DocumentObjectModel.Section s) { var paragraph = s.AddParagraph(); // add text from paragraph etc return paragraph; } } public class TableRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Tables.Table> { BusinessEntities.PDF.Table table; public TableRenderer(BusinessEntities.PDF.Table t) { table =t; } public MigraDoc.DocumentObjectModel.Tables.Table Render(Section obj) { var table = obj.AddTable(); //fill table based on table } } I want to create a PDF page as: var document = new Document(); var section = document.AddSection();// section is a page in pdf var page = GetPage(1); // get a page from business classes foreach (var content in page.Content) { //var renderer = createRenderer(content); // // get Renderer based on Business type ?? // renderer.Render(section) } For createRenderer() I could use a switch/case or a dictionary and return the right type. How can I get/create the renderer generically based on the type? How can I use a factory or abstract factory here? Or which design pattern better suits this problem?

    Read the article

  • Yet another blog about IValueConverter

    - by codingbloke
    After my previous blog on a Generic Boolean Value Converter I thought I might as well blog up another IValueConverter implementation that I use. The Generic Boolean Value Converter effectively converts an input which only has two possible values to one of two corresponding objects.  The next logical step would be to create a similar converter that can map an input which has multiple (but finite and discrete) values to one of multiple corresponding objects.  To put it more simply, a Generic Enum Value Converter. Now we already have a tool that can help us in this area, the ResourceDictionary.  A simple IValueConverter implementation around it would create a StringToObjectConverter like so:- StringToObjectConverter using System; using System.Windows; using System.Windows.Data; using System.Linq; using System.Windows.Markup; namespace SilverlightApplication1 {     [ContentProperty("Items")]     public class StringToObjectConverter : IValueConverter     {         public ResourceDictionary Items { get; set; }         public string DefaultKey { get; set; }         public StringToObjectConverter()         {             DefaultKey = "__default__";         }         public virtual object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             if (value != null && Items.Contains(value.ToString()))                 return Items[value.ToString()];             else                 return Items[DefaultKey];         }         public virtual object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             return Items.FirstOrDefault(kvp => value.Equals(kvp.Value)).Key;         }     } } There are some things to note here.  The bulk of managing the relationship between an object instance and the related string key is handled by the Items property being a ResourceDictionary.  Also there is a catch-all “__default__” key value which allows for only a subset of the possible input values to be mapped to an object, with the rest falling through to the default. We can then set one of these up in Xaml:-             <local:StringToObjectConverter x:Key="StatusToBrush">                 <ResourceDictionary>                     <SolidColorBrush Color="Red" x:Key="Overdue" />                     <SolidColorBrush Color="Orange" x:Key="Urgent" />                     <SolidColorBrush Color="Silver" x:Key="__default__" />                 </ResourceDictionary>             </local:StringToObjectConverter> You could well imagine that in the model being bound these key names would actually be members of an enum.  This still works due to the use of ToString in the Convert method.  Hence the only requirement for the incoming object is that it has a ToString implementation which generates a sensible string instead of simply the type name. I can’t imagine right now a scenario where this converter would be used in a TwoWay binding, but there is no reason why it can’t be.  I prefer to avoid leaving the ConvertBack throwing an exception if that can be avoided.  Hence it just enumerates the KeyValuePair entries to find a value that matches and returns the key it's mapped to. Ah, but now my sense of balance is assaulted again.  Whilst StringToObjectConverter is quite happy to accept an enum type via the Convert method, it returns a string from the ConvertBack method, not the original enum type that arrived in the Convert.  
    Now I could address this by complicating the ConvertBack method and examining the targetType parameter etc.  However I prefer a different approach: deriving a new EnumToObjectConverter class instead. EnumToObjectConverter using System; namespace SilverlightApplication1 {     public class EnumToObjectConverter : StringToObjectConverter     {         public override object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             string key = Enum.GetName(value.GetType(), value);             return base.Convert(key, targetType, parameter, culture);         }         public override object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)         {             string key = (string)base.ConvertBack(value, typeof(String), parameter, culture);             return Enum.Parse(targetType, key, false);         }     } }   This is a more belts-and-braces solution, with specific use of Enum.GetName and Enum.Parse.  Whilst it's more explicit in that a developer has to choose to use it, it is only really necessary when using TwoWay binding; in OneWay binding the base StringToObjectConverter would serve just as well. The observant might note that there is actually no “Generic” aspect to this solution in the end.  The use of a ResourceDictionary eliminates the need for that.

    Read the article

  • Linux Email Server Auto-Reply

    - by Robert Smith
    I need to set up a mail server with the following functionality: if a user sends an email to a specific address on this server, the server must first check whether the email has a PDF attachment, do some processing on that PDF file, and then reply to the user's initial mail with the new PDF file attached. My question is how it would be possible to achieve this, and what software / mail server you would recommend. I'm thinking it could be solved the following way: when the server receives a new email, it executes an external Python script that checks the attachment, processes the PDF file, and then sends it back to the user's mailbox. What mail server would be able to do this, and what configuration does it need?

    Read the article
