Search Results

Search found 5101 results on 205 pages for 'jpeg to pdf'.


  • Which Calibre conversion format will keep a MOBI ebook's formatting intact?

    - by verve
    I've got a MOBI/Kindle book and I want to convert it to a format that's easy to mark up, make notes in, etc., but when I try to convert to TXT or PDF the result is awful. Are any of the other formats in Calibre's conversion options easily opened on any computer, and do any of them fare better after a MOBI conversion? On a side note, are there other programs that will convert a MOBI book better than Calibre? One that will keep the formatting... Win 7. IE 9.

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20 ms), times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior. For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php. There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23 AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs, the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 9:14:47 and 9:15:00 the same request appears again, except it is from 9:04:43 AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 9:05:06 completing between 9:19:55 and 9:19:58! I'm suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different from above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

        Request time    Complete time
        04:32:43        04:33:36
        04:32:50        04:33:36
        04:32:51        04:33:38
        04:33:05        04:33:38
        04:33:34        04:33:42
        04:33:05        04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing evidence of some server-level overload playing out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php. It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if press+125.php is a rabbit hole, but it is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

        // DOWNLOAD.php
        $file = new files();
        $file->comparison_filter("id", "=", $id); // sql to load
        if ($file->load()) {
            $file->serve();
        }

        // FILES
        function serve() {
            if ($this->is_loaded) {
                if (file_exists($this->get_value("filename"))) {
                    if ($this->get_value("content_type") != "") {
                        header("Content-Type: " . $this->get_value("content_type"));
                    }
                    header("Content-Length: " . filesize($this->get_value("filename")));
                    if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                        header("Cache-Control: private");
                        header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                    }
                    set_time_limit(0);
                    @readfile($this->get_value("filename"));
                    exit;
                }
            }
        }
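
    One avenue worth checking, as a sketch rather than a drop-in fix: readfile() pushes the whole file through PHP, so a client on a slow connection can pin an Apache child for minutes, and retries multiply the effect. Streaming in chunks and bailing out when the client disconnects caps how long an abandoned request can occupy the server. The method and field names below mirror the hypothetical files class from the question.

        // inside the files class: chunked alternative to readfile()
        function serve_chunked() {
            $path = $this->get_value("filename");
            if (!$this->is_loaded || !file_exists($path)) {
                return;
            }
            header("Content-Type: " . $this->get_value("content_type"));
            header("Content-Length: " . filesize($path));
            set_time_limit(0);
            $fp = fopen($path, "rb");
            while (!feof($fp)) {
                echo fread($fp, 8192);      // 8 KB at a time
                flush();                    // hand the chunk to Apache now
                if (connection_aborted()) {
                    break;                  // client is gone: stop working
                }
            }
            fclose($fp);
            exit;
        }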

    Read the article

  • A download manager for Linux which saves downloaded files in directories by date like 2012_06_29

    - by Gart
    I've been using Download Master on Windows for years, and what I liked most about it is that this program can automatically put downloaded files into directories by download date:

        /Downloads
        |
        |--/2012_06_28
        |  |
        |  |--a.zip
        |  |--b.pdf
        |  ...
        |
        |--/2012_06_29
        |  |
        |  |--c.txt
        |  ...
        ...

    I'm looking for something similar for Linux. Is there any free download manager that can do this? I have tried KGet and uGet, but they both seem to lack this feature. If there is a way to configure them to do that, I'll be happy to know about it. Thank you.
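
    In the meantime, a small shell wrapper gives the same layout with any command-line downloader (a sketch; wget and the target path are just examples):

        #!/bin/sh
        # sketch: save each download into ~/Downloads/YYYY_MM_DD/
        # (wget is an example; any CLI downloader with a target-directory option works)
        dir="$HOME/Downloads/$(date +%Y_%m_%d)"
        mkdir -p "$dir"
        wget -P "$dir" "$1"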

    Read the article

  • How much HDD space would I need to cache the web while respecting robots.txt?

    - by Koning Baard XIV
    I want to experiment with creating a web crawler. I'll start with indexing a few medium-sized websites like Stack Overflow or Smashing Magazine. If it works, I'd like to start crawling the entire web. I'll respect robots.txt. I'll save all HTML, PDF, Word, Excel, PowerPoint, Keynote, etc. documents (not exes, dmgs, etc., just documents) in a MySQL DB. Next to that, I'll have a second table containing all results and descriptions, and a table with words and on which pages to find those words (aka an index). How much HDD space do you think I need to save all the pages? Is it as low as 1 TB, or is it about 10 TB, 20? Maybe 30? 1000? Thanks
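
    A rough back-of-envelope, with every figure an assumption rather than a measurement: at about 50 KB of stored HTML per page, 1 TB holds on the order of 20 million pages. Published estimates of the indexed web run to tens of billions of pages, which would put a full crawl in the petabyte range before counting PDFs and Office documents, so the realistic budget depends almost entirely on how far beyond a few medium-sized sites the crawl goes.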

    Read the article

  • "Brute force attempt" on sending multiple emails

    - by bretddog
    While testing sending multiple emails, I successfully sent about 100 emails (with a 20 KB PDF attachment) to the same email address (my own), and they were all received. But on the next attempt, my cPanel account was blocked due to a "brute force attempt". Are there any special precautions I need to take when sending bulk emails? I simply looped through the code below without pausing between emails. What type of alert could that raise on the email server, and how should I avoid it?

        client = New SmtpClient(smtp, Convert.ToInt32(port))
        AddHandler client.SendCompleted, AddressOf OnAsyncSendComplete
        client.Credentials = New System.Net.NetworkCredential(usn, psw)
        client.SendAsync(mail, token)

    Should I wait for the SendComplete event for each email before sending the next?
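
    One conservative pattern, sketched under the assumption that the block was triggered by a burst of authenticated connections rather than by the message content: send synchronously and pause between messages, so the server sees one steady connection instead of a flood. Names follow the snippet above; mails is a hypothetical list of prepared messages.

        ' send one message at a time, with a small delay between them
        Dim client As New SmtpClient(smtp, Convert.ToInt32(port))
        client.Credentials = New System.Net.NetworkCredential(usn, psw)
        For Each mail As MailMessage In mails
            client.Send(mail)                 ' blocks until the server accepts it
            Threading.Thread.Sleep(500)       ' throttle to ~2 messages per second
        Next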

    Read the article

  • Write permissions on uploaded files - Linux, Apache, PHP

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice with servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is if I chmod 777 thefile.jpg after it has been transferred, which is not a working solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
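
    If the transfer goes through PHP's FTP functions, one thing worth trying, as a sketch ($conn, $remote, and $local are hypothetical variables for an already-open connection): relax the mode explicitly right after the upload.

        // upload, then make the file world-readable so Apache can serve it
        ftp_put($conn, $remote, $local, FTP_BINARY);
        if (function_exists('ftp_chmod')) {
            ftp_chmod($conn, 0644, $remote);          // rw-r--r--
        } else {
            ftp_site($conn, "CHMOD 0644 $remote");    // fallback for older PHP
        }

    The 600 default usually comes from the FTP daemon's umask, so setting a umask of 022 in the FTP server's configuration is the cleaner permanent fix.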

    Read the article

  • IE 8 doesn't appear to clear cache on demand. Is anyone else seeing this?

    - by Steve
    I have a client that uploads updated PDF files to her Concrete5 CMS through the file manager, replacing the old file with the same name. She then does a CMS "clear cache" and exits, as she should. Then, in testing, she finds that the old file still comes up when clicking on the link. On further review, the CMS file manager's version tracking shows that the file has been updated, and, for me, the new file comes up as it should when clicking the link. My client has also refreshed her browser cache, and still she only gets the old file when clicking on the link. She says that, while she can't seem to force an immediate cache update, overnight it appears to update. My client is also part of a large company-wide LAN and intranet. Is it possible that there is a cache function placed outside of her local browser and CMS cache that is not updating?

    Read the article

  • Unable to login through varnish cache

    - by ArunS
    I am setting up an Active Collab site on my new server. The setup is like below:

        Internet --- varnish ---- apache

    But I am not able to log in to the site through the Varnish cache, although I can log in to the site through Apache directly. Here is my VCL file:

        backend default {
            .host = "localhost";
            .port = "8080";
        }

        acl purge {
            "localhost";
        }

        sub vcl_recv {
            if (req.request == "PURGE") {
                if (!client.ip ~ purge) {
                    error 405 "Not allowed.";
                }
                return(lookup);
            }
            if (req.url ~ "^/$") {
                unset req.http.cookie;
            }
        }

        sub vcl_hit {
            if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
            }
        }

        sub vcl_miss {
            if (req.request == "PURGE") {
                error 404 "Not in cache.";
            }
            if (!(req.url ~ "wp-(login|admin)")) {
                unset req.http.cookie;
            }
            if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") {
                unset req.http.cookie;
                set req.url = regsub(req.url, "\?.*$", "");
            }
            if (req.url ~ "^/$") {
                unset req.http.cookie;
            }
        }

        sub vcl_fetch {
            if (req.url ~ "^/$") {
                unset beresp.http.set-cookie;
            }
            if (!(req.url ~ "wp-(login|admin)")) {
                unset beresp.http.set-cookie;
            }
        }

    When I try to log in through Varnish, I am redirected back to the login page. If I enter a wrong password, it does ask for the correct password.
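
    For what it's worth, the vcl_fetch block above strips Set-Cookie from every response whose URL doesn't match wp-(login|admin), and those are WordPress paths, so Active Collab's session cookie never reaches the browser and the login loops back. A minimal sketch of the idea, assuming Active Collab sets a standard PHP session cookie (adjust the name to whatever your install actually uses):

        sub vcl_recv {
            # let traffic that carries a session bypass the cache entirely
            if (req.http.Cookie ~ "PHPSESSID") {
                return(pass);
            }
        }

        sub vcl_fetch {
            # never strip the cookie the login response is trying to set
            if (req.url ~ "login") {
                return(pass);
            }
        }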

    Read the article

  • Files deleted. What could have happened?

    - by jjfine
    I'm having a weird issue today. I was writing and testing some simple CGI scripts this morning when I realized that I couldn't run them from one of the other computers on the (Windows) network. So I had my network admin come in and take a look at what was going on. A few minutes later a co-worker came in and told me that a bunch of files he was working with, as well as a bunch of others (all *.c files) on the network drive, got deleted. He also noticed some strange apache_dump_500.log.txt files in the same directories where the files got deleted. The apache_dump_500.log.txt files all look like this:

        REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
        REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
        REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
        REDIRECT_QUERY_STRING=
        REDIRECT_REMOTE_ADDR=<my computer's local ip>
        REDIRECT_REMOTE_HOST=
        REDIRECT_SERVER_NAME=<my computer's domain url>
        REDIRECT_SERVER_PORT=
        REDIRECT_SERVER_SOFTWARE=
        REDIRECT_URL=/cgi-bin/trojan.py

    I looked, and I don't have any trojan.py in my cgi-bin folder. All my Apache logs are clean, and the Windows event logger seems to have no traces of what happened either. My httpd.conf: http://pastebin.com/Yny2Yh8v I think we've got some kind of virus that added this trojan.py file to my cgi-bin, ran the script, and then deleted the script and any traces from the logs. Is this a thing that happens? Any ideas whatsoever would be much appreciated!

    Read the article

  • nginx - proxy_pass is working - Apache isn't doing what it should...

    - by matthewsteiner
    So, I've got this in my nginx.conf:

        location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
            root /var/www/vhosts/example.com/public/;
            access_log off;
            expires 30d;
        }

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    So anything that is a "static file" that exists will just be served by nginx. Otherwise, the request should be passed off to Apache. Right now, static files are working correctly. However, if something is passed to Apache and it's example.com or subdomain.example.com, Apache just spits out the "Apache 2 Test Page" that you get if there's nothing there. Apache worked fine before, so I'm guessing it has to do with the way nginx is "asking". I'm not sure though. Any ideas?
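
    A common cause, sketched under the assumption that Apache's virtual hosts were written for port 80 before nginx was put in front: once Apache listens on 8080, <VirtualHost *:80> blocks no longer match any request, so everything falls through to the default test page. The vhosts need to match the new port (example.com and the paths are the question's placeholders):

        # assumes httpd.conf already has: Listen 8080
        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerName example.com
            ServerAlias subdomain.example.com
            DocumentRoot /var/www/vhosts/example.com/public
        </VirtualHost>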

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / wordpress and Amazon S3. Now, I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this:

        /* If the request is for pictures, javascript, css, etc */
        if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
            /* Remove the cookie and make the request static */
            unset req.http.cookie;
            return (lookup);
        }

    I do not understand why this is done. Most of the examples also run nginx as the webserver. Now the question is: why would you use the Varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files, so that php-fpm / MySQL don't get hit that much. Am I correct, or am I missing something here? UPDATE I want to add some info to the question based on the answer given. If you have a dynamic website where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, this can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks of different cache apps and webserver apps: http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ nginx is actually faster at serving static content, so it makes more sense to just let those requests pass; nginx works great with static files. Apart from that, most of the time static content is not even on the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3, something like that. I think the Varnish cache is the last place where you want to have your static content stored.

    Read the article

  • Excel - How to count matches in data?

    - by JunkUtopia
    I am looking for patterns in the user journey of converted customers. I have each customer's details in a row and then each step of the journey in its own cell in columns, with up to 12 steps for each customer. For example, if I want to find the count of every customer who at any point in their journey has, say, downloaded a PDF and contacted us via email, what formula is best suited to this? I've tried COUNTIFS but couldn't get it to work over multiple columns. Thank you.
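
    One approach that works across a block of columns, as a sketch: it assumes the 12 step columns are B:M, that row 2 is the first customer, and that the step labels contain "pdf" and "email" (adjust to your real labels; COUNTIF accepts wildcards like "*pdf*"). Give each row a helper flag, then sum the flags:

        In N2, copied down (1 if the journey contains both steps):
            =IF(AND(COUNTIF(B2:M2,"*pdf*")>0, COUNTIF(B2:M2,"*email*")>0), 1, 0)

        Count of matching customers:
            =SUM(N2:N1000)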

    Read the article

  • How to enable hotlink protection without hardcoding my domain in the Apache config file?

    - by Jeff
    Been surfing around for a solution for a couple of days now. How do I enable Apache hotlink protection without hardcoding my domain in the config file, so I can port the code to my other domains without having to update the config file every time? This is what I have so far:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_REFERER} !^http://www\.example\.com [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]

    ... And this is what Apache suggests:

        SetEnvIf Referer example\.com localreferer
        <FilesMatch \.(jpg|png|gif)$>
            Order deny,allow
            Deny from all
            Allow from env=localreferer
        </FilesMatch>

    ... both of which hardcode the domain in their rules. The closest I came to finding any info that covers this is right here on ServerFault, but the conclusion was that it cannot be done. Based on my research, that appears to be true, but I didn't find any questions or commentary dedicated solely to this question. If anyone's curious, here is the link to the Apache 2 docs that cover this topic. Note that Apache variables (e.g. %{HTTP_REFERER}) can only be used in the RewriteCond text-string and the RewriteRule substitution arguments.
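
    One trick that reportedly gets around the limitation, sketched here and worth testing before relying on it: concatenate %{HTTP_HOST} and %{HTTP_REFERER} in the RewriteCond test string, then use a backreference within the same pattern to require that the referer's host match the request's host. No domain is hardcoded:

        RewriteCond %{HTTP_REFERER} !^$
        RewriteCond %{HTTP_HOST}@@%{HTTP_REFERER} !^([^@]*)@@https?://(www\.)?\1/ [NC]
        RewriteRule \.(gif|ico|jpe|jpeg|jpg|png)$ - [NC,F,L]

    This works because both variables are allowed in the test string, so the regex engine can compare them there; edge cases around the www prefix are only partially covered by the optional (www\.)? group.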

    Read the article

  • How to avoid index.php in Zend Framework route using Nginx rewrite

    - by Adam Benayoun
    I am trying to get rid of index.php in the default Zend Framework route. I think it should be corrected at the server level and not in the application (correct me if I am wrong, but I think doing it on the server side is more efficient). I run nginx 0.7.1 and php-fpm 5.3.3. This is my nginx configuration:

        server {
            listen *:80;
            server_name domain;
            root /path/to/http;
            index index.php;
            client_max_body_size 30m;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }
            location /min {
                try_files $uri $uri/ /min/index.php?q=;
            }
            location /blog {
                try_files $uri $uri/ /blog/index.php;
            }
            location /apc {
                try_files $uri $uri/ /apc.php$args;
            }

            location ~ \.php {
                include /usr/local/etc/nginx/conf.d/params/fastcgi_params_local;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SERVER_NAME $http_host;
                fastcgi_pass 127.0.0.1:9000;
            }

            location ~* ^.+\.(ht|svn)$ {
                deny all;
            }

            # Static files location
            location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                expires max;
            }
        }

    Basically, www.domain.com/index.php/path/to/url and www.domain.com/path/to/url serve the same content. I'd like to fix this using an nginx rewrite. Any help will be appreciated.
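
    A minimal sketch of the server-side fix (assuming a permanent redirect is wanted so the duplicate URLs fall out of search indexes): a regex location placed before the existing location ~ \.php block, since regex locations are matched in order, that strips the index.php prefix whenever a path follows it.

        # /index.php/path/to/url  ->  /path/to/url  (301)
        location ~ ^/index\.php/ {
            rewrite ^/index\.php(/.*)$ $1 permanent;
        }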

    Read the article

  • REST-based file server

    - by Chris Wenham
    I need to be able to PUT files and GET them later using nothing but HTTP, so I went searching for something that might match the terms "REST file server" or "HTTP file server" or "REST drop-box", etc. Unfortunately, these terms bring up the wrong kind of results on Google. What I want is the equivalent of an SMB fileshare over HTTP. Some ideal features:

    - Can PUT a file of any type at http://servername/service/any/path/I/want/document.pdf
    - Anyone with access can GET that file at the URL I PUT it at
    - Supports AV scanning on any new file that has been PUT
    - Supports DELETE of existing resources (files)

    Our shop runs Windows, but I'd be interested to know about Unix software that can do this kind of thing, too. It's to be used in an IT department for private users only. It won't be on a public-facing IP address. Does anything like this exist?
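
    For what it's worth, what is being described is essentially WebDAV-style HTTP PUT/GET, and any candidate server can be smoke-tested against these requirements with curl (the URL is the hypothetical one from the question):

        # upload
        curl -T document.pdf http://servername/service/any/path/I/want/document.pdf
        # fetch it back
        curl -O http://servername/service/any/path/I/want/document.pdf
        # delete it
        curl -X DELETE http://servername/service/any/path/I/want/document.pdf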

    Read the article

  • OS X Automator empty, blank or null value.

    - by Brian
    I have some data files, mostly Excel, Word, and PDF files; most of them have no extension, so they are missing the .doc or .xls. This data needs to be used in a Windows environment now. I have created Automator apps for each of the file types I want to add the extension onto. The problem is that they also add the extension to files that already have one, so data.xls becomes data.xls.xls. I would like to figure out a way to only add the extension to files without one. How do I tell the Finder filter that I only want it to return files without extensions? I see how to add a line to filter by extension, but I don't know how to let it know I want only blank or null, i.e. files without any extension. Thanks
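
    One workaround, as a sketch: use a Run Shell Script action in the Automator workflow (with "Pass input: as arguments") and skip anything whose name already contains a dot. The .xls target is just an example.

        for f in "$@"; do
            name=$(basename "$f")
            case "$name" in
                *.*) ;;                   # already has an extension: leave it alone
                *)   mv "$f" "$f.xls" ;;  # bare file: append the extension
            esac
        done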

    Read the article

  • How an SSD hard drive affected the speed of your website (ASP.NET/LINQ/MS SQL database)

    - by Sergey Osypchuk
    I have a small database (<1 GB), but we have a lot of complex logic in the website, and clients complain about the render time, which is 3-5 seconds. We are not Google, and thousands of users a day is our dream, so size is not a problem, but speed is important. Can anybody share experience with SSD drives for an ASP.NET (MVC)/LINQ/MS SQL based application? How much did your performance increase? UPDATE: this whitepaper states that it will be 20 times faster: http://www.texmemsys.com/files/f000174.pdf

    Read the article

  • Piping the output of a program to Preview.app

    - by Abhay Buch
    I'm using an application (the dot program of the Graphviz library) that generates a wide variety of file formats, including PostScript and PDF. It can send the result to stdout or to a file. I'm currently sending it to a file and opening it with Preview. Is there any way to pipe the output and have it be read by Preview, so that I don't have to generate a file and have it lying around? This is going to be used by a number of people who won't know the internal structure of the generating script, and I don't want to clutter their folders or complicate their lives. More generally, is there any way to take a program that sends its output to stdout and pass that output to a program that usually takes its input from a file, without actually creating a file?
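
    On OS X, open can read from standard input; the man page describes -f as opening stdin in the default text editor, but forcing an application with -a reportedly works for PDF data as well (a sketch, worth testing on your OS X version):

        dot -Tpdf graph.dot | open -f -a Preview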

    Read the article

  • Error compiling PHP 5.5.9 on CentOS 6.5 during make command

    - by Chris Mancini
    Here is the error message:

        cc: internal compiler error: Killed (program cc1)
        Please submit a full bug report,
        with preprocessed source if appropriate.
        See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions.
        make: *** [ext/fileinfo/libmagic/apprentice.lo] Error 1

    The very last thing make was processing was apprentice.lo, which appears to be part of the image manipulation libraries (maybe?). I am using Ansible to provision my instance. It is a Digital Ocean single-core 512 MB VM. I have been using Vagrant / Ansible with the same config locally for dev and it has compiled fine; this is the first cloud VM I am attempting to provision. The only difference is that the base image for my DO server comes from DO, while for local dev I built my own Vagrant box via VirtualBox from a stock CentOS basic server install (I pull it down from my Dropbox). The problem has been experienced by others and reported as a PHP bug report. My php Ansible role, up to the error:

        ---
        - name: Download php source
          get_url: url={{ php_source_url }} dest=/tmp
          register: get_url_result

        - name: untar the source package
          command: tar -xvf php-{{ php_version }}.tar.gz chdir=/tmp
          when: get_url_result.changed or php_reinstall

        - name: configure php 5.5
          command: >
            ./configure --prefix={{ php_prefix }}
            --with-config-file-path={{ php_config_file_path }}
            --enable-fpm --enable-ftp --enable-mbstring --enable-pdo
            --enable-soap --enable-sockets=shared --enable-zip --with-curl
            --with-fpm-group={{ nginx_group }} --with-fpm-user={{ nginx_user }}
            --with-freetype-dir=/usr/lib64/ --with-gd --with-jpeg-dir=/usr/lib64/
            --with-libdir=lib64 --with-mcrypt --with-openssl --with-pdo-mysql
            --with-pear --with-readline --with-tidy --with-xsl --with-zlib
            --without-pdo-sqlite --without-sqlite3
            chdir=/tmp/php-{{ php_version }}
          when: get_url_result.changed or php_reinstall

        - name: make clean when reinstalling
          command: make clean chdir=/tmp/php-{{ php_version }}
          when: php_reinstall

        - name: make php
          command: make chdir=/tmp/php-{{ php_version }}
          when: get_url_result.changed or php_reinstall

    Thanks in advance for any help. :)
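
    The "Killed" message on a 512 MB VM usually means the kernel's OOM killer stopped cc1 rather than a genuine compiler bug; the local Vagrant box presumably had more memory or swap available. A sketch of the usual workaround, adding a temporary swap file before compiling (size and path are examples):

        # create and enable a 1 GB swap file
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile
        # after make finishes: swapoff /swapfile && rm /swapfile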

    Read the article

  • Forcing Acrobat Reader font

    - by Jack
    Hello, I have a netbook with Linpus Linux, and I'm trying to open automatically generated documents with Acrobat Reader that use Verdana but do not have it embedded inside the PDF file. Linpus doesn't come natively with any Verdana font, so I had to install it inside /usr/share/fonts/ by doing mkfontdir and fc-cache to force a recache of the fonts. Since then I've been able to select it inside other programs (e.g. OpenOffice), but I'm still unable to open these PDFs; it seems that Acrobat is unable to find the font anyway. Since I have no control over how these PDFs are generated, is there a way to force Acrobat to use a specific font if the one it needs is not found? Or maybe Acrobat needs a different kind of font configuration on Linux? Thanks in advance

    Read the article

  • Render a 3D image as a 2D vector image

    - by Clinton Blackmore
    Is there any software that will take a 3D model (in any format) and allow you to render it as a 2D vector image (preferably as either an .SVG or .PDF)? My intention is to render LEGO building instructions this way. While there are many tools that let you view them or generate nice rasterized output, I'd really like to be able to generate vectorized output. Textures are not needed, and hidden-line removal may not be needed. I could use a tool that works on any platform (although my preference is OS X, then Linux, then Windows). Open source is preferred. If no one knows of a tool that does this, does anyone have a good recommendation of something to hack on and add a feature to output via Cairo?

    Read the article

  • Linux: disable USB without disabling power

    - by Ergot
    TLDR: I want to toggle between the following usages of a USB port via the terminal:

    - use it like a normal USB port
    - only supply power to charge

    Story: I recently got something like a Magna Doodle that can save your drawings to PDF, which can be moved to your computer via USB afterwards. The thing is that you can't save anything while it's plugged in. Because USB is the only way to charge it, because it bugs me that I can't find a software solution, and out of laziness, I want to keep it plugged in and toggle the data connection to the computer only when needed. I noticed that it charges and is usable when it is plugged in while the computer is shut down or suspended, so I guess there's a way to do it.

    Tech info:
    - Computer: ThinkPad X201
    - Linux kernel: 3.14.5-1-ARCH
    - "Magna Doodle": Boogie Board Sync
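
    A sketch of one mechanism that should cut the data connection without cutting power (the device id 1-1 is a placeholder; find the real one with lsusb -t or by watching dmesg when the device is plugged in):

        # as root: detach the device from its driver (data stops, VBUS power stays)
        echo '1-1' > /sys/bus/usb/drivers/usb/unbind
        # reattach when you want to transfer files
        echo '1-1' > /sys/bus/usb/drivers/usb/bind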

    Read the article

  • What are current options to scan or convert a handwritten note to a file on my laptop?

    - by goldenmean
    I wonder why there are not many options when it comes to a scan or conversion device that could be connected to a laptop/desktop and could: 1] Allow me to write with a digital pen on some special surface connected to my laptop, and thus convert my handwritten notes to PDF/JPG/Word (Microsoft's failed attempt at a Windows-based tablet PC in the past comes to mind, but that's gone now). Is there any such solution I can use with my laptop? 2] A document scanning device, apart from a flatbed scanner (integrated these days into multi-function printers); anything that is portable enough to connect to my laptop?

    Read the article

  • The very bare minimum of latex to compile documents

    - by ldigas
    I'm creating a relatively small console program which will be used by some other people as well. As part of its output it will create a tex file containing two tables, a few rows of text, and one plot. Now, my program is pretty small, under a MB. The problem is I can't count on my users to have LaTeX installed, so I'd like to include the very bare minimum of files required to compile it to PDF. What would be a good place to start searching on that topic, or even better, does anyone know what I would need to include with it to accomplish that? I remember my last LaTeX install being pretty... well, gigantic. Kind regards!
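
    For sizing the question, it helps to pin down exactly what the generated file needs from the distribution. A document like the sketch below (file names are hypothetical) needs only the article class, the built-in tabular environment, and the graphicx package for the plot:

        \documentclass{article}
        \usepackage{graphicx}
        \begin{document}
        A few rows of explanatory text.

        \begin{tabular}{lr}
        Item & Value \\
        A    & 1    \\
        B    & 2    \\
        \end{tabular}

        \includegraphics[width=\linewidth]{plot.pdf}
        \end{document}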

    Read the article
