Search Results

Search found 45245 results on 1810 pages for 'html content extraction'.

Page 214 of 1810

  • Apache only transferring partial content from a Samba share

    - by thaBadDawg
    I have an Apache server running on CentOS 5.3. It currently hosts 12 sites with no known issues. (I mention this to point out that my Apache installation has performed flawlessly up to now.) I'm adding a new site where the DocumentRoot of the new VirtualHost is a Samba share. At the server's command line I can run cp video.m4v ~ and the whole file is copied to my home directory correctly, but when I access the file from IE/Firefox/Safari/Chrome, only a partial result of about 33 KB comes back. The same thing happens with my image and audio files. If I make the files local to the server by copying them off the share and serving them from local disk, they transfer fine. Any ideas?
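
    One avenue worth checking (an assumption on my part, not something confirmed in the question): Apache's zero-copy file delivery (sendfile/mmap) is known to misbehave on network filesystems such as CIFS and NFS and can hand back truncated or stale content. A minimal sketch of the new VirtualHost with those optimizations turned off (hostname and paths are placeholders):

        <VirtualHost *:80>
            ServerName newsite.example.com
            DocumentRoot /mnt/samba/newsite

            # sendfile() and mmap() often return stale or truncated data when
            # the DocumentRoot lives on a CIFS/NFS mount; fall back to plain
            # read()/write() delivery for this vhost
            EnableSendfile Off
            EnableMMAP Off

            <Directory /mnt/samba/newsite>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>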

    Read the article

  • Recommendations for a VMware web server environment with a load balancer

    - by Ben
    We run IIS websites on a VMware production server that pull image and video content from a separate IIS instance on another server (the media server). The media calls (images and video) are plain http:// requests, not a streaming application. During peak traffic periods we clone the production server five times and have a load balancer distribute traffic across all five production servers. The media server does not get scaled up, and we noticed that its CPU and other resources become heavily taxed during these periods. Would it make sense to run the media server's IIS instance locally on the production server so it gets cloned along with it, and add a load-balancer rule to route these media calls? Or would it be better to allocate more resources (memory and CPUs) to the media server VM and not clone it with the production servers? Recommendations are sincerely appreciated.

    Read the article

  • Disable the use of Internet Explorer through policies when called from HTML Help

    - by Stephane
    Hello, I have a locked-down environment where users are prohibited from doing, well, basically anything but running the specific programs we specify. We just switched a program from the venerable WinHelp format to HTML Help (CHM), but that seems to have an unwanted and rather dangerous side effect: when a user clicks a hyperlink inside the HTML Help, a new Internet Explorer window is opened and the user is free to browse and do terrible things to my server (well, not that much, but still...). I have checked the session in this case and the IE window is actually hosted within the help engine: there is no iexplore.exe process running in the user session (and there cannot be: it's explicitly prohibited). We have disabled all help for now until we find a solution. I'm working with the help team to remove all external URLs from the help file, but that is going to be a long and error-prone task. Meanwhile, I've checked all the Group Policy options, but I have to say I was unable to find anything that would prevent a standalone IE window hosted in an arbitrary process from appearing. I don't want to disable WinHTTP or the IE rendering engine or anything of the sort, but I need to prevent all users who are members of a specific AD group from ever having an IE window displayed to them. The servers are running Windows 2003 and Citrix MetaFrame 4.5. Thanks in advance

    Read the article

  • Indexing text file content with command line query

    - by Drew Carlton
    I take daily notes in a plain-text file labeled with the date in YYYYMMDD format. These files are no more than 100 lines long and are written in a blog-style format. I'd like to be able to search these files as if they were blog posts indexed by Google, with a phrase query returning the most relevant/recent filenames, along with a snippet containing the relevant part. Ideally it would be something like this: #searchindex "laptop no sound" returns: 20100909.txt: ... laptop sound isn't working... 20100101.txt: ... sound is too loud... debating what laptop to buy... and so on and so forth. I'm working on a Linux platform (Debian with GNOME). I've looked at Beagle and Tracker, but they seem like complete overkill for what I want.
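
    For illustration, here is a minimal sketch of what such a #searchindex command could look like as a shell script. The notes directory, the word-count ranking, and the one-line snippet are all assumptions to adjust:

        #!/bin/sh
        # searchindex -- rank note files by how many query words they contain,
        # newest first on ties, then print a snippet matching the first word.
        # Usage: searchindex laptop no sound
        NOTES_DIR="$HOME/notes"    # assumption: where the YYYYMMDD.txt files live

        for f in "$NOTES_DIR"/*.txt; do
            hits=0
            for word in "$@"; do
                grep -qi -- "$word" "$f" && hits=$((hits + 1))
            done
            [ "$hits" -gt 0 ] && printf '%s %s\n' "$hits" "$f"
        done | sort -k1,1rn -k2,2r | while read -r hits f; do
            printf '%s: %s\n' "$(basename "$f")" "$(grep -im1 -- "$1" "$f")"
        done

    If this outgrows a shell script, recoll (a Xapian-based desktop indexer with a command-line query mode) might be a lighter-weight middle ground than Beagle or Tracker.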

    Read the article

  • Dynamically reference a Named Table Column via cell content in Excel

    - by rcphq
    How do I reference an Excel Table column dynamically in Excel 2007? That is, I want to reference a named column of a named table, where which column it is varies with the value of a cell. I have a Table in Excel (let's call it Table1) and I want to reference one of its columns (let's call it column1) dynamically from a value in another cell (A1), so that I can achieve the following result: when I change A1, the formula that counts Table1[DynamicallyReferencedColumnName] gets updated to the new reference. I tried using =Count(Table1[INDIRECT("$A$1")]) but Excel says the formula contains an error. Example: if A1 = names then the formula would equal Count(Table1[names]); if A1 = lastname then the formula would equal Count(Table1[lastname]).
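
    For what it's worth, INDIRECT generally has to wrap the entire structured reference as a text string rather than sit inside the brackets; a sketch, assuming the table really is named Table1:

        =COUNT(INDIRECT("Table1[" & A1 & "]"))

    With A1 containing names this resolves to =COUNT(Table1[names]). Note that INDIRECT is a volatile function, so it recalculates on every worksheet change.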

    Read the article

  • Wget site mirror, links with rel="<content>" not followed

    - by Pacifika
    While creating a site mirror using wget 1.12 on Ubuntu, links with a rel attribute set are not downloaded: <a href="link" rel="tag">text</a>. rel="tag" is a microformat (by adding rel="tag" to a hyperlink, a page indicates that the destination of that hyperlink is an author-designated "tag", or keyword/subject, for the current page). My WordPress theme uses this for links to tags, so 99% of the site is ignored. Edit: it turns out all my permalinks use rel="bookmark" and are skipped as well. I'm using the following wget command (this ignores robots.txt and also follows nofollow links): wget -mkp -e robots=off http://site How do I make wget follow links that have rel set?

    Read the article

  • IIS SMTP Configure Delivery Status Notification Content

    - by user37181
    Hi, how can I configure the IIS SMTP server not to attach the original mail to Delivery Status Notification messages? The problem is that when sending newsletters with fairly large attachments, all those attachments are attached again to the DSN messages, which quickly fills up the administrator's mailbox. Thank you

    Read the article

  • Firefox completes the address bar with content absent from my history

    - by Antoine
    I have set Firefox to complete the address bar with elements from the history only (the other options are: nothing, bookmarks, and history+bookmarks). However, Firefox still completes the address bar with elements that are no longer in my history; a search in the history returns 0 results for the string in question. How can I solve this without losing my entire history? I have already tried Shift+Delete on the elements I would like to remove, without success. How can I find the source of a given completion (e.g., with an SQL query against the SQLite files used to store history)? I'm using Firefox 16.0.2 on OS X 10.8.2.
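
    For the "find the source of a completion" part, history and location-bar data live in places.sqlite inside the Firefox profile. A sketch of querying it directly (the profile path is the OS X default; close Firefox first, or copy the file elsewhere, so the database is not locked):

        cd ~/Library/Application\ Support/Firefox/Profiles/*.default
        sqlite3 places.sqlite \
          "SELECT url, title, frecency FROM moz_places
           WHERE url LIKE '%stale-string%'
           ORDER BY frecency DESC LIMIT 20;"

    The moz_inputhistory table (which maps what you typed to the entry you picked) is another place a stale completion may be hiding.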

    Read the article

  • Using Google's App Engine as CDN for static files

    - by Saif Bechan
    I am planning on moving my static files to Google's App Engine and was wondering whether this is a good idea. I have read that Google may cache your files in multiple locations, which is a good thing in my opinion. The setup should also be quite easy in Eclipse with the GAE plugins. But I still have doubts about the performance. Is App Engine optimized for serving static content? Right now Nginx serves my static content; will App Engine perform the same way? Are there any other ups or downs to using this method?
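
    For reference, files declared as static are served by Google's frontend infrastructure rather than by your application instances. A minimal app.yaml sketch (app id, runtime, and expiration below are placeholders/assumptions; with the Java SDK that the Eclipse plugin uses, the equivalent settings live in appengine-web.xml instead):

        application: my-static-assets   # placeholder app id
        version: 1
        runtime: python27               # assumption; any runtime works for static handlers
        api_version: 1
        threadsafe: true

        default_expiration: "30d"       # Cache-Control lifetime for the static handlers

        handlers:
        - url: /static
          static_dir: static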

    Read the article

  • Delete cell content in Libre (Open) Office based on the cell value

    - by take2
    I have a huge CSV file (tens of thousands of rows) that I need to filter based on different criteria. After trying to find a proper CSV editor, I decided to use LibreOffice Calc. CSVed is great, but it supports neither UTF-8 nor macros for advanced filtering. So, there are 4 columns, 3 of which contain numbers (with decimals) and 1 of which contains text. I'm trying to find a way to delete rows with macro code. I can achieve the desired behavior with filters too, but it's annoying to type all of the filtering values over and over again, and there doesn't seem to be a way to export the filter and reuse it. These rows should be deleted: the ones that don't contain certain words in the text column (column A). There are a few thousand different words used in that column and I want to keep only the rows that contain one of about 30 words there. Additionally, the numbers in the other columns should be bigger than 3.8 (column B) and 4.5 (column C), and smaller than 20 (column C). The row-deletion type is "Shift up". Hopefully I have explained it well. Thanks a lot in advance for your help!
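
    A sketch of such a macro in LibreOffice Basic (Tools > Macros), assuming the data sits on the first sheet with no header row; the keyword list, sheet index, and column positions are assumptions to adjust, and the thresholds are the ones described above:

        Sub FilterRows
            Dim oSheet As Object, oCursor As Object
            Dim aKeep As Variant
            Dim i As Long, j As Integer, nLastRow As Long
            Dim sText As String, bKeep As Boolean

            aKeep = Array("word1", "word2", "word3")     ' the ~30 words to keep
            oSheet = ThisComponent.Sheets.getByIndex(0)  ' first sheet

            ' find the last used row so we do not walk the whole empty sheet
            oCursor = oSheet.createCursor()
            oCursor.gotoEndOfUsedArea(False)
            nLastRow = oCursor.RangeAddress.EndRow

            For i = nLastRow To 0 Step -1                ' bottom-up: deletions shift rows up
                sText = oSheet.getCellByPosition(0, i).getString()   ' column A (text)
                bKeep = False
                For j = LBound(aKeep) To UBound(aKeep)
                    If InStr(LCase(sText), LCase(aKeep(j))) > 0 Then bKeep = True
                Next j
                If bKeep Then
                    If oSheet.getCellByPosition(1, i).getValue() <= 3.8 Then bKeep = False  ' column B
                    If oSheet.getCellByPosition(2, i).getValue() <= 4.5 Then bKeep = False  ' column C
                    If oSheet.getCellByPosition(2, i).getValue() >= 20 Then bKeep = False   ' column C
                End If
                If Not bKeep Then oSheet.Rows.removeByIndex(i, 1)    ' delete, shifting rows up
            Next i
        End Sub

    Deleting rows one at a time is slow on tens of thousands of rows, so expect the macro to take a while; it is a sketch rather than an optimized solution.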

    Read the article

  • Caching static content from Adobe, Microsoft, etc

    - by Tim
    I'm currently running the Apple SUS on a Mac OS X Server in a small office environment. It works well for Apple updates, but I'm still stuck with either manually downloading and installing Adobe/Microsoft updates on each computer or running them through a Squid cache, with the blind faith that Squid will keep the files I actually want to stay cached. What is the best way to cache updates locally for applications like the Adobe Updater or Microsoft AutoUpdate? Ideally cached in such a way that I can tell which files I do or do not have cached. It would also be nice to be able to cache things for other software like Firefox and Sparkle-enabled apps, but these are usually small enough to ignore.
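
    If Squid stays in the picture, one way to make its behaviour less of a blind-faith matter is to give the update payloads an explicit refresh_pattern and a cache large enough to hold them, then watch access.log for TCP_HIT/TCP_MISS on the update URLs. A sketch for squid.conf (the extensions, sizes, and lifetimes are assumptions to tune):

        # allow large installer/update payloads into the cache
        maximum_object_size 512 MB
        cache_dir ufs /var/spool/squid 20000 16 256

        # keep installer files for up to 30 days even if the origin marks them
        # as already expired (refresh_pattern lifetimes are in minutes)
        refresh_pattern -i \.(exe|msi|msp|cab|dmg|pkg)$ 10080 100% 43200 override-expire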

    Read the article

  • Screen flicker during content update, especially in Firefox

    - by Denis Malinovsky
    I'm using the Nouveau video driver for my NVIDIA GeForce 6150SE nForce 430 video card with Ubuntu 10.04. The screen flickers frequently, especially when I'm loading pages with many images/banners in Firefox. I tried the proprietary NVIDIA driver, but it behaves even worse, and the nv driver doesn't work at all. I have also filed a bug report on Launchpad if you need any additional information.

    Read the article

  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed that on a few websites, when a page is requested from their server, they grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset-management concept?

    Read the article

  • How to block spam site republishing my content

    - by Fo.
    I noticed today that Google search results shows some spam copies of one of my sites. The url looks something like this: http://[subdomain].spamsite.com/www.example.com ...where example.com is my site. In my Apache access logs I'm noticing several lines like the following whenever I load the above url: 127.0.0.1 - - [219/Oct/2012:19:27:34 +0000] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)" The spammer's site shows an exact up to date copy of my site, so I think they are pulling in live data. Any idea how I can block this traffic?
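
    Incidentally, the 127.0.0.1 lines marked "internal dummy connection" are almost certainly Apache waking up its own child processes, not the scraper. If the spam site is proxying your content live, its servers have to fetch every page themselves, so their source IPs should appear in the access log; one hedged option is simply to deny those addresses. A sketch for Apache 2.2 (.htaccess or vhost config), with a placeholder address:

        # 203.0.113.45 is a placeholder: substitute the scraper's address(es)
        # found in access.log
        Order Allow,Deny
        Allow from all
        Deny from 203.0.113.45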

    Read the article

  • Does Tomcat or Jetty cache dynamic content?

    - by Continuation
    I'm working on a Servlet app with contents that are updated periodically. Hence, between updates any dynamic pages generated by the Servlet can be cached. Does Tomcat or Jetty (or any Servlet container) offer a way to cache dynamically generated pages? Or would I need to use a caching reverse proxy like Squid to accomplish that?

    Read the article

  • Lots of artifacts while streaming HD content with VLC 0.9.9 on CentOS

    - by Zsub
    I'm trying to stream (multicast) an x264-encoded file using VLC. This in itself succeeds, but the stream has a huge number of artifacts, which seems to suggest that the data cannot be transported fast enough. If I check network usage, though, it's only using about 15 Mbit/s. I have a similar SD stream which works perfectly. I think I could improve stream performance by not streaming the raw data, but I cannot seem to get this working. On keyframes all artifacts disappear for a short while (less than a second). This is the command I use (all on one long line): vlc -vv hdtest.mkv --sout '#duplicate{dst=rtp{dst=ff02::1%eth1,mux=ts,port=5678,sap,group="Testgroup",name="TeststreamHD"}}' --loop
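
    For the "not streaming the raw data" idea, the usual approach is to put a transcode step in front of the rtp output so the bitrate is bounded. A hedged sketch of the same command with that step added (the codec names and bitrates are assumptions, and re-encoding H.264 in real time needs a reasonably fast CPU):

        vlc -vv hdtest.mkv --sout '#transcode{vcodec=h264,vb=4096,acodec=mp4a,ab=192}:rtp{dst=ff02::1%eth1,mux=ts,port=5678,sap,group="Testgroup",name="TeststreamHD"}' --loop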

    Read the article

  • Strategy/insights for avoiding document content loss due to encryption

    - by pbernatchez
    I'm about to encourage a group of people to begin using S/MIME and GPG for digital signatures and encryption. I foresee a nightmare of encrypted documents that can no longer be recovered because of lost keys. The thorniest issue is archiving. The natural way to preserve privacy in an archive is to archive the encrypted document, but that opens us up to the risk of a lost key or a forgotten passphrase when the time comes to unarchive a document, which may be a long way in the future. That would be equivalent to having destroyed the document. My first thought is archiving keys with the documents, but that still leaves the forgotten passphrase, and archiving the passphrase too would be tantamount to archiving in the clear: no privacy. What approaches do you use? What insights can you offer on the issue?
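
    One common mitigation (an approach, not the only answer): encrypt every archived copy to an additional, organization-held escrow key whose private half and passphrase are kept offline (printed, split, or in a safe), so recovery never depends on one person's key or memory. With GnuPG that is just an extra recipient; the key ids below are placeholders:

        # encrypt to the author and to the organisation's escrow key
        gpg --encrypt --recipient alice@example.com --recipient archive-escrow@example.com report.odt

        # later, either private key can decrypt the archived copy
        gpg --decrypt report.odt.gpg > report-recovered.odt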

    Read the article

  • Remote I/O costs with a Content Delivery Network

    - by x711Li
    As far as I know, the time needed to scan a directory grows with the number of files in it because of I/O costs. Would the administrative cost of placing the files in a hashed directory tree for uploading/downloading through a CDN API be worth the added efficiency? For instance, given a filename foo.mp3, its MD5 hash is 10ebb1120767e9de166e0f5905077cb1, so storing foo.mp3 in ./10/eb/foo.mp3 would allow for fewer files per directory (MD5 output is hexadecimal, so two characters give 16^2 = 256 root directories with 256 subdirectories each, and little chance of hash collision). Considering the directories themselves are not loaded, would the I/O cost of directory scanning still exist with direct uploading/downloading?
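
    For illustration, deriving the two-level shard path from the digest is a couple of lines of shell (bash syntax; the filename is the example from the question):

        f=foo.mp3
        h=$(md5sum "$f" | cut -c1-4)   # first four hex chars, e.g. "10eb"
        dir="${h:0:2}/${h:2:2}"        # -> "10/eb"
        mkdir -p "$dir" && cp "$f" "$dir/"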

    Read the article

  • Some html5 video content will not play in Chrome - except in Private Browsing Mode

    - by oligofren
    I have had this problem for a couple of years, probably because all my settings get transferred when I log into Chrome: most videos on the net play fine in Chrome, but on certain sites none of the videos will play. This is not a codec problem, because opening the same video in Chrome's private browsing mode plays it just fine. Since most (all?) extensions are disabled in private browsing mode, I guessed the problem had to be in my extensions, so I disabled them all and also disabled developer mode. The problem persisted. An example HTML5 video from the W3C plays fine; this video from Pecha Kucha does not.

    Read the article

  • Sed: Deleting all content matching a pattern

    - by Svish
    I have some plist files on mac os x that I would like to shrink. They have a lot of <dict> with <key> and values. One of these keys is a thumbnail which has a <data> value with base64 encoded binary (I think). I would like to remove this key and value. I was thinking this could maybe be done by sed, but I don't really know how to use it and it seems like sed only works on a line-by-line basis? Either way I was hoping someone could help me out. In the file I would like to delete everything that matches the following pattern or something close to that: <key>Thumbnail<\/key>[^<]*<\/data> In the file it looks like this: // Other keys and values <key>Thumbnail</key> <data> TU0AKgAAOEi25Pqx3/ip2fak0vOdzPCVxu2RweuPv+mLu+mIt+aGtuaEtOSB ... dCBBcHBsZSBDb21wdXRlciwgSW5jLiwgMjAwNQAAAAA= </data> // Other keys and values Anyone know how I could do this? Also, if there are any better tools that I can use in the terminal to do this, I would like to know about that as well :)
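
    Sed can do this even though it works line by line, because an address range /start/,/end/ applies a command from the line matching the first pattern through the next line matching the second. A sketch that writes to a new file rather than editing in place (the -i flag behaves differently between GNU sed and the BSD sed shipped with OS X):

        sed '/<key>Thumbnail<\/key>/,/<\/data>/d' Input.plist > Output.plist

    This removes the <key>Thumbnail</key> line, the opening <data> line, the base64 block, and the closing </data> line; it assumes the <data> element always directly follows the Thumbnail key.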

    Read the article

  • Web Content Filtering for Windows Clients

    - by djoyce
    I'm working with a small business to solve a bunch of problems. One is that their Windows 7 POS registers need web access restricted to only three remote-support sites, while the back-office machine needs an unfiltered connection. I'd like something I can install and configure on the few registers to block all but those few sites. In a perfect world this would restrict the normal register user while leaving the admin user unfiltered. Free is best, if it works, but a small fee would be alright too. Microsoft's Family Safety filter is close, but it requires a Windows Live account, which isn't ideal but may be alright. Does anyone use it in a small-business environment? I'd prefer something easily managed on the local machines. K9 Web Protection is interesting and I'm going to look into it more. Are there other options? It seems like someone would have made something simple like this as an open-source project, but maybe not.
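
    One lightweight, hedged option that needs no extra software: point the registers' IE/system proxy settings at a proxy auto-config (PAC) file that lets only the three support sites go direct and sends everything else to a dead end; because proxy settings are per-user, the admin account can simply not use the PAC. A sketch (hostnames are placeholders), though it only covers software that honours the system proxy:

        // allow.pac -- allow three support sites, black-hole everything else
        function FindProxyForURL(url, host) {
            var allowed = ["support1.example.com",
                           "support2.example.com",
                           "support3.example.com"];
            for (var i = 0; i < allowed.length; i++) {
                if (dnsDomainIs(host, allowed[i])) {
                    return "DIRECT";
                }
            }
            return "PROXY 127.0.0.1:9";   // nothing listens here, so requests fail
        }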

    Read the article

  • Cannot delete folder - Content seems to be nested recursively

    - by RikuXan
    I cannot delete a folder located on my hard disk by any means. I don't quite know how it was created; all I know is that it is a pretty deep structure of folders (too deep to delete in one go, because of the Windows restriction on path names being too long). The problem in the end is that I can't "pull out" the inner folders, because they don't seem to be folders anymore (the context menu lacks things like "Properties", "Cut", "Copy", "Delete", etc.). Here is a picture of what a right-click looks like on one of these "folders". As you can see, the current folder is nested very deep, but that is not the problem; the problem is the one I left-clicked on. Does anyone have advice on how to get rid of these? I ran chkdsk and it reported no errors. I also tried deleting those folders from an Ubuntu VM in VMware, without success, and a batch file from a volunteer on the MS boards that should automatically de-nest such folders, but I guess mine is a special case, since the tool only created more such folders.
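
    One trick that often clears out trees that are too deep or too long for Explorer (the paths below are placeholders; run from an elevated cmd.exe): mirror an empty folder over the broken one with robocopy, which deletes everything in the destination that isn't in the source, then remove the now-empty shell. Robocopy ships with Vista/7 and is in the Resource Kit for XP.

        mkdir C:\empty
        robocopy C:\empty "D:\path\to\broken_folder" /MIR
        rd /s /q "D:\path\to\broken_folder"
        rd C:\empty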

    Read the article

  • How to make sure clients update their browser cache when my website is updated?

    - by user64204
    I am using the HTTP/1.1 Cache-Control header to implement client-side caching. Since I update my website only once a month, I would like the CSS and JS files to be cached for 30 days with Cache-Control: max-age=2592000. The problem is that the 30-day period defined by Cache-Control doesn't coincide with the website's update cycle: it starts from the moment a user visits the site and ends 30 days later, which means an update could occur in the meantime and users would be running with outdated content for a while. That could break the rendering of the website if, for instance, the HTML and CSS no longer match. How can I cache content on the client for periods of several days but somehow get users to refresh their CSS/JS files after the website has been updated? One solution I could think of: if website updates can be scheduled, the max-age returned by the server could be decreased a little every day so that, no matter when people visit the website, the end of the caching period coincides with the update of the website. But changing the server configuration every day goes against one of my sysadmin principles (once it's running, don't touch it).
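
    The usual way to square a long max-age with an unpredictable release date is to version the asset URLs themselves: keep max-age=2592000, but change the URL whenever the site is updated, so clients treat the new release as a brand-new resource while old copies stay harmlessly cached. A sketch (the version token is a placeholder for whatever your deploy process can stamp in):

        <!-- bump the token at each release; the changed URL forces a fresh fetch -->
        <link rel="stylesheet" href="/css/site.css?v=2012-11">
        <script src="/js/app.js?v=2012-11"></script>

    Fingerprinting the filename itself (e.g. site.a1b2c3.css) works the same way and also plays nicer with caches that refuse to store URLs containing query strings.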

    Read the article
