Search Results

Search found 23262 results on 931 pages for 'content analysis'.

Page 68/931 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • View unknown content-types in Firefox?

    - by dirtside
    A certain URL on my server returns Content-Type: application/json. The filename ends with .phtml, so whenever I go to it, Firefox 3.5 asks me if I want to save it or open it with another program. But the answer is neither; I want to view it in Firefox as if it were text/plain. Suggestions? The Applications tab of the Preferences window doesn't offer this as an option for the "PHTML" type, which is now listed there (ever since I tried to open it with no external program).
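    One workaround sketch, assuming the .phtml script is yours to edit (the $payload variable below is a hypothetical stand-in for whatever the script builds): send the body as text/plain so Firefox renders it inline instead of prompting for a download.

        <?php
        // Hypothetical wrapper: serve the JSON payload as plain text so the
        // browser displays it instead of offering to save or open it.
        header('Content-Type: text/plain; charset=utf-8');
        echo json_encode($payload);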

    Read the article

  • Windows 8, Truecrypt drive mounts but Windows cannot read its content

    - by phil
    I installed Windows 8 and when I started it, it said that it was repairing a disk. The disk was an encrypted Truecrypt disk. I couldn't mount the disk in Truecrypt after that. I tried to repair the header and that worked; now I can mount the disk, but neither Windows 7 nor Windows 8 can read its content. Windows asks if it should format the disk. I have all the important files on backup, but there are some media files that I would like to get back. Any ideas?

    Read the article

  • Nginx Cache-Control

    - by optixx
    I am serving my static content with nginx:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            etags on;
            etag_hash on;
            etag_hash_method md5;
            expires 1d;
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    The resulting response headers look like this:

        Cache-Control: public, must-revalidate, proxy-revalidate
        Cache-Control: max-age=86400
        Connection: close
        Content-Encoding: gzip
        Content-Type: application/x-javascript; charset=utf-8
        Date: Tue, 11 Sep 2012 08:39:05 GMT
        Etag: e2266fb151337fc1996218fafcf3bcee
        Expires: Wed, 12 Sep 2012 08:39:05 GMT
        Last-Modified: Tue, 11 Sep 2012 06:22:41 GMT
        Pragma: public
        Server: nginx/1.2.2
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Why is nginx sending two Cache-Control entries, and could this be a problem for clients?
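    A likely explanation, based on how stock nginx behaves: expires 1d; makes nginx emit its own Cache-Control: max-age=86400 (plus the Expires header), while the add_header line appends a second Cache-Control, so both end up in the response. A minimal sketch of one way to collapse them into a single header, assuming the custom etag_* directives come from a third-party module and can stay as they are:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            # Drop "expires 1d;" and state the max-age explicitly in one header instead.
            add_header Pragma "public";
            add_header Cache-Control "public, max-age=86400, must-revalidate, proxy-revalidate";
        }

    Most clients merge duplicate Cache-Control headers as the spec requires, but a single combined value avoids any ambiguity with stricter intermediaries.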

    Read the article

  • How do I mitigate DDoS attacks on static servers?

    - by acidzombie24
    Here is a slightly different take on DDoS attacks. Rather than a server with dynamic content being attacked, I was curious how to deal with attacks on servers with static content. That means CPU tends not to be the issue; it's either bandwidth or connection limits. How would I mitigate a DDoS attack knowing nothing about the attacker (for example country, IP address or anything else)? I was wondering whether shortening the timeout and increasing the number of allowed connections is an acceptable solution, or whether that is completely useless. I would also limit the number of connections per IP address. Would the above help, or is it pointless? Keeping in mind that everything is static, repeated requests for the same page (HTML, CSS, JS, etc.) could be a sign of an attack. What measures can I take on a static content server?
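    One concrete measure along the "limit connections per IP" line, sketched here for nginx since it appears elsewhere on this page (zone names and numbers are made-up placeholders, not recommendations):

        # In the http {} block: track clients by address.
        limit_conn_zone $binary_remote_addr zone=peraddr:10m;
        limit_req_zone  $binary_remote_addr zone=perip:10m rate=10r/s;

        server {
            location /static/ {
                limit_conn peraddr 20;             # at most 20 open connections per client IP
                limit_req  zone=perip burst=40;    # throttle request rate per client IP
            }
        }

    This caps abusive clients, but it does nothing against a truly volumetric attack that saturates the uplink; that part usually has to be absorbed upstream (provider filtering or a CDN in front of the static files).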

    Read the article

  • Adobe Reader - Content Preparation progress

    - by kubal5003
    Hello, I was wondering recently whether it is just me or if anyone else has noticed it: why does Adobe Reader, after opening every single document, display a "Content preparation" window with a progress bar that lasts for ages? On Linux, PDF readers work a whole lot better (faster), and on Windows other readers are also faster. Some years back Adobe Reader used to be quick too. What has happened? PDF files aren't bigger or more complex than they were 3-4 years ago, computers are at least dual core with much more RAM, and yet displaying PDF files keeps getting slower and slower.

    Read the article

  • Fastest web server for serving static content

    - by Swader
    Hello, I'm optimizing our system for faster static content delivery, and was wondering if anyone has proper experience with the fastest web servers out there for this purpose. Of the three main candidates I've considered, Nginx, Cherokee and Lighttpd, each seems to have its own problems, but the reports I've read online are somewhat biased and lean towards whichever server the author is currently using. Any ideas on where to find a proper benchmark for this specific purpose, or at least a non-biased list of pros and cons? Any personal experiences and pitfalls I should be wary of? Thanks. Edit: Serverfault.com gave the answer as nginx. I'd still like to hear some developer thoughts from this end of the universe.

    Read the article

  • How to disable Spotlight content indexing in Mac OS

    - by o.v.
    From my Windows experience, I could always set Live Search to index only file names, not their content. Is this something that can be done with Spotlight on a Mac? It used to index absolutely everything; for instance, it would return a bunch of video files for any obscure character combination typed into the search field. Right now I've disabled Spotlight entirely as per this answer, but that seems to have disabled searching altogether. For instance, Finder has yet to locate any .pdf files in a small directory as I'm typing this question (unlike Windows search, which still works even with indexing disabled). Alternatively, a trusted third-party app that indexes file names and metadata (e.g. ID3 tags) would also be a welcome option.
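    A possible middle ground to experiment with, assuming indexing gets re-enabled first (the paths and search terms below are examples only): keep Spotlight's index, but run name-only searches from the command line with mdfind -name, which matches on the display name rather than file content.

        # Re-enable indexing on the startup volume (it was turned off entirely).
        sudo mdutil -i on /

        # Search by file name only, ignoring content matches.
        mdfind -name "holiday-photos"

        # Optionally scope the search to one folder.
        mdfind -onlyin ~/Documents -name "report.pdf"

    This does not change what the Spotlight menu itself shows; folders whose content should never surface there can still be excluded under System Preferences > Spotlight > Privacy.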

    Read the article

  • Can Squid 2.7 proxy gzipped content

    - by Tom Styles
    We have a forward proxy for our network, which is Squid 2.7, managed for us by a third party. We noticed recently that HTTP requests going from our network to the web were having the Accept-Encoding header removed. This was resulting in all web traffic across our network (approx. 8000+ PCs) being uncompressed, even though the browsers and servers on each end were capable. We have asked the third party to look into this and they have said it is because Squid 2.7 does not support compression. I understand that to be true, but I was under the impression that the compression happens on the web server rather than the proxy. So: can Squid 2.7 proxy and/or cache content that is gzipped? If it can, how/why might it be configured such that the Accept-Encoding header is being removed?
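    For the second half of the question, one place to look in squid.conf, offered as a guess rather than a confirmed diagnosis: Squid 2.x can strip individual request headers via its header_access ACLs, so a line like the first one below would explain the missing header, and the second would restore pass-through. Squid itself neither compresses nor decompresses; it just relays and caches whatever the origin sends.

        # If the third party's squid.conf contains something like this,
        # it is why Accept-Encoding never reaches the origin servers:
        header_access Accept-Encoding deny all

        # Letting the header through again:
        header_access Accept-Encoding allow all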

    Read the article

  • nginx not returning 304 on cached content

    - by Don H
    I'm using nginx as a reverse proxy with an Apache back-end handling some PHP files. The files return the right expiry headers and proxy_cache does a good job of caching them, but I've noticed that the cached content returns a 200 on every refresh, when it would be more efficient to return a 304 for cached files. The files in question are generated by PHP, and the URLs do not have .php in them as they've been prettified. Any idea why nginx might not be returning 304 on repeated visits to a cached PHP output? To clarify: it's using proxy_cache for caching dynamic PHP pages (not static HTML pages generated by PHP). I'm setting Expires headers in the PHP file of time + 24 hours. With that in mind, I was hoping nginx would be able to return 304s from its cached versions during that 24-hour window.
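    A 304 can only happen if the browser sends a conditional request, and it will only do that if the response carried a validator (Last-Modified or ETag); an Expires header alone never triggers one. A hedged PHP-side sketch, assuming the page's logical modification time is available ($lastChanged is a stand-in, not something from the original question):

        <?php
        // $lastChanged: Unix timestamp of when this page's content last changed (assumed known).
        header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastChanged) . ' GMT');
        header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 86400) . ' GMT');

        // Answer the browser's revalidation with 304 when nothing has changed.
        if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
            strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $lastChanged) {
            header('HTTP/1.1 304 Not Modified');
            exit;
        }
        // ...render the page as usual...

    Whether nginx then answers the conditional request from its cache or passes it through depends on the nginx version and proxy settings, but without a validator in the cached response there is nothing for either side to compare against.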

    Read the article

  • PDF - re/generate image using stream content

    - by tom_tap
    I have a PDF file with 8 content streams (bytes) which behave like image layers (but they are not layers that I can turn off/on in Adobe Reader). I would like to extract these images separately, because they overlap each other (so I am not able to "Take a Snapshot" or "Copy File to Clipboard"). The streams look like this:

        <Start Stream>
        q 599.7601 0 0 71.99921 5951.03423 4282.48177 cm /Im0 Do Q
        q 599.7601 0 0 71.99921 5951.03432 4210.48177 cm /Im1 Do Q
        q 599.7601 0 0 71.99921 5951.03441 4138.48177 cm /Im2 Do
        [...]

    My question is: how do I use these data to generate or regenerate the images so I can save them as raster or vector files? I have already tried pstoedit, but it doesn't work properly because of these multiple streams. Same with PDFedit.
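    Since each stream only places an image XObject (/Im0, /Im1, ...), a possibly simpler route than re-rendering the streams is to pull the XObjects straight out of the file; a sketch using the pdfimages tool from xpdf/poppler (file names are placeholders, and -list is only available in newer poppler builds):

        # Extract every embedded image (JPEGs are kept as .jpg, the rest as .ppm/.pbm).
        pdfimages -j input.pdf img

        # In newer poppler builds, list the images first to check sizes and encodings.
        pdfimages -list input.pdf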

    Read the article

  • Rearrange content of a file

    - by VikJES
    I'd like to rearrange the content of a file on a per-line basis (see below), ideally without using Perl or Python (I'm not allowed to... don't ask). The input file contains unordered header lines and lines with backup operation results. The output file should contain the lines ordered as shown below.

    Original file:

        Completed Backups
        Backups with Warnings
        Failed Backups
        Server A backup was completed with warnings
        Server B backup was successful
        Server C backup failed
        Server D backup was completed with warnings

    End result:

        Completed Backups
        Server B backup was successful
        Backups with Warnings
        Server A backup was completed with warnings
        Server D backup was completed with warnings
        Failed Backups
        Server C backup failed
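    A minimal sketch with standard shell tools, assuming the input lives in backups.txt and that the result phrases ("was successful", "completed with warnings", "backup failed") reliably identify each group; the header lines themselves don't contain those phrases, so grep only picks up the server lines:

        {
          echo "Completed Backups";     grep "was successful"          backups.txt
          echo "Backups with Warnings"; grep "completed with warnings" backups.txt
          echo "Failed Backups";        grep "backup failed"           backups.txt
        } > sorted.txt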

    Read the article

  • How to see the content-type of a response in Nginx log file

    - by MLister
    Is it possible to see the content-type of a response to a request in Nginx's log file? At the moment, this is what I see for the request in question: 127.0.0.1 - - [23/Nov/2012:10:17:19 -0500] "GET /fonts/cantarell-bold-webfont.eot? HTTP/1.1" 200 22679 "https://www.mysite.com/blah/doc" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Trident/4.0; .NET CLR 1.1.4322; .NET4.0C)" If this is not feasible using Nginx's log, what are the other options? Thanks.
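    The stock combined log format doesn't include it, but nginx exposes response headers to the logger as $sent_http_<name> variables, so a custom log_format can record the Content-Type; a sketch (the format name and log path are placeholders):

        # In the http {} block:
        log_format with_ctype '$remote_addr - $remote_user [$time_local] "$request" '
                              '$status $body_bytes_sent "$http_referer" '
                              '"$http_user_agent" ctype="$sent_http_content_type"';

        access_log /var/log/nginx/access.log with_ctype;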

    Read the article

  • Equivalent to a CDN, but for dynamic content?

    - by ajsie
    So I know that for serving static elements (CSS, JS, images, videos etc.) you should use a CDN, since its nodes are spread out throughout the world. But how could I spread out my Apache servers? Is there an equivalent to a CDN for dynamic pages, or is it the traditional LAMP way? If so, I guess my best option is to find an international hosting provider that hosts in different countries, so the content is served from the country nearest the client machine. Any suggestions for such hosting providers? Or is it best practice to contract separate, unrelated hosting providers in different countries? What is the right way to go?

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Abr/10)

    - by Claudia Costa
    We are pleased to announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. To help partners develop competencies, Oracle University and Oracle Alliances & Channels designed this boot camp with condensed content and, therefore, reduced costs. Price per participant (3 days): EUR 1,250 + VAT.

    Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users, wherever they work in the enterprise. The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content.

    Target audience: architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology.

    Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and provides implementation instruction on the ECM technology offered by Oracle. The boot camp will:
    • Provide hands-on experience in implementing Oracle's unified, open and standards-based ECM technology
    • Provide strategic direction on Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development
    • Expose a broad set of Oracle's ECM technologies

    Objectives: The boot camp focuses on Oracle's ECM offering for managing and maintaining unstructured content, and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM).

    Topics covered:
    • Introduction to Oracle UCM: UCM overview, UCM architecture overview
    • Content Server and document management basics: installation and administration skills (user and security admin; configuration - metadata, DCLs, profiles, rules, etc.; workflow admin; system properties and Component Manager; managing subscriptions), contributing content (browser form, WebDAV folder, Desktop Integration), searching
    • Web content management: Site Studio
    • Universal Records Management
    • Information Rights Management (IRM)
    • Image and Process Management (IPM)
    • Oracle Document Capture
    • Oracle eMail Archive Service

    Labs:
    • Content Server installation
    • Use and administration of Content Server
    • Introduction to Site Studio
    • Use and administration of Records Manager

    Demo: The R&D Group and the New Patent. Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management.

    Case studies:
    • Use case 1: Enable the City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and, eventually, the public web site.
    • Use case 2: Help Acme & Co with archiving; its goal is to become "paperless" by managing all of the company's business content in a central, web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.

    Agenda:

    Day 1
    • ECM overview & Content Server
    • ECM architecture and installation
    • UCM and Digital Asset Management demo
    • Lab 1 - Content Server installation
    • Lab 2 - Use and administration of Content Server

    Day 2
    • Web content management
    • Lab 2 - Use and administration of Content Server (continued)
    • Introduction to web content management
    • Lab 3 - Site Studio

    Day 3
    • URM/IRM/IPM
    • Introduction to Universal Records Management
    • Lab 4 - URM
    • Introduction to Information Rights Management
    • Information Rights Management demo
    • Introduction to Image and Process Management
    • Image and Process Management demo
    • Oracle Document Capture
    • Oracle eMail Archive

    Material needed for the boot camp: attendees must provide their own laptops, meeting the following minimum requirements:
    • RAM: 2 GB minimum (1 GB is not enough)
    • HDD: 15 GB of free disk space

    Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp.

    For more information or to register, contact: Mónica Pires, 21 423 51 44. Schedule and venue: 9:30-12:30 and 14:00-17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • How do I go about linking web content in a database with a nested set model?

    - by wb
    My nested set table is as follows:

        create table depts (
            id int identity(0, 1) primary key
          , lft int
          , rgt int
          , name nvarchar(60)
          , abbrv nvarchar(20)
        );

    Test departments:

        insert into depts (lft, rgt, name, abbrv) values (1, 14, 'root', 'r');
        insert into depts (lft, rgt, name, abbrv) values (2, 3, 'department 1', 'd1');
        insert into depts (lft, rgt, name, abbrv) values (4, 5, 'department 2', 'd2');
        insert into depts (lft, rgt, name, abbrv) values (6, 13, 'department 3', 'd3');
        insert into depts (lft, rgt, name, abbrv) values (7, 8, 'sub department 3.1', 'd3.1');
        insert into depts (lft, rgt, name, abbrv) values (9, 12, 'sub department 3.2', 'd3.2');
        insert into depts (lft, rgt, name, abbrv) values (10, 11, 'sub sub department 3.2.1', 'd3.2.1');

    My web content table is as follows:

        create table content (
            id int identity(0, 1)
          , dept_id int
          , page_name nvarchar(60)
          , content ntext
        );

    Test content:

        insert into content (dept_id, page_name, content) values (3, 'index', '<h2>welcome to department 3!</h2>');
        insert into content (dept_id, page_name, content) values (4, 'index', '<h2>welcome to department 3.1!</h2>');
        insert into content (dept_id, page_name, content) values (6, 'index', '<h2>welcome to department 3.2.1!</h2>');
        insert into content (dept_id, page_name, content) values (2, 'what-doing', '<h2>what is department 2 doing?</h2>');

    I'm trying to query the correct page content (from the content table) based on the URL given. I can easily accomplish this with a root department; however, querying a department at deeper levels is proving to be a little harder. For example:

        http://localhost/departments.asp?d3/           (should return <h2>welcome to department 3!</h2>)
        http://localhost/departments.asp?d2/what-doing (should return <h2>what is department 2 doing?</h2>)

    I'm not sure if this can be done in one query or if it will need a recursive function of some sort. Also, if there is nothing after the last / then assume we want the index page. How can this be accomplished? Thank you.
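    A sketch of one possible lookup, assuming the URL has already been split into an abbreviation part and an optional page part (hard-coded below as examples); the nested set columns aren't needed for this particular lookup because the abbreviation already pins down the department:

        declare @abbrv nvarchar(20);
        declare @page  nvarchar(60);
        set @abbrv = 'd3';   -- URL segment before the slash
        set @page  = '';     -- empty when nothing follows the slash

        select c.content
        from depts d
        join content c on c.dept_id = d.id
        where d.abbrv = @abbrv
          and c.page_name = case when @page = '' then 'index' else @page end;

    The lft/rgt values would only come back into play if a page should fall back to an ancestor department's content when the department itself has none; that would mean joining on the lft/rgt range and picking the deepest match.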

    Read the article

  • Is there a good [and modern] reason not to have static HTML pages with AJAX content, rather than generated pages?

    - by user1725
    Assumptions: we don't care about IE6 or NoScript users. Let's pretend we have the following design concept: all your pages are HTML/CSS that create the aesthetics, layout, colours, and general design-related things. Let's pretend this basic code below is that:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
        <head>
            <link href="/example.css" rel="stylesheet" type="text/css"/>
            <script src="example.js" type="text/javascript"></script>
        </head>
        <body>
            <div class="left"></div>
            <div class="mid"></div>
            <div class="right"></div>
        </body>
        </html>

    Which, with the right CSS, should produce three vertical columns on the web page. Now, here's the root of the question: what are the serious advantages and/or disadvantages of loading the content of these columns (let's assume they are all indeed dynamic content, not static) via AJAX requests, versus having the content rendered by a scripting language? So in the AJAX example we would have, on load (jQuery assumed):

        // Multiple HTTP requests
        $("body > div.left").load("./script.php?content=news");
        $("body > div.right").load("./script.php?content=blogs");
        $("body > div.mid").load("./script.php?content=links");

    Or:

        // Single HTTP request
        $.ajax({
            url: './script.php?content=news|blogs|links',
            type: 'GET',
            dataType: 'json',
            success: function (data) {
                $("body > div.left").html(data.news);
                $("body > div.right").html(data.blogs);
                $("body > div.mid").html(data.links);
            }
        });

    Versus doing this:

        <body>
            <div class="left"><?php echo function_returning_news(); ?></div>
            <div class="mid"><?php echo function_returning_blogs(); ?></div>
            <div class="right"><?php echo function_returning_links(); ?></div>
        </body>

    I'm personally thinking right now that static HTML pages are the better method. My reasoning:

    • I've separated my data, logic, and presentation (i.e. "MVC") code; I can make changes to one without touching the others.
    • Browser caches mean the server load is mostly for the content, not the presentation wrapped around it.
    • I could turn my "script.php" into a more robust API for the website.

    But I'm not certain these are legitimately good reasons, and I'm not confident I'm aware of other issues that could come up, so I would like to know the pros and cons, so to speak.

    Read the article

  • Alert and 'if' statement change the behaviour of DOM copying

    - by lowkey
    Hi guys! I tried to make a little old-school Ajax (iframe + JavaScript) script. A bit of MooTools is used for DOM navigation.

    Description:

    HTML: one iframe called "buffer", one div called "display".

    JavaScript (in short: copy the iframe content into a div on the page):

    1) onLoad on the iframe triggers handler(); a counter makes sure it only runs once (otherwise Firefox goes into a feedback loop. What I think happens: the iframe's onload calls handler(), which calls copyBuffer(), which changes the src of the iframe, and Firefox takes that as an onload again).

    2) copyBuffer() sets the src of the iframe, then copies the iframe content into the div, then erases the iframe content.

    The code:

        count = 0;

        function handler() {
            if (count == 0) {
                copyBuffer();
                count++;
            }
        }

        function copyBuffer() {
            $('buffer').set('src', 'http://www.somelink.com/');
            if (window.frames['buffer'] && $('display')) {
                $('display').innerHTML = window.frames['buffer'].document.body.innerHTML;
                window.frames['buffer'].document.body.innerHTML = "";
            }
        }

    Problems:

    QUESTION 1) Nothing happens; the content is not loaded into the div. But it works if I either:

    A) remove the counter and let the script run in a feedback loop: the content is suddenly copied into the div, but of course the feedback loop keeps going - you can see it loading over and over in the status bar; or

    B) insert an alert in the copyBuffer function: the content is copied, without the feedback loop.

    Why does this happen?

    QUESTION 2) The if wrapped around the copying code I got from a source on the internet. I am not sure what it does, but if I remove it the code does not work in Safari and Chrome.

    Many thanks in advance!

    Read the article

  • How do I show a log analysis in Splunk?

    - by Vinod K
    I have made my Ubuntu server a centralized log server. Splunk is installed in the /opt directory of the server, and one of the other machines is sending its logs to it. In the Splunk interface I have added a network input on UDP port 514, and under "Files & directories" I have added /var/log. The client has also been configured properly. How do I show an analysis of the logs?
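    Once the inputs are indexing, the analysis happens in the search app; a couple of starter searches, hedged as examples only (the source name depends on how the UDP input was defined, and field names depend on the sourcetype):

        source="udp:514"                               -- raw events from the syslog input
        source="udp:514" | timechart count by host     -- event volume per sending host over time
        source="udp:514" | stats count by host, sourcetype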

    Read the article

  • How can I perform sentiment analysis on extracted text from online sources?

    - by aniket69
    I'm working on extracting the sentiment from YouTube comments, blogs, news content, Facebook wall posts, and Twitter feeds. I'm looking for an automated way to do this: the two third-party solutions I've found have been AlchemyAPI and RapidMiner. Are these the best way to approach this project, or should I be using something else? Is there a more efficient way to approach sentiment analysis? What techniques have worked for you in a project like this?

    Read the article

  • Are there software options (preferably .NET) for doing distance and speed analysis of footballers moving on video?

    - by Anonymous Type
    (Question edited for clarity.) Thanks for the feedback so far, very insightful. I'm not sure how far along this part of the software community is, or what libraries exist for me to leverage. Here's what I'm trying to do.

    Problem: Take an existing video of a game of rugby league. The rugby league field is 100 metres long and 70 metres wide, with white line markings every 10 metres running across the width of the field, as well as along the sidelines. Each side has 13 players on the field. Players on each team wear identical jerseys that normally contrast strongly against the background colours (green/brown field), the referee's colour (usually yellow) and the designated water runner (orange). All players have a unique number in thick white lettering on their backs for identification. Video is taken with a high-definition camera. Currently only one camera is used (2D), and the existing video does not contain a foreground object of fixed spatial dimensions (as suggested in one answer for comparison measurements; however, I could add this to future filming sessions if it is worthwhile). The players do not run in a straight line 50% of the time, but will go sideways or on a diagonal to play the ball. The distance measured always starts from the spot of the previous "tackle" and ends where the player stops forward movement. It is not always possible to determine a player's number from the video (facing the other direction, sunlight, others standing in the way of the camera), but this isn't important, as the software could allow manual input of unknown "runs" later, after analysis. I want to determine the distance between two points (i.e. where the player started his "run" and where he finished it). I'm guessing this would be quite doable if I manually marked the start and end points in the video, but how would I use landmarks in the background to determine the distance (assuming the person taking the video has kept it from jerking around)?

    Question: Do software packages or libraries exist that are specialised enough to assist with writing analysis software to determine a sportsperson's distance travelled, based on video taken of the performance?

    Read the article

  • The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000.

    - by lankylad
    I have a SQL Server 2008 Analysis Services Project. In the Data Source View I have a Named Query which references a single Data Source containing three tables. The Project processes successfully and the cube can be browsed. I recently added a second Data Source to the Data Source View and linked a table to the original Named Query. When I try to process the project, I get the message: OLE DB error: OLE DB or ODBC error: The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000. The Connection String for both Data Sources uses SQLNCLI10.1

    Read the article

  • IIS serving static content gives 503 at random

    - by Steffen
    We're having a few issues with our image server. It's a Windows 2008 box running IIS 7.5 and it only serves static content: images. It ran without issues for quite a while, until recently, when we disabled Output Caching, as we noticed that having it enabled meant no-cache headers were sent to the clients (forcing them to fetch the images from the server every time). We've read quite a bit about it, and it seems IIS just works that way: either you use Output Caching or you get to control the cache headers yourself. Anyway, having disabled the Output Cache, we now experience random 5-minute intervals where all requests just get a 503 Service Unavailable. During these periods the "Files cached" performance counter stagnates (neither increases nor decreases), and afterwards all caches are flushed. You might find it weird that I talk about caching, since we disabled Output Caching. The thing is, we changed the ObjectTTL parameter in the registry so we cache files for 3 minutes (which has worked very well; our disk I/O dropped significantly). So even with Output Caching disabled, we're still caching plenty of files - if we could just get rid of the random 503s it'd be perfect :-D We don't get any messages in the Windows event log during these 503 intervals, so we're pretty stumped as to what to do. Any ideas are very welcome :-)

    Read the article

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows 2008 R2 with the Web and FTP Server roles set up, and it won't serve any content at all. Trying to connect from another host via telnet yields 'connection failed':

        c:\>telnet devserver 80
        Connecting To devserver...Could not open connection to the host, on port 80: Connect failed

    Using netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.); TcpView confirms this, listing no open connections on port 80. The following services related to the Web role are running:

        World Wide Web Publishing Service
        Application Host Helper Service
        Microsoft FTP Service (FTP connections to port 21 are granted)
        Windows Process Activation Service

    The default website bindings are:

        Type              Host Name   Port   IP Address   Binding Information
        http                          80     *
        net.tcp                                            808:*
        net.pipe                                           *
        net.msmq                                           localhost
        msmq.formatname                                    localhost

    When setting up a new application under the default site, the test function passes both connection and authorisation only if the 'connect as' user is a local admin; otherwise the test errors with 'invalid application path'. At no point is the W3SVC service PID bound to port 80 (it is running, and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
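    A couple of checks on the http.sys side that are sometimes useful here, offered as a sketch of where to look rather than a diagnosis (run in an elevated command prompt):

        REM Confirm whether anything is listening on port 80 and which PID owns it
        netstat -ano | findstr :80

        REM Show http.sys state: registered URL prefixes, bound sites and their request queues
        netsh http show servicestate

        REM List URL reservations, in case port 80 is reserved for something else
        netsh http show urlacl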

    Read the article

  • Truncated content with Apache on Vagrant VM

    - by Nev Stokes
    I'm using Vagrant to run a CentOS VM in order to try and achieve local development parity with our live servers. I've symlinked /var/www/html with the /vagrant shared directory and am forwarding port 80 for viewing at http://localhost:4567. I'm developing using Sublime Text 2 on OS X Mountain Lion. Once I figured out that iptables was tripping me up, all was well and good. Until I noticed something strange. I have a sample HTML page consisting of several paragraphs of lorem copy. I can view this fine in a browser on OS X. But when I make an edit, for example removing a paragraph, and refresh, the content is truncated, with the paragraph I deleted still visible. When I cat the files on the server I can see the changes I made, but these aren't reflected even when I curl localhost. I strongly suspect it's a problem with my Apache settings (with which I didn't really tinker), as the issue doesn't arise when I stop Apache and run sudo python -m SimpleHTTPServer 80 in the directory to view pages instead. What gives?
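    Anecdotally this matches a known interaction between Apache's sendfile/mmap support and VirtualBox shared folders, where edited files are served stale or truncated; a hedged httpd.conf sketch to try (the directives are standard Apache, the diagnosis is the assumption):

        # In the main config or the vhost serving /var/www/html:
        EnableSendfile Off
        EnableMMAP Off

        # then restart Apache on the CentOS guest:
        sudo service httpd restart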

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >