Search Results

Search found 18961 results on 759 pages for 'far se'.

  • WSUS setup on ADS Domain Controller VM

    - by NickC
    How should I set up sharing/directory permissions to allow the following to work?

    - VMHost - Hyper-V host (not part of the domain)
    - ADServer - Active Directory Domain Controller running as a guest VM on VMHost
    - \\VMHost\Updates - disk share where WSUS puts the updates and other local files

    The problem is what permissions I need to give \\VMHost\Updates so that WSUS can work with this directory, to avoid the "The WSUS content directory is not accessible" error I am currently seeing. As far as I can tell, WSUS runs as the "domain\Network Service" account. The question is: without adding VMHost to the domain, how do I give that user appropriate permissions on this directory? Is there a way that VMHost can be told to trust ADServer and then use user accounts from there? This sort of relates to my other post here: Win 2012 Domain controller VM, should the Hyper-V host be part of the domain? Thanks, Nick
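
    Since VMHost is not domain-joined, it cannot grant rights to the domain's Network Service identity directly. One commonly suggested workaround is a mirrored local account (same name and password on both machines), granted share and NTFS rights on the host; a minimal sketch, assuming a hypothetical account wsusshare and path D:\Updates:

        :: on VMHost: create the local account (mirror it in the domain with the same password)
        net user wsusshare S0mePassw0rd! /add
        :: share the folder and grant the account change rights
        net share Updates=D:\Updates /grant:wsusshare,CHANGE
        :: grant matching NTFS permissions (modify, inherited by subfolders and files)
        icacls D:\Updates /grant wsusshare:(OI)(CI)M

    Whether WSUS will actually authenticate as that account depends on how the content path is accessed, so treat this as a starting point rather than a confirmed fix.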

  • MacBook Pro screen goes dark

    - by Mike M
    I've had my MacBook Pro for two years now with no problems so far (it has had 3rd-party RAM from the get-go). Today I was copying a particularly large VM from an external disk drive to the local MacBook disk. It had about 3 GB to go when I left to do some other things, and when I came back my screen was "dark". The computer is still on, but I can't see anything. I forced a reboot by holding down the power button; it starts up with the chime, but still no screen. I've done this several times. Any ideas? Do you think the hard disk activity caused it to get too hot?

  • How does Google Web Starter Kit serve adaptive images for mobile?

    - by 5argon
    My website weirdly (in a good way) serves smaller images when viewed on mobile. I wanted to know what causes this. As far as I know this is not the default behaviour, so I think it must be Google Web Starter Kit's doing. Here is the debug information when debugging on the device: all images are 231 B, no matter how large they actually are. (When debugging on desktop, the size varies.) I recently started using Google Web Starter Kit (https://github.com/google/web-starter-kit). The tools in it are built on Ruby, Node.js, SASS and Gulp to help you 'build' the website. Before building, you get automatic reload because the Gulp script watches all files for you. At build time it runs various tools to minify HTML and CSS and compress images. According to https://developers.google.com/web/fundamentals/tools/build/build_site, gulp-imagemin is used. So I guess imagemin is doing the mobile optimization for me? What kind of compression can serve automatically resized images on mobile? And why is the size 231 B? Is this related to my screen size?

  • Command line audio library manager for Linux

    - by Ketil
    Hi all. Here is my setup: I have a Linux server running Music Player Daemon (MPD), and all the audio files are under a directory (/muzik) which is exported over NFS. To add files to the MPD database, I just drop them into the /muzik NFS share and update the MPD db. So far so good, but I would like to keep the directory structure below /muzik in some sort of order. To achieve this I am using Amarok, which I start on my laptop, and then use its "organize files" command to sort the files into a sensible directory structure based on the tags in the files. Do you know of any command-line utility that can do the same thing I am using Amarok for, so I can run it from cron on the server and automate the process? I hope this makes sense.
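
    One candidate worth evaluating (a suggestion, not something from the original post) is beets, a command-line music library manager that moves and renames files according to their tags; it also ships an mpdupdate plugin that tells MPD to rescan after each import. A minimal sketch, where the /muzik/incoming staging directory is hypothetical:

        # ~/.config/beets/config.yaml
        directory: /muzik
        import:
            move: yes
            quiet: yes
        paths:
            default: $albumartist/$album/$track $title

        # cron job on the server: sort whatever lands in the staging folder
        beet import -q /muzik/incoming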

  • New Dell Vostro 3550 will not boot into Live CD

    - by rich97
    I've been trying to install Ubuntu and/or its derivatives on my new Dell Vostro 3550, but I'm finding it impossible to boot into the live CD environment. Here are the things I've already tried:

    - Booting from multiple versions of multiple distributions (Ubuntu 10.10, Ubuntu 11.04, Mint 11, Mint 10)
    - Booting from a live USB created with unetbootin, the Linux Mint startup disk creator, the universal USB creator, and dd if=linuxmint.***.iso of=/dev/sdx
    - Burning a CD and booting from the internal CD drive

    In the case of the USB keys with the latest versions of Ubuntu, it gets to the page where I can select a boot option, but after selecting one the screen goes black; even the backlight turns off. With the older distributions the screen stays on, but it starts loading and then just hangs. The CDs don't even try to boot: the drive starts spinning, but then it falls back to the default Windows install. The only way I've got it to work so far is with Wubi, but that's hardly ideal. I'd like to have two separate physical partitions with a /home and /. Any help is appreciated. Thank you.
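
    A black screen with the backlight off right after choosing a boot option is a classic kernel-modesetting symptom, and booting the live medium with the nomodeset parameter is worth a try (a general workaround, not specific to the Vostro 3550). At the live menu, press F6 (or e, depending on the bootloader) and append it to the kernel line:

        quiet splash nomodeset --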

  • How to scroll up and look at data in GNU screen

    - by dorelal
    I am using a Mac (Snow Leopard). I am a Ruby on Rails developer, and after watching a screencast on GNU screen I am trying it out. So far I like it. In a window where I start the server, I get to see the log messages, but I can't seem to scroll up. I do get a scroll bar; however, when I use it to scroll up, I don't see anything. How do people use GNU screen and scroll up?
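
    For reference, the terminal's own scroll bar only shows output drawn outside screen's control; scrollback inside GNU screen lives in copy mode, and the buffer size is set in ~/.screenrc:

        # enter copy/scrollback mode: Ctrl-a [   (or Ctrl-a Esc)
        # move with the arrow keys or PgUp/PgDn; press q or Esc to return to the live view

        # ~/.screenrc: raise the scrollback buffer from its small default
        defscrollback 5000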

  • Doing a passable 4X game AI

    - by Extrakun
    I am coding a rather "simple" 4X game (if a 4X game can be simple). It's indie in scope, and I am wondering if there's any way to come up with a passable AI without spending months coding it. The game has three major decision-making portions: spending production points, spending movement points and spending tech points (basically three different 'currencies'; currency unspent at the end of a turn is not saved).

    Spend production points:
    - Upgrade a planet (increase its tech and production)
    - Build ships (3 types)

    Move ships from planet to planet (costing movement points):
    - Move to attack
    - Move to fortify

    Research tech (a tech can be partially researched, as in Master of Orion).

    My plan right now is a brute-force approach. There are basically 4 broad options for the player:
    - Upgrade planets to increase their production and tech output
    - Conquer as many planets as possible
    - Secure as many planets as possible
    - Get to a certain tech as soon as possible

    For each decision, I will iterate through the possible options and come up with a score, and the AI will choose the decision with the highest score. Right now I have no idea how to 'mix decisions', that is, when the AI wishes to upgrade and conquer planets at the same time. I suppose I could add another layer of logic that does a brute-force optimization over combinations of those 4 decisions... at least, that's my plan if I can't think of anything better. Is there any faster way to make a passable AI? I don't need a very good one to rival Deep Blue or such, just something that has the illusion of intelligence. This is my first time doing an AI on this scale, so I dare not try anything too grand. So far I have experience with FSM, DFS, BFS and A*.
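
    A minimal utility-AI sketch of that scoring idea, in C# (all names and weights are hypothetical; GameState stands in for whatever snapshot type the game already has): score every candidate action against each of the four broad goals and blend with per-goal weights, so "mixing decisions" falls out of the weighting instead of needing a separate combinatorial optimizer.

        // requires: using System.Collections.Generic; using System.Linq; (MaxBy needs .NET 6+)
        enum Goal { Develop, Conquer, Secure, Research }

        interface IAction
        {
            // How well this action serves a goal in the current state, 0..1.
            double Score(Goal goal, GameState state);
        }

        class AiPlayer
        {
            // The AI's "personality": how much it cares about each goal.
            readonly Dictionary<Goal, double> weights = new()
            {
                [Goal.Develop]  = 0.3,
                [Goal.Conquer]  = 0.4,
                [Goal.Secure]   = 0.2,
                [Goal.Research] = 0.1,
            };

            // Pick the candidate with the best blended score.
            public IAction ChooseBest(IEnumerable<IAction> candidates, GameState state) =>
                candidates.MaxBy(a => weights.Sum(w => w.Value * a.Score(w.Key, state)));
        }

    Varying the weights from turn to turn (say, raising Secure when a border planet is threatened) gives the appearance of shifting strategy at almost no extra cost.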

  • Search ranking for important keywords has gone down drastically [duplicate]

    - by Vaivhav
    This question already has an answer here: How to diagnose a search engine ranking drop? (5 answers)

    Firstly, we are a small entrepreneurial team of 3 persons, and I am more of an amateur webmaster of the company's website, as we cannot really afford a technical guy/department right now. A few weeks ago, our website traffic and rankings for most keywords dropped overnight. I have done a lot of reading since then and learned about Penguin 2.1, which people say is the reason for the drop. Something like this had never happened before. Now, I have gone through the entire Google Webmaster help section. It says there that if a manual penalty is taken against us, we would see a message on the Manual Actions page. So far, we haven't received any notice from Google for web spam. Some SEO guys I contacted said they found spam links in our backlink profile. I do believe I mistakenly purchased a cheap link/SEO scheme when I was still very new to SEO. This was more than a year back, but since then we have been legitimate. Moreover, how do I find out which links are spam and which are not? Our content is all original, refreshing and the best you will find in our niche. We also have a blog, but on a different domain (wordpress.com), from which we send anchored links to our business website. Is this a good thing to do? Now, how should we proceed to recover our traffic/rankings? I tried searching in Webmaster Tools for a way to reach Google and ask why the traffic has decreased suddenly, but I couldn't find a contact form or anything like it. Can someone please go through our website and help clarify the reason for the drop, along with a solution? I would really appreciate it, as I can't figure this out and it's taking a lot of time. Vaivhav

  • How would you TDD the functionality of getting the corresponding process of a running Windows service?

    - by Matt Spinelli
    Purpose: Over the last year or more I've been learning unit testing via books I've read recently, like The Art of Unit Testing, Working Effectively with Legacy Code, and others. I've also been using unit tests, mocking frameworks, and the like periodically at work, and I definitely see the value. However, I'm still having a hard time wrapping my mind around TDD (as opposed to TAD) when the situation calls for code that is mostly going to use external API calls.

    Problem to solve: get the process associated with a Windows service, using the service name. Example:

        Function GetProcess(ByVal serviceName As String) As Process

    Rules:
    - Show each major iteration in production & test code using TDD
    - No need to show any other code or configuration required to get things running; I'm just curious about the interfaces, concrete classes, and test methods
    - C# or VB.NET
    - Must use the .NET Framework types for services/processes (i.e. System.Diagnostics.Process)
    - Test frameworks: NUnit or MSTest
    - Isolation frameworks: Moq, Rhino Mocks, or Microsoft Moles
    - Must write true unit tests (no integration tests)

    Additional notes: as far as I can tell there are two approaches design-wise. One is an Inversion of Control approach, using the Adapter and/or Facade patterns to wrap the underlying .NET framework objects dealing with processes and services. The other is to keep the .NET framework code in the class containing the GetProcess method and use code detouring (interception) via Microsoft Moles to isolate the hard dependencies from the method under test.
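
    A minimal sketch of the first (IoC + adapter) approach, in C# with NUnit and Moq; the interface and class names are hypothetical. The adapter hides the WMI lookup (Win32_Service carries a ProcessId property), so the unit test never touches a real service:

        using System.Diagnostics;
        using Moq;
        using NUnit.Framework;

        // --- production code ---
        public interface IServicePidProvider
        {
            // A concrete implementation would query WMI:
            //   SELECT ProcessId FROM Win32_Service WHERE Name = '<serviceName>'
            int GetProcessId(string serviceName);
        }

        public class ServiceProcessLocator
        {
            private readonly IServicePidProvider provider;
            public ServiceProcessLocator(IServicePidProvider provider) { this.provider = provider; }

            public Process GetProcess(string serviceName) =>
                Process.GetProcessById(provider.GetProcessId(serviceName));
        }

        // --- test code ---
        [TestFixture]
        public class ServiceProcessLocatorTests
        {
            [Test]
            public void GetProcess_ReturnsProcessWithPidReportedForService()
            {
                var me = Process.GetCurrentProcess();              // a PID guaranteed to exist
                var provider = new Mock<IServicePidProvider>();
                provider.Setup(p => p.GetProcessId("MyService")).Returns(me.Id);

                var locator = new ServiceProcessLocator(provider.Object);

                Assert.AreEqual(me.Id, locator.GetProcess("MyService").Id);
            }
        }

    The remaining Process.GetProcessById call is framework code thin enough that many people leave it untested; wrapping it behind an adapter too is the stricter alternative.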

  • Why do I always get this error when using 'apt-get' commands?

    - by Venki
    I am using Ubuntu 14.04 (with Unity). Today (as of the date of this post) I ran sudo apt-get update && sudo apt-get upgrade, and at the end of the upgrade process I got the following error:

        Setting up crossplatformui (1.0.38) ...
         * Stopping ACPI services...    [ OK ]
         * Starting ACPI services...    [ OK ]
        package libqtgui4 exist
        QT_VERSION = 4
        make -C /lib/modules/3.13.0-27-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
        make[1]: Entering directory `/usr/src/linux-headers-3.13.0-27-generic'
          CC [M]  /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
        /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
         #include <linux/smp_lock.h>
                                    ^
        compilation terminated.
        make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
        make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-27-generic'
        make: *** [modules] Error 2
        dpkg: error processing package crossplatformui (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    From then on, whatever apt-get command I use (everything except apt-get update, as far as I can tell), I keep getting the above error at the end of the process. Whichever apt-get command I use still does what it should without fail. (For example, I installed Blender with sudo apt-get install blender and it installed fine, though it showed the above error.) After this I even got a kernel update (from 3.13.0-27 to 3.13.0-29, via the Software Updater), but the issue still persists. How do I solve this?
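
    For what it's worth: crossplatformui is the ZTE USB-modem helper, and its post-install script tries to build a kernel module against linux/smp_lock.h, a header removed from kernels long before 3.13, so the build will keep failing on every dpkg run. If the modem software is not needed, removing the package is the usual way out; a sketch:

        # remove the failing package (assumes the ZTE modem tool is no longer needed)
        sudo apt-get remove crossplatformui
        # if its maintainer scripts keep the removal from completing, force dpkg to drop it
        sudo dpkg --remove --force-remove-reinstreq crossplatformui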

  • MVC + 3-tier: where do ViewModels come into play?

    - by mikhairu
    I'm designing a 3-tiered application using ASP.NET MVC 4. I used the following resources as references: CodeProject's "MVC + N-tier + Entity Framework" and "Separating data access in ASP.NET MVC". I have the following design so far.

    Presentation Layer (PL) (the main MVC project, where the M of MVC was moved to the Data Access Layer):
        MyProjectName.Main
            Views/
            Controllers/
            ...
    Business Logic Layer (BLL):
        MyProjectName.BLL
            ViewModels/
            ProjectServices/
            ...
    Data Access Layer (DAL):
        MyProjectName.DAL
            Models/
            Repositories.EF/
            Repositories.Dapper/
            ...

    PL references BLL, and BLL references DAL, so a lower layer does not depend on the one above it. In this design, PL invokes a service in the BLL and can pass it a ViewModel; the BLL can pass a ViewModel back to PL. The BLL in turn invokes the DAL, which returns Models to the BLL; the BLL can then build a ViewModel and return it to PL. Up to now this pattern has worked for me. However, I've run into a problem where some of my ViewModels require joins over several entities. In the plain MVC approach, I used a LINQ query in the controller to do the joins and then select new MyViewModel() { ... }. But now the DAL does not have access to where the ViewModels are defined (in the BLL). This means I cannot do the joins in the DAL and return the result to the BLL. It seems I have to run separate queries in the DAL (instead of one query with joins) and have the BLL build the ViewModel from their results. This is very inconvenient, but I don't think I should be exposing the DAL to ViewModels. Any ideas how I can solve this dilemma? Thanks.
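
    One way out of the dilemma, sketched here with hypothetical names: keep ViewModels out of the DAL, but let the DAL expose flat, query-specific DTOs (read models) that a join can project into, and have the BLL map DTO to ViewModel:

        // DAL: a flat read model that the join projects into (not a ViewModel)
        public class OrderSummaryDto
        {
            public int OrderId { get; set; }
            public string CustomerName { get; set; }
            public decimal Total { get; set; }
        }

        public interface IDashboardQueries
        {
            IList<OrderSummaryDto> GetOrderSummaries();   // implemented with a LINQ join in the DAL
        }

        // BLL: map the DTO onto the ViewModel the PL expects
        public class DashboardService
        {
            private readonly IDashboardQueries queries;
            public DashboardService(IDashboardQueries queries) { this.queries = queries; }

            public IList<OrderSummaryViewModel> GetDashboard() =>
                queries.GetOrderSummaries()
                       .Select(d => new OrderSummaryViewModel { Customer = d.CustomerName, Total = d.Total })
                       .ToList();
        }

    The DTOs carry no behaviour and no UI concerns, so exposing them from the DAL does not couple it to the presentation layer the way ViewModels would.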

  • Can a truecrypt container be moved on Fedora Linux?

    - by Thayananthan Narayanan
    I have created a TrueCrypt container (100 GB) to store very important files with vital information, hidden on my hard drive running Fedora Linux. Now I want to move the container to a portable hard drive. From what I have read so far, you should be able to move it easily, like any other file. However, I am running into a problem: Fedora won't let me, and keeps giving me a "Permission denied" error. I think I need to log in as root or a superuser, but how do you do that on Fedora? What am I doing wrong?
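
    For a one-off move there is no need for a root login session: dismount the volume in TrueCrypt first, then either elevate the copy or make the destination writable. A sketch, with hypothetical paths:

        # elevate a single command with sudo (or use 'su -' for a root shell)
        sudo mv /home/user/container.tc /run/media/user/PORTABLE/
        # alternatively, take ownership of the portable drive's mount point
        sudo chown $USER: /run/media/user/PORTABLE

    If "Permission denied" persists even as root, check that the volume is really dismounted; the container file is still in use while the volume is mounted.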

  • Spawned Process Terminated in GP Startup Script

    - by Charles Gargent
    I have a Group Policy Startup Script which runs synchronously. I now need this script to run one process asynchronously. So far I have managed to get the spawned process running via the command below; however, once the rest of the script finishes, the GP Startup Script "phase" ends and the logon prompt is shown, my spawned process is terminated. Is there any way to have this process continue beyond the Startup Script phase?

        cmd /c start spawned.bat

    I guess the reason it terminates is that the process was launched by the Startup Script process, and when the parent process terminates, so do its children. PS: I need it to be launched via the existing script.
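
    One way to detach the child from the script engine is to have the Startup Script register and fire a scheduled task; the Task Scheduler service then owns the process, so it survives the end of the Startup Script phase. A sketch (task name and path are hypothetical):

        :: register a one-shot task running as SYSTEM, then fire it immediately
        schtasks /create /tn SpawnedJob /tr "C:\scripts\spawned.bat" /sc once /st 23:59 /ru SYSTEM /f
        schtasks /run /tn SpawnedJob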

  • Priority Manager – Part 1 – Laying out the plan

    - by Patrick Liekhus
    Now that we have shown the EDMX with XPO/XAF and how to use SpecFlow and BDD to run EasyTest scripts, let's put it all together and show the evolution of a project using all the tools combined. I have a simple project that I use to track my priorities throughout the day. It uses some of Stephen Covey's principles from The 7 Habits of Highly Effective People. The idea is to write down all your priorities the night before and rank them, so that when you get started tomorrow you will have your list of priorities. New things will still appear tomorrow and reprioritize your list, but at least now you can track them. My idea is to create a project that lets you manage your list from your desktop, a web browser or your mobile device, so your list is never too far away. I will lay out the data model and the additional concepts as time progresses. My goal is to show the power of all of these tools combined, and I thought the best way would be to build a project in sequence. I have had this idea for quite some time, so let's get it completed with the outline below. Here is the outline of the series of posts in the near future:

    Part 2 – Modeling the Business Objects
    Part 3 – Changing XAF Default Properties
    Part 4 – Advanced Settings within Liekhus EDMX/XAF Tool
    Part 5 – Custom Business Rules
    Part 6 – Unit Testing Our Implementation
    Part 7 – Behavior Driven Development (BDD) and SpecFlow Tests
    Part 8 – Using the Windows Application
    Part 9 – Using the Web Application
    Part 10 – Exposing OData from our Project
    Part 11 – Consuming OData with Excel PowerPivot
    Part 12 – Consuming OData with iOS
    Part 13 – Consuming OData with Android
    Part 14 – What's Next

    I hope this helps outline what to expect. I anticipate mixing additional topics in there, but I plan on getting this outline completed within the next several weeks. Thanks

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually I consider a large number of parameters an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // ...methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far this has met my needs, and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities, plus a form containing a couple of dropdown lists. Using the generic repository, this makes the parameter requirements grow quickly. The controller ends up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about 6 IRepositories, plus a few others, to pull in the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view there is only one DbContext per request, and the DI container will serve the same DbContext to all of the repositories. From a code standards/readability point of view, it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how others handle such situations.
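
    One common answer (a sketch with hypothetical names) is to group the dashboard's repositories behind a single facade, so the controller declares one dependency and the DI container still supplies one DbContext per request:

        // a dashboard-scoped facade over the generic repositories
        public interface IDashboardData
        {
            IRepository<EntityOne> Ones { get; }
            IRepository<EntityTwo> Twos { get; }
            IRepository<EntityThree> Threes { get; }
            IRepository<EntityFour> Fours { get; }
        }

        public class DashboardData : IDashboardData
        {
            public DashboardData(DbContext context)   // one context per request, as before
            {
                Ones   = new GenericRepository<EntityOne>(context);
                Twos   = new GenericRepository<EntityTwo>(context);
                Threes = new GenericRepository<EntityThree>(context);
                Fours  = new GenericRepository<EntityFour>(context);
            }

            public IRepository<EntityOne> Ones { get; }
            public IRepository<EntityTwo> Twos { get; }
            public IRepository<EntityThree> Threes { get; }
            public IRepository<EntityFour> Fours { get; }
        }

        // the controller then shrinks to:
        public HomeController(IDashboardData data, ILogError logError, ICurrentUser currentUser) { }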

  • QA - Developer communication

    - by exiter2000
    I am a developer and have worked at this company for 4-5 years now. We have been practicing Scrum for about 2 years, and I think I have worked well with the QAs. I believe QAs, developers and technical writers are all one team. We are also actively hiring new team members, and as a long-standing member of the team I am expected to assist new members (including developers and testers) with my business knowledge. We run two-week sprints. I usually deliver my user story completely by the first day of the second week, and I make a QA build with partial functionality of my user story earlier, so that QA gets a good idea of my implementation and flow. Recently, though, I have run into trouble with some QAs. In the first week, the QAs do not talk: in the stand-up meeting they say they are developing test cases, regardless of whether I have delivered the user story or not. In the second week, I do not have a single defect until Thursday afternoon, and then suddenly I get a major defect plus several minor UI defects against what I delivered one week ago. Or I get one or two minor defects early in the second week, and then major defects on Thursday afternoon or Friday morning. This eventually makes the story roll over to the next sprint. A major defect takes time to fix and, more importantly, it triggers a regression test of the story, so even if I work Thursday evening and fix it, the testing will not finish. And this happens multiple times with certain QAs. As a member of the same team, I talked to the QAs about testing for major defects with higher priority... Rejected, because I do not understand the QA process. So, in the stand-up meeting on the second Wednesday, I asked roughly how many major test cases had been covered so far... The response was that I should not ask this of QA in the stand-up meeting... What do I do?

  • How to troubleshoot a remote wmi query/access failure?

    - by Roman
    I'm using PowerShell to query a remote computer in a domain for a WMI object, e.g.:

        gwmi -computer test -class win32_bios

    I get this error message:

        Value does not fall within the expected range

    Executing the query locally under the same user works fine. It seems to happen on both Windows 2003 and 2008 systems. The user running the shell has admin rights on the local and remote server. I checked WMI and DCOM permissions as far as I know how to; they seem to be the same on a server where it works and on another where it does not. I think it is not a network issue: all the needed ports are open, and it also happens within the same subnet. When sniffing the traffic we see the following errors:

        RPC: c/o Alter Cont Resp: Call=0x2 Assoc Grp=0x4E4E Xmit=0x16D0 Recv=0x16D0
        Warning: GssAPIMechanism is not found, either caused by not reassembled, conversation off or filtering.

    and an error message from Kerberos:

        Kerberos: KRB_ERROR - KDC_ERR_BADOPTION (13)
        The option code in the packet is 0x40830000

    Any idea what I should look into?
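
    A few isolation steps that may help narrow it down, since KDC_ERR_BADOPTION points at Kerberos ticket options rather than WMI itself (the hostnames below are placeholders):

        # try the FQDN instead of the short name
        gwmi -computer test.corp.example.com -class win32_bios
        # try explicit credentials, bypassing the current Kerberos context
        gwmi -computer test -class win32_bios -Credential (Get-Credential)
        # flush potentially stale Kerberos tickets, then retry
        klist purge

    If the FQDN works while the short name fails, look at stale DNS records or duplicate SPNs registered for the short name.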

  • NFS mount mounted inside another NFS mount disappears randomly

    - by espenfjo
    I have quite an odd issue where my nested NFS mounts just disappear randomly from time to time. The fstab entries look somewhat like this:

        nfs:/home    /home/nfs     nfs  rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp     0 0
        nfs:/bigdir  /home/bigdir  nfs  rw,hard,intr,rsize=32768,noatime,nocto,proto=tcp,bg  0 0

    The issue is that from time to time the /home/bigdir folder will be empty, even though mtab thinks the share is still mounted. nfsstat et al. also think the share is still mounted. The only thing that works is unmounting and then (re)mounting the bigdir share. The server side is a NetApp. The client side is RHEL 5.5 with the 2.6.18-194 kernel (yes, I know 5.8 is out, but as far as I can see there are no errata for this particular issue). I can use various hacks like automount, or mounting it to another path and then using mount --bind, but I would like to fix the underlying issue. -- Best regards, Espen Fjellvær Olsen
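
    As a stop-gap while the root cause is hunted down, a cron job can at least detect the vanished nested mount and put it back (a workaround sketch, not a fix):

        # /etc/crontab: remount /home/bigdir if it is no longer a mountpoint
        */5 * * * *  root  mountpoint -q /home/bigdir || mount /home/bigdir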

  • Virtualization or Raw Metal?

    - by THE
    With the growing number of customers who want to run Oracle EPM/BI (or other Fusion Middleware software) in a virtualized environment, we face a growing number of people asking whether running Oracle software within VMware is supported or not. Two KM articles reflect Oracle's policy towards the use of VMware: 249212.1 and 475484.1. The bottom line is: "you may use it at your own risk, but Oracle does not recommend it". So far we have seen few problems with the use of VMware (other than performance and the usual limitations), but Oracle does not certify its software for use in VMware (and for RAC software it actively refuses any support), and any issue that may occur will be fixed for the native OS only. It is on the customer to prove that an issue is NOT due to VMware in case one is encountered. See the "Oracle Fusion Middleware Supported System Configurations" page and also "Supported Virtualization and Partitioning Technologies for Oracle Fusion Middleware".

  • How do you compare job offers from companies in different countries?

    - by Danny Tuppeny
    This isn't really a programmer-specific question, but I'm not sure of a more appropriate place, and I think the users of this site are best able to answer the question in the context of programmers. Relocating to the US seems fairly common in the programming industry. I live in the UK, and maybe one day, I might do it too. So, if that day comes - how would you go about comparing job offers? Benefits are fairly easy to compare, but given the differences in cost of living, how would you go about comparing salaries and the quality of living you'll have? In a country where the cost of living is lower, you might be able to accept a lower salary (based on exchange rate) and still have the same quality of living. But what can you do to ensure this? In some cases, you may even take a "pay rise" in terms of exchange rate, but end up far worse off. How can you compare job offers across different countries to get an idea of the salary you would need in order to not feel you've gone "backwards"?

  • Sabnzbd Installed on Linux NAS

    - by Mike Szp.
    I installed SABnzbd on a Linux-based NAS. The directory it downloads to is mapped differently on the NAS itself, because the path that SABnzbd knows about starts in its own folder. If this sounds confusing, let me give an example:

        \\MYNAS\Volume_1\                                  <- the path of the drive on the NAS
        \\MYNAS\Volume_1\Downloads                         <- where I would like my SABnzbd downloads to go
        \\MYNAS\Volume_1\ffp\opt\optware\share\SABnzbd     <- where SABnzbd is installed
        /ffp/opt/optware/share/SABnzbd/downloads/complete  <- the default download directory (as indicated in SABnzbd)

    I know that the mapping is different somehow because it is installed on the NAS, but I am just lost as to what I should do. So far, I have tried the following for the complete folder:

        /192.168.restofip/Volume_1/downloads/complete
        /Volumes/Volume_1/downloads/complete
        /Volume_1/downloads/complete

    Does anyone know how to change the path so that I can have it download to one of the topmost folders on the NAS, instead of a folder so deep in the drive?
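
    One pointer: SABnzbd resolves relative paths against its own installation directory, which is why the default ends up so deep. Giving it an absolute path on the NAS's local filesystem avoids that; the trick is finding where Volume_1 lives locally (run df or mount on the NAS to see, since \\MYNAS\Volume_1 is only the network view). For example, if the volume turned out to be mounted at /mnt/HD_a2 (purely a guess that must be verified):

        # SABnzbd -> Config -> Folders -> Completed Download Folder
        /mnt/HD_a2/downloads/complete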

  • How do you sync photos with your family?

    - by romkyns
    We've always been trying to share photos within our family, despite all living in different countries. This has been very challenging. We have about 50 GB of photos that we share with each other. Everyone organizes them differently, so rsync/Syncplicity don't work. Everyone likes their photos on a fast hard drive rather than on a slow website with reduced quality, so online sharing websites are a no-go. So far we have essentially been syncing them manually, via a shared folder that new photos are placed into and collected from. This is laborious and prone to errors that usually leave one of us without all the photos, without even knowing it. To those who are in a similar situation: how do you solve this?

  • Simple NAS setup for Ubuntu

    - by Sean Houlihane
    So, I want to connect my shiny new NAS (QNAP TS-210p II) to my Ubuntu 11.10 box. I have a second Ubuntu machine, but I don't use that enough to be too worried about it. The idea is to offload all storage except the system itself (which can then run off a 30 GB flash drive). The main machine also runs MythTV front- and backends. Both machines are currently on a wired network. I have a basic setup working, mounted over NFS, but I have some issues, and whenever I look for answers I seem to end up in Ubuntu Server territory, with what seems like more detail than I want in an answer. The questions I have identified so far:

    1. Sharing UIDs to get file permissions correct: LDAP or something else? What is necessary to make this work?
    2. Mounting an NFS device reliably: do I just add it to fstab, or is some sort of automount advisable? It's a wired network, but I shouldn't need to worry about reboots after power outages, mount ordering and so on.
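
    For the second question, a minimal sketch (host name, export path and mount point are assumptions): a plain fstab entry with the bg option retries the mount in the background if the NAS is not up yet, which covers most power-outage sequencing worries:

        # /etc/fstab
        mynas:/share/Data  /mnt/nas  nfs  rw,hard,intr,bg  0 0

    For the first question, plain NFSv3 simply compares numeric UIDs, so giving your user the same UID on both Ubuntu boxes (and on the NAS) is the low-tech alternative to LDAP.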

  • How can I copy files from a DVD skipping corrupt files?

    - by ujjain
    I have so far tried the following software solutions without success:

    - Windows Explorer copy (default)
    - TeraCopy
    - Roadkil's Unstoppable Copier

    Unstoppable Copier stalls completely: it is still working on the same file after 20 minutes, regardless of any settings. I have spent much time Googling, but the one program ("Unstoppable Copier") that people recommended did not do the trick, as it keeps choking on the same file despite settings of "Fastest Data Recovery", "Maximum Retries 0" and "Undamaged Files First". The back side of the DVD has some scratches. What options do I have? I have access to both Ubuntu and Windows 7.
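
    Since Ubuntu is available, GNU ddrescue is built for exactly this job: it images the whole disc, skips unreadable sectors on the first pass, and records them in a log so later passes retry only the bad spots. A sketch (the drive may be /dev/sr0 or similar):

        # install GNU ddrescue and take a first, fast pass over the disc
        sudo apt-get install gddrescue
        sudo ddrescue -n -b 2048 /dev/sr0 dvd.iso dvd.log
        # second pass: retry only the bad areas, up to 3 times, with direct access
        sudo ddrescue -d -r 3 -b 2048 /dev/sr0 dvd.iso dvd.log
        # mount the image and copy out the surviving files
        sudo mount -o loop dvd.iso /mnt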

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from the file server. There is about 100 GB of data in total, but so far Varnish has downloaded 800 GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 -f varnish/etc/varnish/default.vcl \
            -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
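
    Two quick checks worth running: confirm whether requests are being counted as hits, and watch what the VCL decides per request. If everything shows up as a miss or a pass despite the headers above, the default VCL is deeming the responses uncacheable for some other reason (cookies on the request are a common culprit):

        # cumulative hit/miss counters
        varnishstat -1 | egrep 'cache_hit|cache_miss'
        # per-request VCL decisions and object TTLs
        varnishlog | egrep 'VCL_call|TTL|TxStatus'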
