Search Results

Search found 28016 results on 1121 pages for '! original content'.


  • Is Unix not a PC Operating System?

    - by Corelgott
    I am doing my Bachelor's at a university. In a written assignment the professor posed the task: "Name 3 PC operating systems". Well, I went ahead and included a variety of operating systems (Linux, Windows, OS X), including Unix & Solaris. Today I received a mail from my prof saying: Unix is not a PC operating system. Many Unix variants are not PC-hardware compatible (like AIX & HP-UX; as for Solaris, there was one PC-compatible version...). I am kind of surprised: even if many Unix variants target PowerPC or a different byte order, those machines don't stop being PCs, right? The question was given in a written assignment! It was not a question that came up during a lecture! Since the original task was in German, I'll include it here just to make sure nobody suspects an error in the translation. Frage: Nennen Sie 3 PC-Betriebssysteme. Antwort: Unix ist kein PC-Betriebssystem, viele Unix-Varianten sind nicht auf PC-Hardware lauffähig (AIX, HP-UX). Von Solaris gab es mal eine PC-Variante. (In English: Question: Name 3 PC operating systems. Answer: Unix is not a PC operating system; many Unix variants do not run on PC hardware (AIX, HP-UX). There was once a PC version of Solaris.)

    Read the article

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange - the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command:

        mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob

    I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps:

        Time stamps jumped from 59.753789 to 0.001622 at position 1d29800
        Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800
        Time stamps jumped from 60.077878 to 0.001622 at position 43cc000
        Time stamps jumped from 60.024233 to 0.001622 at position 65c5000
        Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000

    I've tried to fix the indexes using:

        mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg

    But mpgtx will still report timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing. If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows. Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands. Edit again: Turns out my other solution didn't quite work - the audio desynchronized by about five seconds in about half of my cut mpgs - so I'm back to square one. Anyone?
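
    One possible way to script the cutting step, sketched here on the assumption that the timestamps have been repaired and that ffmpeg is available (the chapter list file and output names below are hypothetical, not from the original post):

        #!/bin/sh
        # chapters.txt holds one "start duration" pair per line, e.g. "00:30:11 00:03:14"
        n=1
        while read -r start duration; do
            # -ss before -i seeks to the start time; -c copy cuts without re-encoding
            ffmpeg -ss "$start" -i fixed.vob -t "$duration" -c copy "chapter_$n.mpg"
            n=$((n + 1))
        done < chapters.txt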

    Read the article

  • How to get nginx to pass HTTP_AUTHORIZATION header to Apache

    - by codeinthehole
    I'm using Nginx as a reverse proxy to an Apache server that uses HTTP auth. For some reason, I can't get the HTTP_AUTHORIZATION header through to Apache; it seems to get filtered out by Nginx. Hence, no requests can authenticate. Note that the Basic auth is dynamic, so I don't want to hard-code it in my nginx config. My nginx config is:

        server {
            listen 80;
            server_name example.co.uk;
            access_log /var/log/nginx/access.cdk-dev.tangentlabs.co.uk.log;
            gzip on;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 120;
            location / {
                proxy_pass http://localhost:81/;
            }
            location ~* \.(jpg|png|gif|jpeg|js|css|mp3|wav|swf|mov|doc|xls|ppt|docx|pptx|xlsx|swf)$ {
                if (!-f $request_filename) {
                    break;
                    proxy_pass http://localhost:81;
                }
                root /var/www/example;
            }
        }

    Anyone know why this is happening? Update - turns out the problem was something I had overlooked in my original question: mod_wsgi. The site in question here is a Django site, and it turns out that Apache does get the auth variables passed through; however, mod_wsgi filters them out. The resolution is to use:

        WSGIPassAuthorization On

    See http://www.arnebrodowski.de/blog/508-Django,-mod_wsgi-and-HTTP-Authentication.html for more details

    Read the article

  • Backing up Excel Files to a different Directory

    - by Joe Taylor
    In Excel 2007, in the Save As box there is an option to 'Create a Backup' which simply backs up the file whenever it is saved. Unfortunately it backs up the file to the same directory as the original. Is there a simple way to change this directory to another drive / folder? I have messed about with macros to do this, coming up with:

        Private Sub Workbook_BeforeClose(Cancel As Boolean)
        'Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean)
        'Saves the current file to a backup folder and the default folder
        'Note that any backup is overwritten
        Application.DisplayAlerts = False
        ActiveWorkbook.SaveCopyAs Filename:="T:\TEC_SERV\Backup file folder - DO NOT DELETE\" & _
            ActiveWorkbook.Name
        ActiveWorkbook.Save
        Application.DisplayAlerts = True
        End Sub

    This creates a backup of the file OK the first time; however, if this is tried again I get:

        Run-Time Error '1004';
        Microsoft Office Excel cannot access the file 'T:\TEC_SERV\Backup file folder - DO NOT DELETE\Test Macro Sheet.xlsm'. There are several possible reasons:
        The file name or path does not exist
        The file is being used by another program
        The workbook you are trying to save has the same name as a...

    I know the path is correct, and I also know that the file is not open anywhere else. The workbook has the same name as the one I'm trying to save over, but it should just overwrite. I have posted the question about the coding on Stack Overflow but wondered if there is an easier way to do this. Any help would be much appreciated. Joe

    Read the article

  • Tidy up old Windows Server Backup snapshots

    - by dty
    Hi, I'm running wbadmin from a scheduled job, backing up my C: and D: drives to my E: and (I believe!) including the system state:

        wbadmin start backup -backuptarget:e: -include:c:,d: -allCritical -noVerify -quiet

    I'd like to delete old backups, but I'm concerned that all the information I can find says to use wbadmin to delete old system state backups, and vssadmin to delete other backups. As far as I know, my backups ARE system state backups, but are using VSS on E: for storage, so I'm worried about trying either of these techniques for fear of losing all my backups. This is a home network, so I don't have a spare server to test this on. I'm also happy to simply restrict the space used on E:, but I can't make sense of the difference between the /for and /on parameters of the relevant vssadmin command. For reference, here's the output of vssadmin show shadows:

        Contents of shadow copy set ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Contained 1 shadow copies at creation time: 07/01/2011 08:12:05
        Shadow Copy ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
        Original Volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy83
        Originating Machine: x.y.com
        Service Machine: x.y.com
        Provider: 'Microsoft Software Shadow Copy provider 1.0'
        Type: DataVolumeRollback
        Attributes: Persistent, No auto release, No writers, Differential
        [... repeated a lot...]

    vssadmin show shadowstorage:

        Shadow Copy Storage association
        For volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Shadow Copy Storage volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Used Shadow Copy Storage space: 0 B
        Allocated Shadow Copy Storage space: 0 B
        Maximum Shadow Copy Storage space: 5.859 GB

        Shadow Copy Storage association
        For volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Shadow Copy Storage volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Used Shadow Copy Storage space: 0 B
        Allocated Shadow Copy Storage space: 0 B
        Maximum Shadow Copy Storage space: 40.317 GB

        Shadow Copy Storage association
        For volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Shadow Copy Storage volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        Used Shadow Copy Storage space: 168.284 GB
        Allocated Shadow Copy Storage space: 171.15 GB
        Maximum Shadow Copy Storage space: UNBOUNDED

    wbadmin get versions:

        Backup time: 07/01/2011 03:00
        Backup target: 1394/USB Disk labeled xxxxxxxxx(E:)
        Version identifier: 01/07/2011-03:00
        Can Recover: Volume(s), File(s), Application(s), Bare Metal Recovery, System State
        [... repeated a lot...]
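
    On the /for vs /on confusion specifically, a hedged sketch (the drive letter comes from the post, the size cap is an illustrative value): /For= names the volume whose shadow copies are being described, and /On= names the volume that physically holds the shadow copy storage, so capping the space that E:'s snapshots may use on E: itself would look like:

        vssadmin resize shadowstorage /For=E: /On=E: /MaxSize=150GB

    Shrinking the storage area below what is currently used can cause the oldest shadow copies to be dropped, so the value is worth double-checking before running it against the only copy of the backups.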

    Read the article

  • Windows 7 boot issues

    - by Michael
    OK, I tried to install Linux and dual-boot my laptop with Windows 7 Ultimate. I messed up. When I tried to boot into 7 it said no - something along the lines of "device not found". So, being young and stupid, I uninstalled Linux (which I could boot into), and I still could not boot to Windows. The next step was to run the startup fixes from the boot CD. Swing and a miss; I also ran fixmbr and fixboot. Which brings us up to my current place: I installed 7 again on my blank partition in hopes I could access my other partition. No dice. So my question to y'all is: how can I fix my original filesystem, or at least get to the stuff on it? In the new 7 install the old partition does not even have a drive letter. That is my sad story - any help would be appreciated.
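
    For the "no drive letter" part specifically, one thing worth trying (a sketch only - the volume number below is hypothetical and should be read off the list first) is assigning a letter to the old partition from an elevated command prompt, which at least makes its files reachable if the filesystem is intact:

        diskpart
        list volume
        select volume 2
        assign letter=E
        exit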

    Read the article

  • OS X: MySQL not dealing properly with data directory via soft link

    - by GJ
    I am trying to get a MacPorts-installed MySQL to use a data directory stored inside my FileVault-protected home dir. I used:

        sudo cp -a /opt/local/var/db/mysql5 ~/db/

    (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link:

        sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5

    However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log, which gets this appended to it:

        110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5
        110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead.
        110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead.
        110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive
        110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled.
        110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled.
        /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist
        110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332
        110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13)
        110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied
        110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended

    I can't see why MySQL can't create the pid file, since manually creating it as the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5). Any ideas how to resolve this?
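
    One difference between creating the pid file by hand and letting mysqld do it is that the daemon has to traverse every directory component of the path as _mysql, so a quick permissions check along those lines may be worthwhile (a sketch with the paths assumed, not a confirmed diagnosis):

        # can the daemon user actually reach the data dir through the symlink?
        sudo -u _mysql ls -ld /opt/local/var/db/mysql5/ ~/db ~/db/mysql5
        # home directories are often mode 700; the parent dirs need at least
        # execute (search) permission for _mysql, and the data dir itself
        # needs to be owned/writable by _mysql
        ls -le ~ ~/db
        sudo chown -R _mysql:_mysql ~/db/mysql5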

    Read the article

  • Outlook certificate error and separate send/receive error

    - by Richard
    I run a laptop with Vista 32-bit and MS Office 2010. Outlook has two profiles, both configured as POP3/SMTP, and neither goes through an Exchange server. Recently, one of the mail servers (hosted with Easily) was getting full, so I changed the profile setting to delete from the server any mails older than 60 days. Suddenly, I am now experiencing a couple of glitches. The first is that I get a certificate error when Outlook tries its first send/receive under the relevant profile - "The server you are connected to is using a security certificate that cannot be verified". This continues despite apparently successfully re-importing the certificate. The second glitch is that I get a "Sending reported error (0x8004010F): 'Outlook data file cannot be accessed'" error on send/receive. Strangely, it seems to be trying to send/receive twice - once to 'mail@domain', which works, and a second time to 'domain', which doesn't. I've tried deleting the profile and re-creating it, pointing to the original .pst file, but I still get both errors. Does anybody know how I can resolve these errors? (As a side note, and more out of curiosity than importance: does anybody know why simply changing the delete-from-server setting against that profile would cause these issues?)

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise with 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. Files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including:

        gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
        bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
        WinRAR 3.90
        7-Zip 4.65 (7za.exe)

    I've attempted to use the WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either - no matter the tool I've tried, it ends up running into the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately. For example, this is how 7-Zip dies:

        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak

        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning

        Creating archive h:\dbatmp\zip\db-20100419_1228.7z

        Compressing  db-20100419_1228.bak

        System error:
        Unspecified error
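
    Whatever compressor ends up working, the transfer itself can also be made more tolerant of a slow or flaky pipe by copying the split volumes with a restartable copy tool; a sketch, with the destination share name purely hypothetical:

        robocopy H:\dbatmp\zip \\testbox\dbadrop db-20100419_1228.7z.* /Z /R:5 /W:30
        rem /Z = restartable mode, /R and /W = retry count and wait between retries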

    Read the article

  • How can I make Internet Explorer 6 render Web pages like Internet Explorer 11?

    - by gparyani
    Now, I know that this may seem like a bad question, since I could just upgrade to Internet Explorer 8, but I am sticking with IE6 because IE8 removes valuable features, like the ability to save favorites offline, and the fact that a file path turns into a Windows Explorer window while typing a Web address into Windows Explorer changes it into an IE window. I know that Internet Explorer 6 does a really bad job at rendering some pages. I know of the Google Chrome Frame extension that brings Chrome-style rendering into IE, but that will soon be discontinued. So I tried another thing: I know that C:\Windows\System32\mshtml.dll contains the Trident rendering engine that is used by IE, so I first backed up the original file by renaming it on Windows XP to mshtml-old.dll, then I tried to copy in the DLL from a computer running Windows 7 with Internet Explorer 10. I noticed that, after copying, the system had replaced the new DLL with the old one, but left the one I backed up intact. Is there any way I can get the system to not replace the DLL like that, so that I can transfer IE11's mshtml.dll into Windows XP and make IE6 render like IE11? I'm looking for an answer that describes how to tweak my system to make IE6 render like IE11 (or IE10), not one that tells me to upgrade IE or install another browser. I don't care how tedious the method is, just as long as it works.

    Read the article

  • I receive email not addressed to me - virus?

    - by Anne
    Every once in a while I receive email (on Gmail) that isn't addressed to me. Gmail puts it in the spam box, because it 'can't verify that it has been sent by [sender]'. The emails, when opened, contain confidential information about deliveries and paid bills (it does look an awful lot like 'real' mail from well-known companies, and it doesn't look like a scam, since the mail is informative - they give information instead of asking for credit card numbers ;-)), and I even got an email from "Facebook" that I requested a password change and that I have to 'click here' to change the password for [email address that isn't mine]. I am not the only addressee, there seems to be a whole list of Gmail addresses beginning with 'a'. The original addressee obviously has some sort of virus, and now I wonder if this could be a risk for me too. Is my email being sent around without my knowing too? I am not the kind of person who randomly clicks on shady links - I am very careful on the internet - but maybe there are other ways of catching viruses? Is there something I should do/check? Thank you for your help!

    Read the article

  • Installing Windows 7 over PXE, preferably with domain autojoin

    - by Ivan Vucica
    At an educational non-profit, I've inherited a previously set-up Windows domain that, after the first reinstall of the machines, we ended up not using by simply not joining machines back into the domain. Over last summer, before the annual reinstall for shipping machines to the summer school, I toyed with the idea of installing Windows 7 over network, instead of just imaging the machines. It took a bit longer than I expected to figure out the basics; honestly, I expected that Windows would be more friendly for PXE installation out of the box. What I'm interested in is best practices for installing Windows 7 over PXE with domain autojoin. I'd love it if the whole setup could optionally be hosted on a UNIX based system as well. I've had some success by preparing an ISO using Windows Deployment Kit, and loading the ISO into memory. This was needed since I wanted a menu, and I think I couldn't get PXELINUX to chainload into Windows' bootloader. Unfortunately, I couldn't figure out much about customization of the Windows setup in that timeframe nor could I get Samba to work properly; studying the stuff ended up being too lengthy, especially the portion where I edited a disk image on Windows and copied it outside. WDK didn't make things easier by mounting the disk image into RAM, and writing it in its entirety when done with it, making me a very sad boy. I've recently found a different approach, too, that appears to be closer to Microsoft's original idea for netboot deployment and does not involve ISOs. So my question boils down to the following. What exact approach do you use for netbooting Windows 7 setup? How can Windows 7 setup be best customized to be completely unattended, including installation on specific system partition and not destroying the data partition, creation of passworded admin and default user, choice of MAC-address-based hostname, and joining a domain? As much details as possible for everyone's future reference would be appreciated. WDS isn't a bad choice, but if a Linux-based install can be used, that'd be better.
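
    On the domain-join piece specifically, one command-line route worth knowing about (mentioned here as an aside, not something from the original setup; the domain, machine and file names are placeholders) is Windows 7's offline domain join: a provisioning blob is generated once on a domain-joined machine and then consumed during deployment, which avoids embedding live admin credentials in the image:

        rem on a machine that can reach the domain controller
        djoin /provision /domain example.local /machine LAB-PC01 /savefile odj-blob.txt

        rem during deployment, on the machine being installed (offline join)
        djoin /requestODJ /loadfile odj-blob.txt /windowspath C:\Windows /localos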

    Read the article

  • One-To-Many Powershell Scripts

    - by Matt
    I'm trying to create a script to run as a scheduled task, which will run against multiple servers and retrieve some information. To start with, I populate the list of servers by querying AD for all servers that match a certain set of criteria, using Get-ADComputer. The problem is, the list is returned as an object, which I can't then pass to the New-PSSession list. I have tried converting it to a comma-separated string by doing the following:

        foreach ($server in $serverlist) {$newlist += $server.Name + ","}

    but this still doesn't work. The alternative is to iterate through the list and run the various commands against each server one at a time, but my preference would be to avoid this and run them using one-to-many remoting. UPDATE: To clarify, what I want to end up being able to do is use -ComputerName $serverlist, so I want $serverlist to be a string rather than an object. UPDATE 2: Thanks for all the suggestions. Between them and my original method I'm starting to wonder whether -ComputerName can accept a string variable? I've had varying degrees of success getting the list of computers converted to a comma-separated string, but no matter how I do it I always get "invalid network address".
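
    For what it's worth, -ComputerName expects an array of names (string[]) rather than one comma-joined string, so a sketch along the following lines may be closer to what's needed (the AD filter shown is an assumption, not the one from the original script):

        # build a plain string array of names; -ExpandProperty strips the object wrapper
        $serverlist = Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' |
            Select-Object -ExpandProperty Name

        # one-to-many remoting against the whole array at once
        Invoke-Command -ComputerName $serverlist -ScriptBlock { Get-Service W32Time }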

    Read the article

  • Convert TOD (MPEG-2) video to AVI, MPG or WMV for Movie Maker

    - by yearofhao
    Need to convert .tod (MPEG-2) files downloaded from a JVC Everio to a PC into AVI, MPG or WMV? Have a JVC Everio camcorder? Then you may run into problems when saving the .tod files to your computer: Windows Movie Maker says it can't recognize them, so you can't edit them into videos. You can play them with a media player, but the problem is how to edit them. The bundled software, PowerCinema, can be annoying, since you can only edit while the camera is plugged into the PC - PowerCinema doesn't seem to be able to edit from the saved clips alone. So how do you save the clips to the PC so that you can edit them without the camera, using Windows Movie Maker? The JVC Everio TOD to AVI/MPG/WMV converter is a paid tool that converts TOD files to AVI, MPG, WMV, YouTube FLV, MP4, DV, QuickTime MOV and other common video formats quickly while keeping the original HD quality. High-definition TOD recordings from JVC camcorders can be played back, converted and edited with the iOrgsoft TOD file converter. The iOrgsoft TOD to AVI/MPG/WMV converter is mostly used on Windows 7 or Vista; after the .tod (MPEG-2) files are copied from the JVC Everio to the PC, it is best to convert TOD to AVI (XviD/DivX, uncompressed or raw AVI), MPG or WMV, which are the formats Windows Movie Maker imports best. The converter also lets you clip/cut TOD video, crop the video before encoding, and transfer video to devices like the iPhone, iPod, or an HDTV connected to an Apple TV.
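
    As an aside not taken from the original post: since .tod files are plain MPEG-2 streams, a free command-line conversion is also possible with ffmpeg, assuming it is installed (the file names and bitrate here are hypothetical):

        ffmpeg -i MOV001.TOD -c:v wmv2 -b:v 6000k -c:a wmav2 MOV001.wmv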

    Read the article

  • RAM ok in memtest86+ == RAM ok after wake from sleep?

    - by twon33
    I have a Windows XP (32-bit) system that appears stable in normal operation, but was repeatably freezing (hard lock, no BSOD) a minute or so after waking from S3 sleep. Some Googling against the motherboard model and memory manufacturer suggested that I might need to bump up the memory voltage, so I tried it and it now seems to resume without freezing. However, I don't really trust it and I'd like to validate that it's actually stable, especially after resuming from sleep. I've run Prime95 for a few hours with no issues, and am planning an overnight run of Memtest86+, which I expect to pass because the system has been solid whenever I've run it without putting it to sleep. Does something like Memtest86+ exist that actually invokes S3 sleep during operation? Clearly it would need an operator to wake the computer to resume testing, but I don't think I've ever heard of a memory test tool that can do this. Alternately, am I wasting my time? Should a clean bill of health from Memtest86+ indicate stability regardless of whether sleep is involved, or, conversely, does my original problem indicate that Memtest86+ would have failed eventually with the stock voltage if I'd run it, sleep or not?

    Read the article

  • Mailman delivery troubles

    - by stanigator
    I'm not sure if this is a good place to ask this question. It's about the mailing list management software Mailman from GNU. Here are the details:

        Hosting provider: Vlexofree
        Domain: www.sysil.com with Google Apps
        Mailing list created from hosting cpanel: [email protected]

    I have registered a list of subscribers and tried sending an email to [email protected]. I got the following error message:

        Delivery to the following recipient failed permanently: [email protected]
        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://mail.google.com/support/bin/answer.py?answer=6596 23si6479194ewy.44 (state 14).

        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.216.90.136 with SMTP id e8mr1469147wef.110.1264220118960; Fri, 22 Jan 2010 20:15:18 -0800 (PST)
        Date: Fri, 22 Jan 2010 20:15:18 -0800
        Message-ID: <[email protected]>
        Subject:
        From: Stanley Lee <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=0016e6dab0931bccc3047dcd2f1e

    Is there any way of fixing this problem? I would like this mailing list to work through my hosting and domain. Thanks in advance.
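
    One quick diagnostic worth running here (an aside, not from the original exchange) is to check which server the domain's MX records actually point at, since the bounce shows Google Apps answering for the domain while the list itself lives on the hosting account:

        dig +short MX sysil.com

    If the MX records point only at Google's servers, mail addressed to [email protected] is delivered to Google and never reaches the host where Mailman is running.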

    Read the article

  • How do I stop panning on a monitor that supports a specific resolution?

    - by IronicMuffin
    Hi all, I've been battling this for a few days now. Any and all help is appreciated. I have a planar monitor with a native res of 1280x1024. At one point, I had used PowerStrip to override "something" and set the res to 1600x1200, and it worked great. I then installed new intel graphics drivers for my 86895g (or w/e model) video card, which screwed up whatever settings I had. If I set it to 1600x1200 this time, it would set the res correctly, but give me a 1280x1024 viewport and the screen would pan when the mouse got to the edges of the screen. Absolutely not useful. Ok, so I was limited to 1280x1024 now. W/e. Now...enter new video card with two video ports. I have two monitors now and the latest nVidia drivers. I decide to try to get dual 1600x1200 going...ended up screwing the original monitor up so much now that it's at 1280x1024, with a 1024x768 viewport and panning! Absolutely not usable now. So what I need, and I can't seem to find on any forums, is help doing one or more of the following: Clearing out all monitor/edid info out of the windows registry without corrupting the registry. Actually correctly override the EDID values and get my sweet res back. Some other way of getting back to at least dual 1280x1024 with NO panning. Note: My device manager shows 4 monitors for some reason. My registry shows entries for all sorts of monitors that have been hooked up to the machine over the years. It's making it difficult to debug. Experience with PowerStrip would be helpful. I've been mucking with Phoenix EDID designer and MonInfo as well, but I'm stumbling around in the dark with these. Windows XP SP2 nVidia GeForce 6200 nVidia drivers: v258.96 Monitor: Planar PL 1910M Thanks!

    Read the article

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

    When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example:

        http://www.borngeek.com/nothere/
        GET /nothere/ HTTP/1.1
        Host: www.borngeek.com
        {...}
        HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

        {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was redirected to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc). I'd love to have them show up on the Apache side as 404s so that they get filtered out from the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules):

        ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance? Additional Info: OS is Debian 6.0.4, and Apache version looks to be 2.2.22-3 (hosted on DreamHost). The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere).

    Read the article

  • converting a png with an ICC profile?

    - by jedierikb
    I can convert a JPG from one ICC profile to another:

        convert rgb_image.jpg -profile USCoat.icm cmyk_image.jpg

    Or I can convert a JPG with no ICC profile to another ICC profile:

        convert rgb_image.jpg +profile icm \
            -profile sRGB.icc -profile USCoat.icm cmyk_image.jpg

    But how do I convert a PNG's pixels into the gamut described by an ICC profile? I understand I cannot embed the profile into the image file, but I would at least like to convert the colors. When I reuse the above commands, the colors come out wrong (different from the colors in the JPG when converted). This is the source image: http://alumni.media.mit.edu/~erikb/tmp/RED_JPG.jpg And here is what I am trying:

        convert RED_JPG.jpg +profile icm -profile sRGB_v4_ICC_preference.icc -profile USWebUncoated.icc CMYK_PNG.png

    and this is what I am getting: http://alumni.media.mit.edu/~erikb/tmp/CMYK_PNG.png I was hoping to get an image with the same colors as a JPEG run through the same command:

        convert RED_JPG.jpg +profile icm -profile sRGB_v4_ICC_preference.icc -profile USWebUncoated.icc CMYK_JPG.jpg

    resulting in: http://alumni.media.mit.edu/~erikb/tmp/CMYK_JPG.jpg This image, CMYK_JPG.jpg, is what I am trying to reproduce pixel by pixel in a PNG file. Any suggestions? Original (unanswered) post here.
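
    One quick check that may narrow this down (a diagnostic sketch, not from the original post) is to ask ImageMagick what colorspace each output file actually ended up in:

        identify -format "%f: %[colorspace]\n" CMYK_JPG.jpg CMYK_PNG.png

    If the PNG comes back as sRGB/RGB while the JPG reports CMYK, the difference is the container rather than the profile handling, since the PNG format has no CMYK storage mode and the pixels get mapped back before being written.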

    Read the article

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. Traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests). The webapp depends on many servers (databases, frontends, etc...). I can think of two basic directions:

        1. Recording every incoming request with its timestamp in production in a centralized manner, and replaying it from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, getting the centralized log is not trivial.

        2. Having a system duplicating requests to a staging area so that I could "plug" a dev version of my webapp into it at any time without affecting production. Issue: I have not found much information about it except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition.

    What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.
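
    For the second direction above, one family of tools worth naming (my addition, not something mentioned in the post) captures production traffic off the wire and forwards a copy to a staging host; GoReplay (formerly gor) is one example, and a minimal invocation looks roughly like this, with the staging hostname being a placeholder:

        # mirror incoming HTTP traffic on port 80 to the staging copy of the app
        gor --input-raw :80 --output-http "http://staging.example.com"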

    Read the article

  • Server freezes while installing Redhat Enterprise Linux Server 6

    - by eisaacson
    We've tried both of the first two options on the install menu:

        Install or upgrade an existing system
        Install system with basic video driver

    When trying option #1, it gets to a screen that has a solid cursor about halfway down, then freezes. When trying option #2, it freezes at the point where it says: "Waiting for hardware to initialize..." Of course, we bought the unsupported version and haven't found anything to help us so far. Here are the specs of the server in the original post:

        ASUS P8Z68-M Pro LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard with UEFI BIOS
        RAIDMAX Reiter ATX-305WBP Black Steel / Plastic ATX Mid Tower Computer Case 450W Power Supply
        Intel Core i7-2600 Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor Intel HD Graphics 2000 BX80623I72600
        16GB RAM
        OCZ Agility 3 SSD 120GB

    From some of the posts out there, could the UEFI BIOS or the Sandy Bridge processor be a culprit here? We just tried the DVD on a different computer and it got past that point with ease. It's a standard Dell build compared to our custom machine. Could it be having difficulty recognizing drivers? How do we get past that?
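
    A workaround that is commonly suggested for installer freezes on newer integrated graphics (offered here as an assumption to test, not a confirmed fix for this board) is to disable kernel mode-setting for the installer: at the install menu, press Tab on the highlighted entry and append nomodeset to the boot line, roughly like this:

        vmlinuz initrd=initrd.img nomodeset

    If the installer then gets past the hardware initialization step, the freeze was most likely graphics driver initialization rather than the UEFI firmware itself.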

    Read the article

  • SSH Private Key Not Working in Some Directories

    - by uesp
    I have a strange issue where SSH won't properly connect with a private key if the key file is in certain directories. I've set up the keys on a set of servers, and the following command:

        ssh -i /root/privatekey [email protected]

    works fine and I log in to the given host without being prompted for a password, but this command:

        ssh -i /etc/keyfiles/privatekey [email protected]

    gives me a password prompt. I've narrowed it down to the point that this behavior occurs in only some sub-directories of /etc/. For example, /etc/httpd1/ gives me a password prompt but /etc/httpd/ does not. What I've checked so far:

        All private key files used are identical (copied from the original file).
        The private key file and directories used have identical permissions.
        No relevant error messages in the server/client logs.
        No interesting debug messages from ssh -v (it just seems to skip the key file).
        It happens when connecting to different hosts.

    After more testing, it is not the actual directory name. For example:

        mkdir /etc/test
        cp /root/privatekey /etc/test
        ssh -i /etc/test/privatekey [email protected]    # Results in password prompt
        cp /root/privatekey /etc/httpd                  # Existing directory
        ls -ald test httpd
        # drwxr-xr-x 4 root root 4096 Mar  5 18:25 httpd
        # drwxr-xr-x 2 root root 4096 Mar  5 18:43 test
        ssh -i /etc/httpd/privatekey [email protected]   # Results in *no* prompt
        rm -r test
        cp -R /etc/httpd /etc/test
        ssh -i /etc/test/privatekey [email protected]    # Results in *no* prompt

    I'm sure it's just something simple I've overlooked, but I'm at a loss.
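
    One difference the permission checks above would not surface, and that may be worth ruling out (an assumption on my part, not something confirmed in the post), is whether the SELinux file contexts differ between the directories that work and the ones that do not:

        ls -Z /etc/httpd/privatekey /etc/test/privatekey   # compare the security contexts
        restorecon -v /etc/test/privatekey                 # reset to the default label if they differ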

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring Failover. We have a pair of mirrored databases running in high-availability mode and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this had happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in a multi-user mode. Question. Does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular we had an incident once with the database on the original principal (the one I was failing over to) where when trying to detach the database it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.

    Read the article

  • CloneZilla Broke My System? Ubuntu Installation Lost After Running CloneZilla

    - by nicorellius
    I just read through this post and tried to get my installation back using this answer, to no avail. What happened to me is this: I spent an hour or more reading through the CloneZilla docs. I thought I was ready to test it out, so I burned the disc with the ISO image on it and ran it. The system I used was Ubuntu 10.04, 32-bit. Everything seemed to go fine. I made a clone of my first partition and copied it to my second partition. I followed the instructions, removed the disc and rebooted my system. At this point, I would expect to have two bootable Linux installations, identical to one another. However, upon booting, I got this error message:

        error: no such device: 4cf1a6ef-xxxx-xxxx-xxxx-4e3a3ce92bcd
        error: file not found

    I booted from a live Ubuntu disc and was able to see my two partitions: 4cf1(1) and 4cf1(2) (abbreviated, because the volumes have long numbers to identify them). The 50 GB partition, on which the original Ubuntu installation sits, has the plain identifier, and the second partition (175 GB) has the same identifier with an "_" at the end. I could browse the disc partitions and see the files, but I'm not sure what to do next. I know there is a way to restore my GRUB loader and actually boot either of these installations, but my Linux know-how is limited. Can I edit the boot loader file to fix this problem? The only clue I have is that CloneZilla said something about making a new GRUB, but I thought it was going to basically modify it so I could boot either installation. Not sure what happened. I am going to look through this post for the time being to see if I can learn anything to help my problem. But I thought that, since this happened as a result of using CloneZilla, it may be a unique question for this board.
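
    A sketch of the usual way to put GRUB back from a live CD, offered with the caveat that the device and partition names below are assumptions and should be checked with sudo fdisk -l first:

        # mount the original Ubuntu root partition (assumed to be /dev/sda1 here)
        sudo mount /dev/sda1 /mnt
        # reinstall GRUB 2 to the disk's MBR using that partition's /boot
        sudo grub-install --root-directory=/mnt /dev/sda
        # after rebooting into the restored system, regenerate the boot menu
        sudo update-grub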

    Read the article

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus. Authoritative servers:

        DNS1 - Windows 2003
        DNS2 - Old Red Hat ----- Replacing w/ newer version
        DNS3 - Windows 2008 (I installed)

    Caching and recursive resolver servers:

        Server1 - Windows 2003
        Server2 - CentOS 5.2 (I installed)
        Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers, but not a Linux secondary authoritative server. I have a perl script from the original server that pulls data from our DNS1 server. We use DJBDNS and TinyDNS on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to be caching, but the only instructions I see are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with using these instructions, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it from being an issue.

        ps -aux | grep dns
        avahi     3493  0.0  0.2   2600  1272 ?      Ss   Apr24   0:05 avahi-daemon: running [newdns2.local]
        root      5254  0.0  0.1   3920   680 pts/0  R+   09:56   0:00 grep dns
        root      6451  0.0  0.0   1528   308 ?      S    Apr29   0:00 supervise tinydns
        dnslog    6454  0.0  0.0   1540   308 ?      S    Apr29   0:00 multilog t ./main
        tinydns   9269  0.0  0.0   1652   308 ?      S    Apr29   0:00 /usr/local/bin/tinydns
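
    For reference, a minimal sketch of how a tinydns secondary is often fed (my outline, with the master's IP and zone name as placeholders; it assumes the master permits zone transfers and that the ucspi-tcp tools are installed): axfr-get pulls the zone over AXFR and writes it in tinydns-data format, which is then compiled into data.cdb.

        # run from the tinydns data directory, e.g. /etc/tinydns/root
        tcpclient -v 10.0.0.1 53 axfr-get example.edu data.example data.example.tmp
        cat data.* > data
        make    # rebuilds data.cdb for tinydns to serve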

    Read the article
