Search Results

Search found 13488 results on 540 pages for 'calculator date calculation'.

Page 435/540 | < Previous Page | 431 432 433 434 435 436 437 438 439 440 441 442  | Next Page >

  • DMG mounting warning message says "it may make computer less secure or cause other problems"

    - by Cawas
    When I try to open a DMG file I get a warning. I'll just transcribe the image:

        There may be a problem with this disk image. Are you sure you want to open it? Opening this disk image may make your computer less secure or cause other problems.

    What does that mean in fact? What's really wrong with it, and what kind of problem can it cause just by mounting? Someone said:

        When you download a file in Leopard (and Snow Leopard), it's marked as a quarantined file. The OS does this by adding an attribute to the file, tagging where it came from (such as "downloaded by Safari"). This is what causes the prompts when running files that were downloaded from the Internet; you may remember being asked to confirm you'd like to launch program XXX downloaded by Safari on XXX date. New in Snow Leopard, files tagged with the quarantine attribute also have their integrity checked by fsck, and if that verification fails you will see the message you described, triggered by an unused node in the disc image.

    But really, I didn't get that. What's quarantine? I've just downloaded a file here on SL, tried to open it, and got that warning. Apple has documentation about quarantined files, and they seem to work the same on both Leopards. Plus, I got that file using Google Chrome, while that feature seems to work only with Safari.
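
    The quarantine flag mentioned above is an extended attribute that can be inspected from Terminal. A minimal sketch, assuming the image was saved to ~/Downloads/example.dmg (the path and filename are hypothetical):

        # List extended attributes on the downloaded image; a quarantined file
        # shows "com.apple.quarantine" in the output.
        xattr ~/Downloads/example.dmg

        # Show the attribute's value (flags, timestamp, downloading application).
        xattr -p com.apple.quarantine ~/Downloads/example.dmg

        # Removing the attribute suppresses the prompt, but only do this for
        # files you trust.
        xattr -d com.apple.quarantine ~/Downloads/example.dmg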

    Read the article

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess:

        <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </IfModule>

    and looked for mod_deflate in the Loaded Modules section of the phpinfo() output, and I found it. But when I track server responses with Firebug, no gzipped file can be found:

        HTTP/1.1 200 OK
        Date: Sat, 08 Sep 2012 21:41:21 GMT
        Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=604800
        Expires: Sat, 15 Sep 2012 21:41:21 GMT
        Vary: Accept-Encoding
        Keep-Alive: timeout=3, max=50
        Connection: Keep-Alive
        Content-Type: text/css
        Content-Length: 18206

    What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()).

    UPDATE: the request headers:

        GET /d/jquery-ui.css HTTP/1.1
        Host: 127.0.0.1
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
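
    A quick way to rule out the browser and check whether Apache itself is compressing is to request the file with and without an Accept-Encoding header from the command line. A minimal sketch, assuming shell access to the server; the URL is the asker's own test file, and the apachectl binary may be named apache2ctl on Debian-style layouts:

        # Confirm the module is loaded into Apache itself (phpinfo() only
        # reflects the PHP SAPI's view).
        apachectl -M 2>/dev/null | grep -i deflate

        # Request the stylesheet while advertising gzip support; a compressed
        # response carries "Content-Encoding: gzip".
        curl -sI -H 'Accept-Encoding: gzip, deflate' http://127.0.0.1/d/jquery-ui.css | grep -iE 'content-(encoding|length)'

        # Same request without Accept-Encoding, for comparison.
        curl -sI http://127.0.0.1/d/jquery-ui.css | grep -iE 'content-(encoding|length)'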

    Read the article

  • Is the sysadmin/netadmin the de facto project planner at your organization?

    - by gft74
    At my company it has somehow, over the past few years, slowly become my job to come up with a project plan, milestones and timelines for deployment of developer applications. Typical scenario: my team receives a request for a new website/db combo and a date for deployment. I send back a questionnaire for the developer to fill out on all the reqs for the site (SSL? DB? growth projections? etc.). After I get back all the information, the head of development wants a well-developed document covering:

        what servers it will live on
        why those servers
        what the timeline is for creating the resources
        a step-by-step SOP for getting the application onto the server and all related resources created (DNS, firewall, load balancer etc.)

    I may just be whining, but it feels like this is something better suited to our Project Management staff (which we have) or to the developer. I understand that I need to give them a timeline on creating the resources, but I still feel like this is overkill. We already produce documentation on where everything lives and track configuration changes to equipment. How do other sysadmin folks handle this?

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I got strange things happening on the live server, but it's normal on the local server. My local server is a Mac, and my live server is Linux. Consider when I try to access some files:

    http://redddor.babonmultimedia.com/assets/images/map-1.jpg

    This works correctly.

    http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php

    This returns 404. I'm pretty sure my file is there and there is no typo. How come it gives me a 404? There is only one .htaccess on the root of the server and its configuration is like this:

        # For full documentation and other suggested options, please see
        # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions
        # including for unexpected logouts in multi-server/cloud environments
        # and especially for the first three commented out rules

        #php_flag register_globals Off
        #AddDefaultCharset utf-8
        #php_value date.timezone Europe/Moscow

        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        <IfModule mod_security.c>
        SecFilterEngine Off
        </IfModule>

        # Fix Apache internal dummy connections from breaking [(site_url)] cache
        RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC]
        RewriteRule .* - [F,L]

        # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin
        #RewriteCond %{HTTP_HOST} .
        #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
        #RewriteRule (.*) http://www.example.com/$1 [R=301,L]

        # Exclude /assets and /manager directories and images from rewrite rules
        RewriteRule ^(manager|assets)/*$ - [L]
        RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L]

        # For Friendly URLs
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

        # Reduce server overhead by enabling output compression if supported.
        #php_flag zlib.output_compression On
        #php_value zlib.output_compression_level 5
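
    Before digging into the rewrite rules, one thing worth ruling out is filename casing, since a local Mac filesystem is usually case-insensitive while the Linux server's is not; this is only a hypothesis, and the document root below is a hypothetical path. A quick check over SSH:

        # Does the file exist with exactly this casing?
        ls -l /var/www/html/assets/modules/evogallery/check.php

        # Look for the same name ignoring case, in case it was uploaded as
        # Check.php, check.PHP, etc.
        find /var/www/html/assets/modules/evogallery -iname 'check.php'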

    Read the article

  • Configure nginx for multiple node.js apps with own domains

    - by udo
    I have a node webapp up and running with my nginx on Debian Squeeze. Now I want to add another one with its own domain, but when I do so, only the first app is served, and even if I go to the second domain I simply get redirected to the first webapp. Hope you see what I did wrong here:

    example1.conf:

        upstream example1.com {
            server 127.0.0.1:3000;
        }
        server {
            listen 80;
            server_name www.example1.com;
            rewrite ^/(.*) http://example1.com/$1 permanent;
        }
        # the nginx server instance
        server {
            listen 80;
            server_name example1.com;
            access_log /var/log/nginx/example1.com/access.log;
            # pass the request to the node.js server with the correct headers
            # and much more can be added, see nginx config options
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://example1.com;
                proxy_redirect off;
            }
        }

    example2.conf:

        upstream example2.com {
            server 127.0.0.1:1111;
        }
        server {
            listen 80;
            server_name www.example2.com;
            rewrite ^/(.*) http://example2.com/$1 permanent;
        }
        # the nginx server instance
        server {
            listen 80;
            server_name example2.com;
            access_log /var/log/nginx/example2.com/access.log;
            # pass the request to the node.js server with the correct headers
            # and much more can be added, see nginx config options
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://example2.com;
                proxy_redirect off;
            }
        }

    curl simply does this:

        zazzl:Desktop udo$ curl -I http://example2.com/
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.2.2
        Date: Sat, 04 Aug 2012 13:46:30 GMT
        Content-Type: text/html
        Content-Length: 184
        Connection: keep-alive
        Location: http://example1.com/

    Thanks :)
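
    The 301 to example1.com suggests requests for example2.com are being answered by example1's server blocks, which is what happens when the second vhost isn't loaded at all and nginx falls back to the first server block as the default. A few checks worth running, assuming the stock Debian sites-enabled layout (adjust paths if the .conf files live in conf.d/):

        # Is example2.conf actually picked up? List the enabled vhosts and the
        # server_name directives nginx will load.
        ls -l /etc/nginx/sites-enabled/
        grep -R 'server_name' /etc/nginx/sites-enabled/

        # Validate the configuration and reload so the new vhost takes effect.
        nginx -t && /etc/init.d/nginx reload

        # Test name-based routing directly against the box, independent of DNS.
        curl -sI -H 'Host: example2.com' http://127.0.0.1/ | head -n 5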

    Read the article

  • Thunderbird alerts when expected email does not arrive

    - by user871199
    I am on Ubuntu 12.04 using Thunderbird as my email client. Both are up to date in terms of updates. I have a bunch of nightly jobs that do the work and send a status mail. It gets tedious if you keep getting the same/similar mails every day, so I ended up writing a mail filter rule which causes the emails to end up in their respective folders automatically. If things are going OK, I really don't need to read the emails. Failure emails are sent to a different alias - if the job runs. We recently discovered that one of the jobs had not run for a few days because someone accidentally disabled it. In order to avoid such problems in future, I would like to set up Thunderbird in such a way that if I don't get email from a given address within a given duration, it should alert me. My dream solution is to be able to set a frequency - some jobs do run every 4 hours. Is this possible? Can I set up Thunderbird (preferred) or another email client to remind me when an expected email does not show up? Based on the comments and answer I received, here are the reasons why I would like to use Thunderbird:

        We are already using Thunderbird. It has calendar support via a plugin, so I suppose someone is already watching time to remind us about events. Maybe this is just another type of event.
        An additional job is one more failure point, and may complicate life if it has to monitor multiple hosts.
        Additional tools - same thing, one more failure point.
        Thunderbird can be run across all the platforms we are using - Windows and Ubuntu. It sort of becomes a platform-independent solution.

    Read the article

  • No digital audio output with Asus Xonar DG

    - by Lunatik
    I've purchased an Asus Xonar DG as a replacement for faulty onboard audio in a Medion 8822, as it has an optical output which is all I really need to feed my HTPC. I uninstalled the previous drivers/devices, switched the PC off, inserted the Asus card, powered up, disabled the onboard audio in the BIOS, then installed the driver that came on the CD (same version as on Asus' website as of today) and everything went perfectly - no errors. I set the audio devices up in Windows and in the Asus utility (SPDIF enabled, 6-ch audio) as I would expect to see them work, but the only thing is I have no digital audio from test tones within Windows/the Asus utility, PCM audio or Dolby Digital from DVD. Analogue audio is fine. I've uninstalled things and reinstalled a couple of times now, as well as trying almost all combinations of analogue/digital outputs, but can't get it sorted. Does anyone have any tips on how to get this working? This card has just been released so there isn't much out there to go on. Notes:

        The light on the toslink port is lit.
        OS is Vista 32-bit SP2 and all up to date, pretty much a fresh install with almost no 3rd party applications installed.
        This page seems to suggest that a digital output device in Windows is not needed with Xonar cards as it was with the previous Realtek, so I have it set to Analog.
        The only other output device is S/PDIF pass-thru.

    Read the article

  • Why is my Network Connections list empty?

    - by DealerNextDoor
    I'm sure this question has been asked before, but none of the links I have found have worked. I've been trying to find fixes for the past couple of weeks. This all started a few days after I got my router. At first, I thought it was just something that would fix itself. But as usual, it never does. I am trying to update my router's wireless card to try and fix this problem, but I need to get the card's information to look up the update on the HP website. And since my Network Connections list is empty, I can't get any information about it. So to get around this, I tried to go to 'Manage Wireless Networks', and when I tried to get the properties from there, I get this error:

        Windows has encountered an error saving the wireless profile. Specific Error: The data is invalid.

    So, what all can I do to try and fix this? Any help will be appreciated.

    EDIT: Sorry, forgot to put the router info.
        Router Model: WNR1000v2-V2
        Router Maker: NETGEAR
        Router Firmware Version: V1.0.0.12NA
    The router is up-to-date on all updates.

    Read the article

  • Windows clients unable to access Samba share on AD joined Linux box every 7 days

    - by Hassle2
    The problem: Every 7 days, 2 Windows servers are unable to access an SMB/CIFS share. It starts working again after a handful of hours.

    The environment:
        OpenFiler Linux box joined to a 2003 AD domain
        A foreground app on a Win2003 server accesses the SMB/CIFS share with Windows credentials
        Another process on Win2008 accesses the share via SQL Server with Windows credentials
        The Samba version on the Linux box is 3.4.5; security is set to ADS
        wbinfo and getent return the expected users and groups
        It does not look to be a double-hop issue, as it's always the 2 accounts, regardless of the calling user
        There is a DNS entry in both the forward and reverse lookup zones for the Linux box
        The Linux box's computer object in Active Directory shows that it was modified around/at the same time that the two clients started failing to access the share
        Accessing the share via IP works when access by name does not
        Rebooting the Windows server takes care of it (it's production and has only been restarted once)
        Restarting smbd, winbind and nmbd had no effect

    Error in the Samba log for the client in question:

        smbd/sesssetup.c:342(reply_spnego_kerberos) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!

    The question: Does this look like the machine account password is changing (hence the AD object showing the updated modified date), or are the two Windows clients unable to request a new ticket that works against this Linux box?
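
    For what it's worth, the 7-day cadence lines up with Samba's default machine password change interval (604800 seconds), but that is only a hypothesis to confirm or rule out. A minimal set of checks on the OpenFiler box, assuming shell access:

        # What machine password change interval is this Samba build using?
        # (the stock default is 604800 seconds, i.e. 7 days)
        testparm -sv 2>/dev/null | grep -i 'machine password timeout'

        # Is the machine account / domain trust still healthy right now?
        net ads testjoin
        wbinfo -t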

    Read the article

  • Munin with postgresql 9.2

    - by jreid9001
    I am trying to set up Munin to collect stats on a server with PostgreSQL 9.1 and 9.2 (the server is currently running 9.1; I have tested on a fresh VM with 9.2 to rule out some weird problem on the running server. I had to patch some of the plugins for 9.2 due to renamed columns (e.g. procpid to pid), but that's no problem). Munin is installed from the EPEL repos, Postgres from the official one. Both up to date. When I try to run munin-node-configure --suggest, I get this output:

        # The following plugins caused errors:
        # postgres_bgwriter:
        #     Junk printed to stderr
        # postgres_cache_:
        #     Junk printed to stderr
        # postgres_checkpoints:
        #     Junk printed to stderr
        # postgres_connections_:
        #     Junk printed to stderr
        # postgres_connections_db:
        #     Junk printed to stderr
        # postgres_locks_:
        #     Junk printed to stderr
        # postgres_querylength_:
        #     Junk printed to stderr
        # postgres_scans_:
        #     Junk printed to stderr
        # postgres_size_:
        #     Junk printed to stderr
        # postgres_transactions_:
        #     Junk printed to stderr
        # postgres_tuples_:
        #     Junk printed to stderr
        # postgres_users:
        #     Junk printed to stderr
        # postgres_xlog:
        #     Junk printed to stderr

    After a lot of searching around, I edited /etc/munin/plugin-conf.d/munin-node and added the following:

        [postgres*]
        user postgres

    This stops munin-node-configure complaining about stderr and lets me add the plugins, but when I telnet to the server on 4949 and try to fetch the stats, I just get "Bad exit". When I run the plugin individually via munin-run (e.g. munin-run postgres_size_ALL), it works completely fine. Looking at /var/log/munin/munin-node.log, this is the output:

        Error output from postgres_size_ALL:
            DBI connect('dbname=template1','',...) failed: could not connect to server: Permission denied
            Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
            at /usr/share/perl5/vendor_perl/Munin/Plugin/Pgsql.pm line 377
        Service 'postgres_size_ALL' exited with status 1/0.

    I am now out of ideas... the socket definitely exists, and pg_hba.conf is set to allow all users/databases from localhost with trust.
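
    One possibility worth checking on an EPEL/Red Hat-family host is SELinux: munin-run from a root shell and the munin-node service can run in different contexts, which would explain the plugin working one way and not the other. These checks are only a sketch of that hypothesis and assume the audit tools are installed:

        # Does the failure coincide with an SELinux denial?
        getenforce
        ausearch -m avc -ts recent 2>/dev/null | grep -i postgres

        # Reproduce the plugin the way munin-node runs it, i.e. through the
        # node itself rather than munin-run.
        echo -e "fetch postgres_size_ALL\nquit" | nc 127.0.0.1 4949

        # Temporarily setting SELinux permissive narrows it down: if the plugin
        # then works via the node, it's a policy issue, not pg_hba.conf.
        setenforce 0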

    Read the article

  • Serving only certain files from a directory to users on IIS7

    - by HarbingTarbl
    I have a need to show the most up-to-date version of a certain file in a directory to users who access a folder on my site (let's call this folder logs). I can't just move the file into the folder, as another process relies on being able to find and edit this file while it is running. At first I thought I could just create a folder on my site, give it the correct permissions and then create a symbolic link to the file. However, it seems IIS7 does not follow symlinks. Another solution would be to create a PHP script that pulls the correct file and displays it, but that felt like over-engineering the solution. I know that on Apache this would be simple, but I can't figure out how to do it with IIS7. To give an idea of the folder structure I'm working with, the directory looks like this:

        Root
        --File I need to serve.
        --File containing plain text passwords.
        --Other folders/files.

    I can't move any of these files. If I just serve the entire directory using Virtual Directories in IIS, I'll also be sharing files and folders containing configuration and other sensitive information.

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT a corrupt definitions server. It does contain directories named with a date pattern YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK with a service restart; however, there is a warning about having Live Update Administrator installed. Firstly, I have no idea if I have this installed - how do I check? And secondly, can I just ditch these old files and restart?

    Regards, Gus Denton
    Learning and Teaching, Uni of New South Wales, Sydney, Australia

    For those trying to assist me, I thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands from the Run prompt as my admin creds won't get me in (smc -stop). Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the Live Update control panel and recovered 470MB. I suppose my last question for those more experienced than myself is: can I simply remove, say, the two oldest virus definition folders without completely foobaring the Endpoint Protection and the server?

    Regards, Gus

    Read the article

  • Using ZFS or XFS on a Xen guest running Linux

    - by zoot
    Background: I'm investigating the viability of using a filesystem other than ext3/4, with the ability to run snapshots for backup and rollback purposes. The servers under consideration are mailbox server nodes running on Linode's Xen-based VPS platform. I'm particularly drawn to the various published benefits which ZFS offers in terms of data integrity, and this year's stable release of native ZFS support in Linux - http://zfsonlinux.org. ZFS appears to be the more thorough option in terms of benefits and simplicity (instead of LVM+XFS). Please note that I have little experience with ZFS (which I use on a local FreeNAS installation) and none with XFS, hence the post. To date, my servers are using ext3 filesystems, not managed under LVM.

    Question in detail: So, I have two questions.

    (1) Which of the two filesystems would be the better choice for the best of all of the following 3 aspects, running on a Xen Linux guest?
        Snapshots
        Data integrity
        Performance

    (2) If ZFS is a viable option, is it practical to use ZRAID across Xen disk images to further enhance the solution for data integrity?

    Note: I'm reluctant to consider btrfs, given the many warnings I've read about using it on production systems.
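
    Since snapshots are the main draw here, a rough sketch of the ZFS snapshot/rollback workflow the question is weighing (pool and dataset names are hypothetical):

        # Create a dataset for mail storage on an existing pool named "tank".
        zfs create tank/mail

        # Take a point-in-time snapshot; snapshots are cheap and read-only.
        zfs snapshot tank/mail@2012-06-01

        # List snapshots and roll back to one if needed.
        zfs list -t snapshot
        zfs rollback tank/mail@2012-06-01

        # Snapshots can also be shipped elsewhere for backup.
        zfs send tank/mail@2012-06-01 | ssh backuphost zfs receive backup/mail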

    Read the article

  • Leopard mail.app quoted-printable weirdness

    - by pehrs
    I am not sure if this is a bug in mail.app or a configuration I just can't find. It might also be a strange side effect of GPGMail. Mail.app correctly displays all e-mails on my IMAP server, except for the e-mails in my "Sent Messages" folder. In the sent messages folder it messes up åäö, in typical quoted-printable-with-wrong-charset fashion. They become ‰ˆ. When looking at the source of the e-mails, it seems like the header generated by mail.app is correct:

        Message-Id: <>
        From:
        To:
        In-Reply-To: <>
        Content-Type: multipart/signed; protocol="application/pgp-signature"; micalg=pgp-sha1; boundary="Apple-Mail-4--741321197"
        X-Smtp-Server: smtp.example.com
        Content-Transfer-Encoding: 7bit
        Mime-Version: 1.0 (Apple Message framework v936)
        Subject: Example subject
        Date: Fri, 26 Mar 2010 10:14:14 +0100
        References: <>
        X-Pgp-Agent: GPGMail 1.2.0 (v56)

        This is an OpenPGP/MIME signed message (RFC 2440 and 3156)
        --Apple-Mail-4--741321197
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed; delsp=yes
        Content-Transfer-Encoding: quoted-printable

        <Text here with =E5=E4=F6>

        --Apple-Mail-4--741321197
        content-type: application/pgp-signature; x-mac-type=70674453; name=PGP.sig
        content-description: This is a digitally signed message part
        content-disposition: inline; filename=PGP.sig
        content-transfer-encoding: 7bit

        -----BEGIN PGP SIGNATURE-----
        Version: GnuPG/MacGPG2 v2.0.12 (Darwin)

        iEYEARECAAYFAkus62kACgkQlIRLofxhDjYnnwCcDmCXuMGsKlh3a418s12coJgn
        36sAoKMdkP3+g/OMK+Ps7AbjQq4Nbqzv
        =XMko
        -----END PGP SIGNATURE-----
        --Apple-Mail-4--741321197--

    Thunderbird has no problem displaying the messages. So, how can I get mail.app to use the correct charset?
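
    A quick sanity check from Terminal, assuming perl and iconv are available (both ship with OS X): decode the quoted-printable bytes and interpret them as ISO-8859-1, the charset the part declares. If the output is "åäö", the stored message is fine and only the display-side charset handling is off.

        printf '=E5=E4=F6\n' \
          | perl -MMIME::QuotedPrint -pe '$_ = decode_qp($_)' \
          | iconv -f ISO-8859-1 -t UTF-8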

    Read the article

  • Bash script to run a clamscan on Ubuntu- how to use return values properly?

    - by Marius
    I'm trying to put together a simple script that will scan my home directory with clamscan and give me a warning if any viruses were found. What I have so far is:

        #! /usr/bin/env bash
        clamscan -l ~/.ClamScan/$(date +"%a%b%d") -ir /home
        RETVAL=$?
        [ $RETVAL -eq 0 ] && notify-send 'clamscan finished. No viruses found'
        [ $RETVAL -eq 1 ] && notify-send 'clamscan found a virus' && touch ~/Desktop/VirusFound
        [ $RETVAL -eq 2 ] && notify-send 'clamscan encountered errors. Check the logs' && touch ~/Desktop/ClamscanError
        find ~/.ClamScan/* -mtime +7 -exec rm {} \;

    However, I'm unsure about a couple of things:

        I'm always wary of using rm - as far as I can tell, the find command I've got should be deleting any log files that are more than a week old.
        I'm also not entirely sure how the return value testing works. I've got a manual that briefly covers bash, which says that the meaning of $? is "match one character", and I'm not entirely sure how that grabs the return value. Should I be using -eq or = for testing the return value? From what I can tell -eq tests strings and = tests numerals, but I'm not sure what the type of the return value is.
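
    As an aside on the two operators: inside [ ], = is a string comparison and -eq is an integer comparison, and $? always holds the previous command's exit status (the manual's "match one character" definition refers to ? as a filename glob, not to the $? parameter). A small sketch of the difference:

        #!/usr/bin/env bash
        false                      # exits with status 1
        status=$?                  # $? is the exit status of the last command

        # Integer comparison: -eq treats both sides as numbers.
        [ "$status" -eq 1 ] && echo "-eq: matched 1"

        # String comparison: = compares characters, so "01" and "1" differ
        # even though they are numerically equal.
        [ "01" = "1" ]  || echo "=: '01' and '1' are different strings"
        [ "01" -eq 1 ]  && echo "-eq: '01' and '1' are numerically equal"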

    Read the article

  • VMware Data Recovery error -3960 and Event ID 8193 on Windows Server 2003

    - by flooooo
    I've been trying to solve this problem for a few days now without any success. What I'm trying to do is make a backup of a virtual machine running Windows Server 2003 SP2 using VMware Data Recovery 2.0.0.1861. When starting the backup task it tries to make a snapshot of the virtual machine using VSS, which fails with this error:

        Event Type:     Error
        Event Source:   VSS
        Event Category: None
        Event ID:       8193
        Date:           05.06.2012
        Time:           12:12:01
        User:           N/A
        Computer:       LEGOLAS
        Description:
        Volume Shadow Copy Service error: Unexpected error calling routine RegSaveKeyExW. hr = 0x800703f8.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 2d 20 43 6f 64 65 3a 20   - Code:
        0008: 57 52 54 52 45 47 52 43   WRTREGRC
        0010: 30 30 30 30 30 33 39 36   00000396
        0018: 2d 20 43 61 6c 6c 3a 20   - Call:
        0020: 57 52 54 52 45 47 52 43   WRTREGRC
        0028: 30 30 30 30 30 33 31 38   00000318
        0030: 2d 20 50 49 44 3a 20 20   - PID:
        0038: 30 30 30 30 36 34 38 38   00006488
        0040: 2d 20 54 49 44 3a 20 20   - TID:
        0048: 30 30 30 30 34 33 38 34   00004384
        0050: 2d 20 43 4d 44 3a 20 20   - CMD:
        0058: 43 3a 5c 57 49 4e 44 4f   C:\WINDO
        0060: 57 53 5c 53 79 73 74 65   WS\Syste
        0068: 6d 33 32 5c 76 73 73 76   m32\vssv
        0070: 63 2e 65 78 65 20 20 20   c.exe
        0078: 2d 20 55 73 65 72 3a 20   - User:
        0080: 4e 54 20 41 55 54 48 4f   NT AUTHO
        0088: 52 49 54 59 5c 53 59 53   RITY\SYS
        0090: 54 45 4d 20 20 20 20 20   TEM
        0098: 2d 20 53 69 64 3a 20 20   - Sid:
        00a0: 53 2d 31 2d 35 2d 31 38   S-1-5-18

    This machine was converted p2v. I have no idea where to search for the problem or what to do. Google showed a few results but none of them were useful to me. Please help me. If you need further information I'll tell you - just ask!

    Read the article

  • What could cause a WMV to not play to completion in a browser?

    - by Ty W
    A realtor has had videos created for a community she is selling homes for; the people who made the videos gave them to us in WMV format. I can play these videos without any problem in Windows Media Player, VLC, and QuickTime (via Flip4Mac). I can play the videos from their location at videohomeguide.com in my browser without any trouble. However, when I upload the files to our server the video stops at about the 1-minute mark in Safari and Firefox on Mac OS X Snow Leopard. I'm not sure if Windows browsers have the same issue because they are loaded using Windows Media Player.

    http://carolepaul.com/images/uploads/cottageslsjamestown.wmv <- our server, will fail at 1:09ish.
    http://www.videohomeguide.com/media/cottageslsjamestown.wmv <- should play to completion (3:27ish)

    The files generate the same MD5 hash on my desktop and on our server. I used wget to transfer the files, always downloading from videohomeguide.com. Since the files are identical and are playable using VLC/WMP/QuickTime, and playable in the browsers from videohomeguide.com, it seems to me that it is some sort of server config... maybe incorrect headers sent to the browsers? Here are the headers sent and received by Firefox on OS X for http://carolepaul.com/images/uploads/cottageslsjamestown.wmv:

        GET /images/uploads/cottageslsjamestown.wmv HTTP/1.1
        Host: carolepaul.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Mon, 29 Mar 2010 20:43:20 GMT
        Server: Apache/1.3.41 (Unix) PHP/5.2.6 FrontPage/5.0.2.2635 mod_psoft_traffic/0.2 mod_ssl/2.8.31 OpenSSL/0.9.8b
        Last-Modified: Wed, 02 Dec 2009 18:08:46 GMT
        Etag: "1e7919c-198eadc-4b16ad2e"
        Accept-Ranges: bytes
        Content-Length: 26798812
        Keep-Alive: timeout=10, max=200
        Connection: Keep-Alive
        Content-Type: video/x-ms-wmv
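
    One way to narrow down a server-side difference is to compare the response headers from the working host and the failing host side by side, and to check that partial (range) requests behave the same on both, since browsers lean on them for long media. Both URLs are taken from the question; this sketch only compares behaviour, it doesn't identify the cause:

        # Full response headers from each host, for a side-by-side diff.
        curl -sI http://carolepaul.com/images/uploads/cottageslsjamestown.wmv > ours.txt
        curl -sI http://www.videohomeguide.com/media/cottageslsjamestown.wmv > theirs.txt
        diff ours.txt theirs.txt

        # Ask each server for a byte range from the middle of the file; a host
        # that honours ranges answers "206 Partial Content".
        curl -sI -r 1000000-1000100 http://carolepaul.com/images/uploads/cottageslsjamestown.wmv | head -n 1
        curl -sI -r 1000000-1000100 http://www.videohomeguide.com/media/cottageslsjamestown.wmv | head -n 1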

    Read the article

  • Where to get working Sysinternals tools for Windows 2000?

    - by mihi
    Yes, I know Windows 2000 will reach end-of-life this year, but we still have a lot of Windows 2000 boxes we are trying to migrate, and no idea if all of them can be migrated this year... Recently I downloaded a new Sysinternals Suite (most recent file date 2010-03-25) and noticed that some tools just do not work on Windows 2000 any longer, which makes troubleshooting a lot harder. I checked all the tools in the suite to see which ones do not work, and dug through to find older versions that do work, but I don't know if there are more recent ones (with fewer bugs) available. I did not find any way to download old versions from the Sysinternals website. :-(

    So here is my list:

        Tool             Does not work            Works
        ADExplorer.exe   1.30.0.0                 ?
        Coreinfo.exe     2.00                     ?
        disk2vhd.exe     1.5.0.0                  ?
        livekd.exe       3.14                     3.0
        procdump.exe     1.72                     ?
        Procmon.exe      2.8 (frequent crashes)   Filemon/Regmon 7.04
        ShellRunas.exe   1.01                     ?
        vmmap.exe        2.62                     2.2
        ZoomIt.exe       4.1                      1.21

    If you know of any more recent versions (preferably with download links) that work on Windows 2000, or an official download link for older versions, it would be highly appreciated.

    Read the article

  • Resolving a BSOD/CPU/GPU issue...

    - by Christian Sciberras
    Hello all, I'm getting a BSOD / system crash (sometimes the PC just quits without a BSOD).

    Hardware specifications:
        CPU:  i7 920 2666MHz / 8 cores (not OCed afaik)
        Mobo: Asus P6T SE
        RAM:  2x Corsair CM3X2G1333C9 (64-bit DDR3 667MHz)
        GFX:  ATI Radeon HD 5970 1GB (XFX HD5970 BE)
        OS:   Windows 7 Ultimate 64-bit (legit)
    All BIOS, firmware and drivers are up to date (as of today).

    Symptoms: Sometimes the PC runs smoothly, sometimes I get this BSOD. The BSOD always happens when I'm doing something related to graphics, such as viewing a video or playing a game. I get to know about the imminent BSOD ~10 seconds earlier; the PC starts freezing occasionally, increasing in frequency and length of lag (I noticed processor usage increased, from Process Monitor). I've tweaked BIOS settings occasionally, but afaik it was in vain. A day or so ago, I reset them to factory settings.

    BSOD contents:

        The computer has rebooted from a bugcheck. The bugcheck was: 0x00000101 (0x0000000000000019, 0x0000000000000000, 0xfffff88001f35180, 0x0000000000000004).

        15-12-2010
        A fatal hardware error has occurred.
        Reported by component: Processor Core
        Error Source: Machine Check Exception
        Error Type: Internal Timer Error
        Processor ID: 4

        23-12-2010
        A fatal hardware error has occurred.
        Reported by component: Processor Core
        Error Source: Machine Check Exception
        Error Type: Internal Timer Error
        Processor ID: 2

    Important: The interesting thing is that although the event log (and BSOD screen) blame a "secondary processor", Windows Action Center sometimes blamed the GFX driver (for the same error). It is also interesting to note that after hibernating my PC, I always get the BSOD.

    Read the article

  • Update git on mac

    - by Meltemi
    I can't remember how I installed git a while back... but now it's living in /usr/bin/git and needs to be updated. I don't care how (pre-compiled or build my own), but what I don't want is another version existing somewhere else. I vaguely remember curl(ing) down the source and compiling it, but I'm not positive. Anyway, what's the easiest way to keep Git up to date under Mac OS X?

    Side question: I'm not that familiar with git. Once it's installed, is it ENTIRELY contained within its directory? So, in my case, is everything about git on my machine (excluding the actual code repositories of course) in /usr/bin/git/? If so, can I just move git around with a simple mv -R /usr/bin/git /opt/git, then update my $PATH, and everything should work as before? If so, then I suppose I could just install again by any method and to any directory... and then move the new one into /usr/bin, replacing the old version?!? Or is this bad?
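
    Before moving anything, it's worth checking how the existing copy is laid out: a git install is normally a binary on $PATH plus a separate directory of helper programs, not a single self-contained folder. A quick look, assuming a stock install:

        # Which binary is first on $PATH, and what version is it?
        which git
        git --version

        # Where do the helper programs (git-core) and the documentation live?
        # These are usually outside /usr/bin, e.g. /usr/libexec/git-core.
        git --exec-path
        git --html-path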

    Read the article

  • What's wrong with this HTTP POST request?

    - by bigboy
    I'm trying to fuzz a server using the Sulley fuzzing framework. I observe the following stream in Wireshark. The error talks about a problem with JSON parsing; however, when I try the same HTTP POST request using Google Chrome's Postman extension, it succeeds. Can anyone please explain what could be wrong with this HTTP POST request? The JSON seems valid.

        POST /restconf/config HTTP/1.1
        Host: 127.0.0.1:8080
        Accept: */*
        Content-Type: application/yang.data+json

        { "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq", "toaster:toasterModelNumber" : "asaxc", "toaster:toasterStatus" : "_." }}

        HTTP/1.1 400 Bad Request
        Server: Apache-Coyote/1.1
        Content-Type: */*
        Transfer-Encoding: chunked
        Date: Sat, 07 Jun 2014 05:26:35 GMT
        Connection: close

        152
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <errors xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
          <error>
            <error-type>protocol</error-type>
            <error-tag>malformed-message</error-tag>
            <error-message>Error parsing input: Root element of Json has to be Object</error-message>
          </error>
        </errors>
        0
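
    One difference that stands out in the captured request is that it carries no Content-Length (or Transfer-Encoding) header, so the server may be reading an empty body and then failing to parse it as JSON; clients like Postman and curl add Content-Length automatically. A way to reproduce both variants from the shell (URL and payload taken from the capture, trimmed for brevity):

        # curl adds Content-Length itself, mimicking the working Postman request.
        curl -v -X POST 'http://127.0.0.1:8080/restconf/config' \
          -H 'Content-Type: application/yang.data+json' \
          -d '{ "toaster:toaster" : { "toaster:toasterManufacturer" : "Geqq" }}'

        # Hand-built request without Content-Length, mimicking the fuzzer's stream.
        printf 'POST /restconf/config HTTP/1.1\r\nHost: 127.0.0.1:8080\r\nAccept: */*\r\nContent-Type: application/yang.data+json\r\n\r\n{ "toaster:toaster" : {} }\r\n' \
          | nc 127.0.0.1 8080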

    Read the article

  • Cisco 7206 error trying to copy running-config (Bad file number)

    - by jasondewitt
    I have a Cisco 7206 that terminates a bunch of PPPoA sessions for DSL users. Today I noticed that if I try to "show run", nothing happens. I mean that it doesn't show anything and just sends me right back to the command prompt. I decided I should probably try to back up the config, and that is where I'm stuck. Any time I try to copy the running-config to tftp or to a PCMCIA card that I know is not full, I get the following error:

        %Error opening system:/running-config (Bad file number)

    I get this error when I try to do anything with the running config. I've been googling around, but I haven't found anything else that talks about this error. I've seen people say to erase the nvram and then try to "copy run start", but I don't want to erase the nvram until I can pull off a copy of the running-config. I would try to reboot it, but the startup-config that is on the nvram looks to be woefully out of date (good job me!). Any ideas what might be wrong, or how I can get the running config off the router?

    Read the article

  • How to stop Cron from sending messages about errors

    - by Beck
    I am getting these strange mails from cron:

        Return-Path: <[email protected]>
        Delivered-To: [email protected]
        Received: by domain.com (Postfix, from userid 0)
            id 6F944264D0; Mon, 10 Jan 2011 10:35:01 +0000 (UTC)
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <root@domain> lynx -dump http://www.domain.com/cron/realqueue
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <[email protected]>
        Date: Mon, 10 Jan 2011 10:35:01 +0000 (UTC)

        /bin/sh: lynx: not found

    I have these cron settings in the crontab file:

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        */5 * * * * lynx -dump http://www.domain.com/cron/realqueue
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

    Lynx is installed on my Ubuntu machine as well. Of course, in place of domain.com is my actual domain, just replaced. Thanks ;)
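
    Cron mails whatever a job writes to its output, and the message above carries an actual error ("lynx: not found"), so the usual options are to make the job succeed or to explicitly discard its output rather than suppress cron mail globally. A sketch of the standard approaches, assuming lynx really is at /usr/bin/lynx on this box (check with which first):

        # Find where lynx actually lives for the cron environment.
        which lynx

        # Option 1: call the binary by full path and silence normal output,
        # so mail is only sent if something is written to stderr.
        */5 * * * * /usr/bin/lynx -dump http://www.domain.com/cron/realqueue >/dev/null

        # Option 2: discard stderr too, which stops the mails entirely.
        */5 * * * * /usr/bin/lynx -dump http://www.domain.com/cron/realqueue >/dev/null 2>&1

        # Option 3: set at the top of the crontab to disable mail for every job.
        MAILTO=""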

    Read the article

  • SQL Server Agent refuses to start

    - by Geo Ego
    I'm having a problem with SQL Server 2005 where the SQL Server Agent suddenly refuses to start. If I attempt to start it through Services, I get the error "SQL Server Agent (MSSQLSERVER) service on Local Computer started and then stopped." In the Application log, I have the following entry:

        Event Type: Error
        Event Source: SQLSERVERAGENT
        Event Category: Service Control
        Event ID: 103
        Date: 5/20/2010
        Time: 11:07:07 AM
        User: N/A
        Computer: SHAREPOINT
        Description: SQLServerAgent could not be started (reason: Unable to connect to server 'SHAREPOINT'; SQLServerAgent cannot start).

    This database has been running fine for four months. It contains a SharePoint configuration database, which two days ago stopped working, throwing me a message that the configuration database cannot be reached. It was then that I realized the SQL Server Agent was not running, and I have been unable to restart it. I have tried running it with both the local system account and the network service account, with the same results. So far, I have tried:

        Granting the administrators group, network service, and SharePoint SQL Server Agent account public and sysadmin roles on the database.
        Granting the administrators group, network service, and SharePoint SQL Server Agent account full permissions to the entire MSSQL directory and all files within.

    I still have no joy.

    Read the article
