Search Results

Search found 27016 results for 'entry point'.


  • server 2008 r2 - wbadmin systemstatebackup - system writer not found in the backup

    - by TWood
    I am trying to manually run a systemstatebackup command on my Server 2008 R2 box and I am getting error code 2155347997 when I view the backup event log details. The command line tells me that log files are written to the c:\windows\logs\windowsserverbackup\ path, but I have no .log files there. My command window tells me "System Writer is not found in the backup." However, when I run vssadmin list writers I find System Writer in the list, and it shows normal status with no last errors stored. I am running this from an elevated command prompt as well as from a logged-on administrator account. My backup target path gives Network Service full control and has plenty of free space.

    Looking in the event log, I have two VSS error 8194 entries that happen immediately before the Backup error 517, which carries error code 2155347997. All three of these errors are the result of trying to run the systemstatebackup command. It's my belief that some VSS-related permission is failing and exiting the backup process before it ever gets started; because of this, the initial code that creates the log files never runs, which is why I have no files. When running the systemstatebackup command from the command prompt and watching the windowsserverbackup directory, I do see that a Wbadmin.0.etl file gets created, but it is deleted when the backup errors out and stops.

    I have looked online and there are numerous opinions as to the cause of this error. These are the things I have corrected to try to fix this issue before posting here:

    - The machine runs an HP 1410i Smart Array controller but at one time also used an LSI SCSI card. Using networkadminkb.com's KB a467, I found one LSI_SCSI entry in HKLM\SYSTEM\CurrentControlSet\Services whose Start value was set to 0x0, which I modified to 0x3. No changes.
    - In HKLM\SYSTEM\CurrentControlSet\Services\VSS\Diag I gave Network Service full control where it previously had only "Special Permission". No changes.
    - I followed KB2009272 to manually try to fix the System Writer.

    These are all of the things I have tried. What else should I look at to resolve this issue? It may be important to note that I run Mozy Pro on this server; it was known in the past to use VSS for copying operations and occasionally threw an error, but since an update last year those error event log entries have stopped.
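    A low-risk check worth adding to that list (a hedged suggestion, not a confirmed fix for this exact error code): the System Writer is hosted by the Cryptographic Services service, so restarting that service and re-checking the writer before retrying the backup sometimes clears a writer that looks healthy in vssadmin but fails at backup time. From the same elevated prompt:

        net stop cryptsvc
        net start cryptsvc
        vssadmin list writers
        wbadmin start systemstatebackup -backupTarget:D:

    The backup target letter here is a placeholder; substitute the target path the failing command already uses.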


  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick rundown of what I'm doing. I'm trying to mount a FUSE drive on an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon). s3fs is compiled from source according to the instructions, etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like:

        # /etc/fstab: static file system information.
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc             /proc            proc  nodev,noexec,nosuid 0 0
        LABEL=uec-rootfs /                ext4  defaults 0 0
        /dev/sda2        /mnt             auto  defaults,nobootwait,comment=cloudconfig 0 2
        /dev/sda3        none             swap  sw,comment=cloudconfig 0 0
        s3fs#mybucket    /mnt/s3/mybucket fuse  default_acl=public-read,use_cache=/tmp,allow_other 0 0

    So the good news is that this works fine. On reboot the connection mounts correctly. I can also do:

        $ sudo umount /mnt/s3/mybucket
        $ sudo mount -a
        $ mountpoint /mnt/s3/mybucket
        /mnt/s3/mybucket is a mountpoint

    Great, right? Well, here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process:

        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    I isolated the problem and built a Fabric task that reproduces it:

        def remount_s3fs():
            sudo("mount -a")

    Which does:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a

    (And yes, I was sure to unmount it before running this task.) When I check the mount using mountpoint I get:

        $ mountpoint /mnt/s3/mybucket
        mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected

    But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode:

        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs'
        [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a"

    Again, I get that transport-endpoint-not-connected error. I've also tried copying and pasting the exact command into my ssh session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So... that's my problem. Any ideas?
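    For what it's worth, this "works over plain ssh, dies under Fabric" pattern is often caused by Fabric allocating a pseudo-terminal: when the remote command returns, the pty is torn down and the still-running s3fs daemon it spawned catches a SIGHUP, leaving the mount in exactly this "Transport endpoint is not connected" state. A hedged sketch of the usual workaround, assuming the Fabric 1.x API used in the task above:

        from fabric.api import sudo

        def remount_s3fs():
            # pty=False stops Fabric from attaching the command to a throwaway
            # terminal whose teardown would HUP the backgrounded s3fs process
            sudo("mount -a", pty=False)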


  • Hybrid gmail MX + postfix for local accounts

    - by krunk
    Here's the setup: we have a domain, mydomain.com. Everything is on our own server, except general email accounts, which are through Gmail. Currently Gmail is set as the MX record. The server also has various email aliases it needs to support for bug trackers and such, e.g.:

        [email protected] |/path/to/issuetracker.script

    I'm struggling with a setup that allows the following, both locally and from users' email clients:

        guser1 - has a gmail account and a local account
        guser2 - only has a gmail account
        bugs   - has a pipe alias in /etc/aliases for the issue tracker

    Scenarios:

    - mail to [email protected] from the local host (crons and such) needs to go to the gmail account
    - mail to [email protected] from the local host
    - mail to [email protected] needs to be piped to the local issue tracker script

    So the first stab was creating a transport map. In this scenario, our server would be set as the MX and guser*-destined emails are sent to Gmail. Put the gmail users in a map like so:

        [email protected] smtp:gmailsmtp:25
        [email protected] smtp:gmailsmtp:25

    Problems:

    - Ignores extensions such as [email protected].
    - Only works if append_at_myorigin = no. If set to yes, Gmail refuses to connect with:

        E4C7E3E09BA3: to=, relay=none, delay=0.05, delays=0.02/0.01/0.02/0, dsn=4.4.1, status=deferred (connect to gmail-smtp-in.l.google.com[209.85.222.57]:25: Connection refused)

    - Since append_at_myorigin is set to no, all received emails have (unknown sender).

    The second stab was to set explicit localhost aliases in /etc/aliases and do a domain-wide forward on mydomain. This too requires setting the local server as the MX:

        root: root@localhost

        # transport
        mydomain.com smtp:gmailsmtp:25

    Problems:

    - If I create a transport map for a domain that matches "$myhostname", the aliases file is never parsed. So when a local user (or daemon) sends an email like `mail -s "testing" root < text.txt`, Postfix ignores the /etc/aliases entry, maps it to [email protected], and attempts to send it to the gmail transport mapping.

    Third stab: create a subdomain for the bugs, something like bugs.mydomain.com. Set the MX for this domain to the local server and leave the MX for mydomain.com on the Gmail server.

    Problems:

    - Does not solve the issue with local accounts. So when the bug tracker responds to an email from [email protected], it uses a local transport and the user never receives the email.
    % postconf -n
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases
    append_at_myorigin = no
    append_dot_mydomain = no
    biff = no
    config_directory = /etc/postfix
    inet_interfaces = all
    mailbox_command = procmail -a "$EXTENSION"
    mailbox_size_limit = 0
    mydestination = $myhostname, localhost.$myhostname, localhost
    myhostname = mydomain.com
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    myorigin = /etc/mailname
    readme_directory = no
    recipient_delimiter = +
    relayhost =
    smtp_tls_cert_file = /etc/ssl/certs/kspace.pem
    smtp_tls_enforce_peername = no
    smtp_tls_key_file = /etc/ssl/certs/kspace.pem
    smtp_tls_note_starttls_offer = yes
    smtp_tls_scert_verifydepth = 5
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_use_tls = yes
    smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
    smtpd_recipient_restrictions = permit_mynetworks, reject_invalid_hostname, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_destination
    smtpd_tls_ask_ccert = yes
    smtpd_tls_req_ccert = no
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    tls_random_source = dev:/dev/urandom
    transport_maps = hash:/etc/postfix/transport
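    A hedged sketch of one way around the "$myhostname transport map defeats /etc/aliases" problem: virtual alias rewriting happens before Postfix consults transport_maps, so the pipe addresses can be rewritten to localhost while the rest of the domain still relays to Gmail. File names follow Postfix conventions, the Gmail host is the one from the log line above, and none of this is tested against this exact setup:

        # main.cf (fragment)
        virtual_alias_maps = hash:/etc/postfix/virtual
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/virtual -- the rewrite wins before the transport lookup
        [email protected]   bugs@localhost

        # /etc/postfix/transport
        mydomain.com   smtp:gmail-smtp-in.l.google.com:25

        # /etc/aliases (run newaliases afterwards)
        bugs: "|/path/to/issuetracker.script"

    Since localhost is already in mydestination, the rewritten bugs@localhost is delivered locally through /etc/aliases, and every other mydomain.com recipient follows the transport map to Gmail.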


  • Setting font size of Closed Captions on iPhone using ffmpeg or mencoder

    - by forthrin
    Does anyone know how to either:

    - make ffmpeg set the subtitle font size in the output video file, or
    - make mencoder produce an iPhone-compatible video file (with subtitles)?

    I finally found out how to get Closed Captions video on the iPhone, with mkv and srt files as source material. The secret was using the mov_text subtitle codec in ffmpeg (and turning on Closed Captions in the iPhone settings, of course):

        ffmpeg -y -i in.mkv -i in.srt -map 0:0 -map 0:1 -map 1:0 -vcodec copy -acodec aac -ab 256k -scodec mov_text -strict -2 -metadata title="Title" -metadata:s:s:0 language=eng out.mp4

    However, the font size appears very small on the iPhone, and I can't find out how to set it with ffmpeg (the iPhone has no option for this). I found out that mencoder has a -subfont-text-scale option, but I don't have a lot of experience with this program. The following, my best attempt so far, produces an output file which is not playable on the iPhone:

        sudo port install mplayer +mencoder_extras +osd
        mencoder in.mkv -sub in.srt -o out.mp4 -ovc copy -oac faac -faacopts br=256:mpeg=4:object=2 -channels 2 -srate 48000 -subfont-text-scale 10 -of lavf -lavfopts format=mp4

    PS! As requested, here is the output from mencoder:

        192 audio & 400 video codecs
        success: format: 0 data: 0x0 - 0xb64b9d2f
        libavformat version 54.6.101 (internal)
        libavformat file format detected.
        [matroska,webm @ 0x1015c9a50]Unknown entry 0x80
        [lavf] stream 0: video (h264), -vid 0
        [lavf] stream 1: audio (ac3), -aid 0, -alang eng
        VIDEO: [H264] 1280x544 0bpp 49.894 fps 0.0 kbps ( 0.0 kbyte/s)
        [V] filefmt:44 fourcc:0x34363248 size:1280x544 fps:49.894 ftime:=0.0200
        ==========================================================================
        Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
        libavcodec version 54.23.100 (internal)
        AUDIO: 48000 Hz, 2 ch, s16le, 448.0 kbit/29.17% (ratio: 56000->192000)
        Selected audio codec: [ffac3] afm: ffmpeg (FFmpeg AC-3)
        ==========================================================================
        ** MUXER_LAVF *****************************************************************
        REMEMBER: MEncoder's libavformat muxing is presently broken and can generate
        INCORRECT files in the presence of B-frames. Moreover, due to bugs MPlayer
        will play these INCORRECT files as if nothing were wrong!
        *******************************************************************************
        OK, exit.
        videocodec: framecopy (1280x544 0bpp fourcc=34363248)
        VIDEO CODEC ID: 28
        AUDIO CODEC ID: 15002, TAG: 0
        Writing header...
        [mp4 @ 0x1015c9a50]Codec for stream 0 does not use global headers but container format requires global headers
        [mp4 @ 0x1015c9a50]Codec for stream 1 does not use global headers but container format requires global headers

    Then the following repeats itself for every frame:

        Pos: 0.0s 1f ( 2%) 0.00fps Trem: 0min 0mb A-V:0.000 [0:0]
        [mp4 @ 0x1015c9a50]malformated aac bitstream, use -absf aac_adtstoasc
        Error while writing frame.

    I recognize -absf aac_adtstoasc as an ffmpeg option (does mencoder spawn ffmpeg?), but I don't know how to pass this option on (my hunch is this is not even the origin of the problem).
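    One workaround that sidesteps mov_text's lack of a font-size knob (a hedged sketch: it trades the toggleable CC track for subtitles burned into the picture, and it assumes an ffmpeg build with libass support): convert the SRT to ASS, raise the Fontsize field in the generated style line, and re-encode with the subtitles filter:

        # 1. convert; ffmpeg writes a default [V4+ Styles] block into in.ass
        ffmpeg -i in.srt in.ass
        # 2. edit in.ass by hand and raise the Fontsize value in the "Style:" line
        # 3. burn the styled subtitles in while producing the iPhone mp4
        ffmpeg -y -i in.mkv -vf ass=in.ass -vcodec libx264 -acodec aac -ab 256k -strict -2 out.mp4

    Note that -vcodec copy from the working command can't be kept here, since burning subtitles requires re-encoding the video.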


  • Problem with USB drivers (Windows-XP)

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port CardBus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager: two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller."

    With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers and .inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got the message "The specified location does not contain information about your hardware."

    I then selected 'update driver' on the "Standard Enhanced PCI to USB..." entry and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers." I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (with drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that CardBus card with the new drivers. I don't think the above looks right: a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB mass storage device also still in the first set. Any help appreciated. (Windows XP Pro SP3 with all current updates.)


  • Can't connect to svnserve on localhost - connection actively refused

    - by RMorrisey
    When I try to connect using TortoiseSVN to my SVN server using svn://localhost/, Tortoise tells me: "Can't connect to host 'localhost'. No connection could be made because the target machine actively refused it." How can I fix this?

    I am trying to set up a Subversion server on my local PC for personal use. I am running Windows Vista, with SlikSVN and TortoiseSVN installed. I previously had everything working correctly, but I found that I couldn't merge(!), apparently due to a version mismatch between the SVN client and server. Anyway... I now have the following setup.

    I created a repository using svnadmin create; it resides at C:\svnGrove. C:\svnGrove\conf\svnserve.conf (# comments omitted):

        [general]
        anon-access=read
        auth-access=write
        password-db=passwd
        #authz-db=authz
        realm=svnGrove

    C:\svnGrove\conf\passwd:

        [users]
        myname=mypass

    My Subversion Server service is pointed to:

        C:\Program Files\SlikSvn\bin\svnserve.exe --service -r C:\svnGrove

    It shows the TCP/IP service as a dependency. I have also tried running svnserve from the command line, with similar results.

    The below is provided by the 'about' option in TortoiseSVN:

        TortoiseSVN 1.6.10, Build 19898 - 32 Bit , 2010/07/16 15:46:08
        Subversion 1.6.12, apr 1.3.8 apr-utils 1.3.9 neon 0.29.3 OpenSSL 0.9.8o 01 Jun 2010 zlib 1.2.3

    The following is from svn --version on the command line (not sure why it says CollabNet; CollabNet was the previous SVN binary that I had set up, and the uninstaller failed to remove everything gracefully):

        svn, version 1.6.12 (SlikSvn/1.6.12) WIN32
        compiled Jun 22 2010, 20:45:29

        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:

        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
        * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
          - handles 'http' scheme
          - handles 'https' scheme

    I disabled my Windows Firewall and CA Internet Security, without success in resolving the issue.

    Edit: The old version of svnserve was still set up as a service after the uninstall, pointed to this path:

        C:\Program Files\Subversion\svn-win32-1.4.6\bin

    I edited the registry key for the service to point to the new path (shown above). Whether I run svnserve as a service or using -d, I do not see an entry for that port number in the listing generated by netstat -anp tcp.
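    That last detail is the strong clue: if netstat shows nothing listening on svnserve's default port (3690), connections are being refused because the daemon never actually started listening, which fits a service whose registry path was edited by hand. A hedged sanity-check sequence from an elevated prompt (the service name may differ on your machine):

        rem inspect the service's configured binPath, then its state
        sc qc svnserve
        sc query svnserve
        rem is anything listening on svnserve's default port?
        netstat -ano | findstr :3690
        rem run it in the foreground to bypass the service definition entirely
        "C:\Program Files\SlikSvn\bin\svnserve.exe" -d -r C:\svnGrove

    If the foreground -d run makes svn://localhost/ work, the problem is confined to the service definition, and deleting and recreating the service cleanly (rather than editing the stale registry entry) would be the next step.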


  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it.

    * Originally I created /dev/md0, but it has somehow changed itself into /dev/md_d0.

        # mount /opt
        mount: wrong fs type, bad option, bad superblock on /dev/md_d0,
               missing codepage or helper program, or other error
               (could this be the IDE device where you in fact use
               ide-scsi so that sr0 or sda or so is needed?)
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    The RAID device appears to be inactive somehow:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md_d0 : inactive sda4[0](S)
              241095104 blocks

        # mdadm --detail /dev/md_d0
        mdadm: md device /dev/md_d0 does not appear to be active.

    Question is, how do I make the device active again (using mdadm, I presume)?

    (Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab:

        /dev/md_d0 /opt ext4 defaults 0 0

    So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?)

    This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.

    Edit: My /etc/mdadm/mdadm.conf looks like this. I've never touched this file, at least by hand.

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR <my mail address>

        # definitions of existing MD arrays

        # This file was auto-generated on Wed, 27 Jan 2010 17:14:36 +0200

    In /proc/partitions the last entry is md_d0, at least now, after reboot, when the device happens to be active again. (I'm not sure if it would be the same when it's inactive.)

    Resolution: as Jimmy Hedman suggested, I took the output of mdadm --examine --scan:

        ARRAY /dev/md0 level=raid1 num-devices=2 UUID=de8fbd92[...]

    and added it to /etc/mdadm/mdadm.conf, which seems to have fixed the main problem. After changing /etc/fstab to use /dev/md0 again (instead of /dev/md_d0), the RAID device also gets automatically mounted!
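    For anyone landing here mid-failure, a hedged recap of the commands that implement the resolution above (Ubuntu paths; run before rebooting so the next boot assembles cleanly):

        # stop the half-assembled device, then reassemble from superblocks
        sudo mdadm --stop /dev/md_d0
        sudo mdadm --assemble --scan

        # persist the ARRAY definition so boot-time assembly names it /dev/md0
        sudo sh -c 'mdadm --examine --scan >> /etc/mdadm/mdadm.conf'

        # rebuild the initramfs so the early-boot environment sees the new conf
        sudo update-initramfs -u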


  • 10.6.4 Apple Wiki: Newly created users can do nothing?

    - by beefon
    Hello, after updating to 10.6.4 there's an issue: any new users that I create in Server Prefs/WGM can't post to their blogs, comment, create wiki pages... They can't do anything! Here's the log from the Wiki errors (when user DURAK tries to create a new blog entry):

        [HTTPChannel,5,127.0.0.1] Traceback (most recent call last):
        [HTTPChannel,5,127.0.0.1] File "/usr/share/caldavd/lib/python/twisted/web/server.py", line 126, in process self.render(resrc)
        [HTTPChannel,5,127.0.0.1] File "/usr/share/caldavd/lib/python/twisted/web/server.py", line 133, in render body = resrc.render(self)
        [HTTPChannel,5,127.0.0.1] File "/usr/share/wikid/lib/python/apple_xmlrpc_server/WebAppServer.py", line 90, in render d = defer.maybeDeferred(function, *args)
        [HTTPChannel,5,127.0.0.1] File "/usr/share/caldavd/lib/python/twisted/internet/defer.py", line 104, in maybeDeferred result = f(*args, **kw)
        [HTTPChannel,5,127.0.0.1] --- <exception caught here> ---
        [HTTPChannel,5,127.0.0.1] File "/usr/share/wikid/lib/python/apple_xmlrpc_server/ContentServiceBase.py", line 121, in xmlrpc_addEntry aPage = ContentEntry.newBundleBasedContentEntry (path = path, content = content, author = author, title = title, uid = uid, type = kind, versioned = self.versioned, templateName = template)
        [HTTPChannel,5,127.0.0.1] File "/usr/share/wikid/lib/python/apple_wlt/ContentEntry.py", line 794, in newBundleBasedContentEntry aPage.save('First created', 'created')
        [HTTPChannel,5,127.0.0.1] File "/usr/share/wikid/lib/python/apple_wlt/ContentEntry.py", line 445, in save revisions.addRevision(self.serializeEntry(revisionAttributes), inComment = comment, inAuthor = updateAuthor, inChangeType = editType)
        [HTTPChannel,5,127.0.0.1] File "/usr/share/wikid/lib/python/apple_utilities/sqlitersion.py", line 36, in _func result = f(self, *args, **kwargs)
        [HTTPChannel,5,127.0.0.1] File "/usr/share/wikid/lib/python/apple_utilities/sqlitersion.py", line 49, in addRevision contentPlistStr = plistlib.writePlistToString(inContentDict).decode("utf-8")
        [HTTPChannel,5,127.0.0.1] File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 110, in writePlistToString
        [HTTPChannel,5,127.0.0.1] File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 94, in writePlist
        [HTTPChannel,5,127.0.0.1] File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 251, in writeValue
        [HTTPChannel,5,127.0.0.1] File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 280, in writeDict
        [HTTPChannel,5,127.0.0.1] File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 238, in writeValue
        [HTTPChannel,5,127.0.0.1] File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 171, in simpleElement
        [HTTPChannel,5,127.0.0.1] File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 221, in _escapeAndEncode
        [HTTPChannel,5,127.0.0.1] exceptions.UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
        [HTTPChannel,5,127.0.0.1] 'Unparseable html in page, removing whatever was already written.'
        [HTTPChannel,5,127.0.0.1] Removing /Library/Collaboration/Users/durak/weblog/27133.page

    Any "old" user CAN create, modify, comment, etc. What can you recommend to fix this issue? Hope for your help...
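    The last two frames point at the likely root cause: Python 2's plistlib is handed a byte string containing non-ASCII data (0xd0 is a typical UTF-8 lead byte for Cyrillic, which fits a user named DURAK whose long name may be non-ASCII) where it expects unicode. A hedged illustration of that failure mode, not Apple's actual wiki code, just the same library call:

        # Python 2.6, mirroring the plist serialization step in the traceback
        import plistlib

        author = '\xd0\x94'  # UTF-8 bytes for a Cyrillic capital De
        try:
            plistlib.writePlistToString({'author': author})
        except UnicodeDecodeError as e:
            print e  # 'ascii' codec can't decode byte 0xd0 in position 0 ...

        # the same value decoded to unicode first serializes without complaint
        plistlib.writePlistToString({'author': author.decode('utf-8')})

    If that diagnosis holds, a workaround to test would be creating the user with an ASCII-only long name and checking whether the new account can then post.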


  • resolve access violation exception (0xC0000005) crashing IIS app pool

    - by Joseph
    IIS 7.5, Server 2008 R2, classic ASP and ASP.NET 2.0/3.5 websites on the same server, same app pool. For the past 4 weeks, thousands of these 'C0000005' errors have been occurring. I know from the IIS Debug Diagnostics tool that 'C0000005' is an access violation error. Below is the top line from my Debug Diag report:

        In w3wp__PID__6656__Date__01_08_2011__Time_01_42_46AM__281__First Chance Access Violation.dmp the assembly instruction at asp!CActiveScriptEngine::GetApplication+27 in \\?\C:\Windows\System32\inetsrv\asp.dll from Microsoft Corporation has caused an access violation exception (0xC0000005) when trying to read from memory location 0x00000000 on thread 29

        Thread 29 - System ID 6736
        Entry point 0x00000000
        Create time 1/8/2011 12:46:26 AM
        Time spent in user mode 0 Days 00:00:00.140
        Time spent in kernel mode 0 Days 00:00:00.078

        Function Source
        asp!CActiveScriptEngine::GetApplication+27
        vbscript!COleScript::GetDebugApplicationCoreNoRef+2b
        vbscript!COleScript::FDebuggerEnabled+30
        vbscript!COleScript::SetScriptSite+cd
        asp!CActiveScriptEngine::Init+125
        asp!CScriptManager::GetEngine+252
        asp!AllocAndLoadEngines+28f
        asp!ExecuteGlobal+17a
        asp!Execute+b5
        asp!CHitObj::ViperAsyncCallback+3fc
        asp!CViperAsyncRequest::OnCall+6a
        comsvcs!CSTAActivityWork::STAActivityWorkHelper+32
        ole32!EnterForCallback+f4
        ole32!SwitchForCallback+1a8
        ole32!PerformCallback+a3
        ole32!CObjectContext::InternalContextCallback+15b
        ole32!CObjectContext::DoCallback+1c
        comsvcs!CSTAActivityWork::DoWork+12f
        comsvcs!CSTAThread::DoWork+18
        comsvcs!CSTAThread::ProcessQueueWork+37
        comsvcs!CSTAThread::WorkerLoop+135
        msvcrt!_endthreadex+44
        msvcrt!_endthreadex+ce
        kernel32!BaseThreadInitThunk+e
        ntdll!__RtlUserThreadStart+70
        ntdll!_RtlUserThreadStart+1b

    Below is the ASP report for the faulting module:

        Executing ASP requests 0 Request(s)
        ASP templates cached 0 Template(s)
        ASP template cache size 0.00 Bytes
        Loaded ASP applications 1 Application(s)
        ASP.DLL Version 7.5.7600.16620

        ASP application report:
        ASP application metabase key Physical Path Virtual Root
        Session Count 0 Session(s)
        Request Count 0 Request(s)
        Session Timeout 0 minutes(s)
        Path to Global.asa
        Server side script debugging enabled False
        Client side script debugging enabled False
        Out of process COM servers allowed False
        Session state turned on False
        Write buffering turned on False
        Application restart enabled False
        Parent paths enabled False
        ASP Script error messages will be sent to browser False

    Recent events: the server was being brute-forced by hackers all of December and probably earlier. They weren't able to gain access, but they did get a virus on and blasted out spam. I installed AVG and about the 17 or 22 latest patches. After that the app pool started crashing, and the server has crashed a couple of times since then.

    I am in no man's land, as I am a developer and not a sysadmin, but I have to assume many roles, so I'm reaching out for help. Sometimes I will see hundreds of these 'C0000005' script engine errors in the event log in a matter of seconds, and other times just a few an hour. I googled the line 'ASP!CACTIVESCRIPTENGINE::GETAPPLICATION' and got nothing; it's as if the function doesn't exist or something. I have spent many hours googling to no avail and am now turning to the experts on the forums. Thank you for your help.


  • Multicast hostname lookups on OSX

    - by KARASZI István
    I have a problem with hostname lookups on my OS X computer. According to Apple's HK3473 document, for v10.6:

        Host names that contain only one label in addition to local, for example "My-Computer.local", are resolved using Multicast DNS (Bonjour) by default. Host names that contain two or more labels in addition to local, for example "server.domain.local", are resolved using a DNS server by default.

    Which is not true in my testing. If I try to open a connection from my local computer to a remote port:

        telnet example.domain.local 22

    then it will look up the IP address with multicast DNS next to the A and AAAA lookups. This causes a two-second lookup timeout on every lookup, which is a lot! When I try with IPv4 only, it won't use the multicast queries to fetch the remote address, just the simple A queries:

        telnet -4 example.domain.local 22

    When I try with IPv6 only:

        telnet -6 example.domain.local 22

    then it will look up with multicast DNS and AAAA again, and the 2-second timeout delay occurs again. I've tried creating resolver entries at /etc/resolver/domain.local and /etc/resolver/local.1, but neither was working. Is there any way to disable these multicast lookups for the "two or more labels in addition to local" domains, or simply disable them for the selected subdomain (domain.local)? Thank you!

    Update #1: Thanks @mralexgray for the scutil --dns command; now I can see my domain in the list, but it's late in the order:

        DNS configuration
        resolver #1
          domain : adverticum.lan
          nameserver[0] : 192.168.1.1
          order : 200000
        resolver #2
          domain : local
          options : mdns
          timeout : 2
          order : 300000
        resolver #3
          domain : 254.169.in-addr.arpa
          options : mdns
          timeout : 2
          order : 300200
        resolver #4
          domain : 8.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order : 300400
        resolver #5
          domain : 9.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order : 300600
        resolver #6
          domain : a.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order : 300800
        resolver #7
          domain : b.e.f.ip6.arpa
          options : mdns
          timeout : 2
          order : 301000
        resolver #8
          domain : domain.local
          nameserver[0] : 192.168.1.1
          order : 200001

    Maybe it would work if I could move resolver #8 to position #2.

    Update #2: No, it probably won't work, because the local DNS server on 192.168.1.1 is answering the domain.local requests and it's already before mDNS (resolver #2).

    Update #3: I could decrease the mDNS timeout in the /System/Library/SystemConfiguration/IPMonitor.bundle/Contents/Info.plist file, which speeds up the lookups a little, but this is not the solution.
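    One more knob worth trying before settling for the shorter timeout (a hedged sketch: resolver(5) documents the search_order key, though I can't confirm it suppresses the parallel mDNS query on 10.6): give the /etc/resolver file an explicit search_order and flush the directory services cache so the change is picked up:

        sudo tee /etc/resolver/domain.local <<'EOF'
        nameserver 192.168.1.1
        search_order 100
        EOF
        sudo dscacheutil -flushcache   # 10.6's cache flush command
        scutil --dns                   # confirm the resolver's new order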


  • Exchange IMAP4 connector - Error Event ID 2006

    - by MikeB
    Hi, a couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update Rollup 9 applied) because they prefer Thunderbird / Postbox clients. One of the users is generating errors in the Application Log as follows:

        An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format.
        Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token'
        Parameter name: value
        at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value)
        at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data)
        at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
        --- End of inner exception stack trace ---
        at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
        at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags)
        at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
        at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags)
        at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
        at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0()
        at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall)
        at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags)
        at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream)
        at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices)

    From my reading around, it seems the suggestion is to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found either through searching or by browsing the folder detailed in the log entry. The server returns the following error to the client:

        "The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message. Retrieval of this message will be retried when the server is updated with a fix that addresses the problem."

    Messages were transferred into Exchange by copying them from the old Apple Xserve, accessed using IMAP. So my questions, finally:

    1. Is there any way to get the IMAP Exchange connector to rebuild its cache of messages, since it doesn't seem to be pulling them directly from the MAPI store?
    2. Alternatively, if there is no database, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received.

    Many thanks, Mike


  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's 'notify me when a contact comes online'), but it is pronouncing some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try:

        say "Hi, Joel Spolsky"

    The 'ol' sounds like 'ball' rather than 'old'. I'd like to add an exception that says "pronounce Spolsky like this", rather than try to teach new linguistic rules. I bet there is a way, since it can pronounce "iphone" as Apple wants.

    Update - After some research, here's what I've learned:

    - Text-to-speech is split between turning the text into phonemes, after which the phonemes are turned into audio using a voice. Changing the voice doesn't affect the phonemes.
    - The Speech Synthesis Manager has some functions for turning text to phonemes, and a method for registering a speech dictionary that will add new text-phoneme maps. However, Apple's speech dictionary must be in a binary form; I didn't find any plist XML.
    - Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing; it is probably used in speech-to-text. They are also code-signed by Apple, so changing them may prevent some programs from working. The files:

        PrefixDictionary CartNames CartLite SymbolDictionary Homophones

    - There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed.

    My guesses are:

    - say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option to take the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say.
    - Skype probably uses the Speech Synthesis Manager directly, leaving no hooks to change the way my friends' names are pronounced, other than spelling them phonetically, which is silly.

    The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion:

        #!/bin/sh
        echo $@ | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say

    Finally, some fun command line stuff:

        # Apple is weird
        sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump

        # Get too much information about what files are being opened
        sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }'

        # Just fun
        say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky"
        echo "scale=1000; 4*a(1)" | bc -l | say
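    One more avenue that avoids respelling tricks (hedged: the [[inpt PHON]] embedded command is documented for Apple's speech synthesizer, but the phoneme string below is only illustrative, not a verified transcription of "Spolsky"):

        # switch the synthesizer into phoneme-input mode for just the name
        say "Hi, Joel [[inpt PHON]]spOWlskIY[[inpt TEXT]], welcome online."

    If the embedded-command route works with your voices, the wrapper script above could substitute phoneme strings instead of respellings, which should survive voices that pronounce "spowlsky" differently.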


  • Mysql innoDB corruption after server crash

    - by Ward Loockx
    Yesterday my server died because of an outage in the data center. Today it's back up, but I'm having some problems with MySQL.

    First of all, my MySQL server was not able to start. For this reason I deleted the files ib_logfile0 and ib_logfile1 in the /var/lib/mysql folder (I still have the old failing files). After this my server was able to start up again. But now I see a lot of issues in the MySQL log file:

        Sep 1 09:43:55 * mysqld: 120901 9:43:55 InnoDB: Error: page 70944 log sequence number 8 1483471899
        Sep 1 09:43:55 * mysqld: InnoDB: is in the future! Current system log sequence number 5 612394935.
        Sep 1 09:43:55 * mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
        Sep 1 09:43:55 * mysqld: InnoDB: tablespace but not the InnoDB log files. See
        Sep 1 09:43:55 * mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html

    When I check the docs on mysql.com, I find that I need to recover my database from backups. I have a backup but am not sure of the best way to import it. Or is there a way to recover without having to re-import the database? So if I'm correct, I need to set innodb_force_recovery to 4 in MySQL, delete all current data, and re-import? Is there a way to do this without downtime?

    I also have one slave running. The slave currently has this status:

        Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

    How can I totally reset the slave after the new import on the master has happened? Hopefully we can find a solution without too much downtime. Thanks!
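    A hedged outline of the dump-and-reimport path (the commands are standard, but the exact sequence should be rehearsed on a copy first; the angle-bracket values are placeholders to be read off SHOW MASTER STATUS after the reload):

        # 1. with innodb_force_recovery = 4 set in my.cnf, take a full dump
        mysqldump --all-databases --routines > /root/full_dump.sql

        # 2. stop mysqld, remove innodb_force_recovery, move aside ibdata1 and
        #    the ib_logfiles, start mysqld clean, then reload (downtime window)
        mysql < /root/full_dump.sql

        # 3. rebuild the slave from scratch: discard its corrupt relay logs
        #    and re-point it at the master's fresh binlog coordinates
        mysql -e "STOP SLAVE; RESET SLAVE;"
        mysql -e "CHANGE MASTER TO MASTER_LOG_FILE='<binlog-file>', MASTER_LOG_POS=<pos>; START SLAVE;"

    Fully avoiding downtime is hard here; the realistic aim is shrinking the window in step 2, e.g. by scheduling the reload during quiet hours.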


  • ISPConfig 3 + CentOS 6.2, confused on how to move forward after initial install?

    - by Damainman
    I installed ISPConfig 3 on CentOS 6.2 using the great guide on howtoforge.com. Everything is up and running and I can access ISPConfig via a browser. However, I am not sure how to move forward with the initial setup so I can set up the very first account and get my website live.

    Details: I have only one server; CentOS + ISPConfig runs in a virtual machine on XEN XCP. I set the server name to server1.mydomain.com. I only have 2 usable IPs, which I plan to use as follows:

        xx.xx.xx.01 : my website and the websites of all accounts I add
        xx.xx.xx.02 : ns1.mydomain.com and ns2.mydomain.com (yea, I know they should be different IPs at different locations, but this is what I have to work with at the moment)

    I registered the nameservers at my registrar with the .02 IP. I want to use BIND and ISPConfig to run the DNS on my server itself, not via my registrar. Right now, if I go to the .01 IP, it shows the CentOS + Apache "successful install" page.

    So, to break it down, I am basically not sure where to start when it comes to:

    1. Telling BIND to use the nameserver domains with .02.
    2. Setting up my first website (which will be my main website) in ISPConfig so mydomain.com resolves properly to my server.
    3. Making it so that going to the .01 IP either redirects to or shows the contents of my main website (if this can't be done, any advice is appreciated).
    4. Making sure that when I add a new domain, the proper information is automatically put in place so it points to the right mail, database, and DNS entries.

    If I overlooked a tutorial, please feel free to let me know, and any advice would be greatly appreciated. Some of the tutorials I found were not specific to doing everything on only one server with CentOS + Apache + BIND. Right now all I did was install CentOS and ISPConfig 3; I'm trying to move forward correctly so I don't mess up everything I did by not knowing what to do. Thank you in advance!
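    For orientation, the zone that ISPConfig's DNS wizard ultimately needs to produce looks roughly like this (a hedged sketch using the placeholders from the question; ISPConfig writes this through its own interface, so treat it as a target to verify with dig rather than a file to paste):

        mydomain.com.   IN  SOA  ns1.mydomain.com. admin.mydomain.com. ( ...serial/refresh/retry/expire/ttl... )
        mydomain.com.   IN  NS   ns1.mydomain.com.
        mydomain.com.   IN  NS   ns2.mydomain.com.
        ns1             IN  A    xx.xx.xx.02
        ns2             IN  A    xx.xx.xx.02
        mydomain.com.   IN  A    xx.xx.xx.01
        www             IN  A    xx.xx.xx.01

    Once the registrar's glue records for ns1/ns2 point at .02 and this zone answers locally, `dig @xx.xx.xx.02 mydomain.com` should return the .01 address.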


  • HTTPS in sub domain redirects to main domain

    - by Amitabh
    We recently bought a wildcard certificate and installed it for a domain. It works fine for the main domain but seems not to work at all for any subdomains. What's happening is that we can access the subdomains fine over HTTP, but whenever we try HTTPS for the same subdomain URL we are redirected back to the main domain. So if I put up a test folder "httpstest" in a subdomain with an index.html file in it, the following happens:

        http://mysubdomain.mywebsite.com/httpstest/index.html    works perfectly fine
        https://mysubdomain.mywebsite.com/httpstest/index.html   does not work and redirects to the main domain

    Any help on this is greatly appreciated. The site is not the main site used for setting up the VPS; it was added from WHM.

    Environment: we are on a Linux VPS; cPanel 11.30.6, Apache 2.2.22, PHP 5.3.13. The VirtualHost entry looks like:

        <VirtualHost xx.xx.xxx.xx:443>
            ServerName my-own-website.com
            ServerAlias www.my-own-website.com
            DocumentRoot /home/amitabh/public_html
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/my-own-website.com combined
            CustomLog /usr/local/apache/domlogs/my-own-website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ## User amitabh # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup amitabh amitabh
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                SuexecUserGroup amitabh amitabh
            </IfModule>
            ScriptAlias /cgi-bin/ /home/amitabh/public_html/cgi-bin/
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/my-own-website.com.crt
            SSLCertificateKeyFile /etc/ssl/private/my-own-website.com.key
            SSLCACertificateFile /etc/ssl/certs/my-own-website.com.cabundle
            CustomLog /usr/local/apache/domlogs/my-own-website.com-ssl_log combined
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            <Directory "/home/amitabh/public_html/cgi-bin">
                SSLOptions +StdEnvVars
            </Directory>
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/ssl/2/amitabh/my-own-website.com/*.conf"
        </VirtualHost>

    Any help on the issue is greatly appreciated. Thank you.

    Update: I posted the same question in a Linux forum as well. I will really appreciate any pointers on it.
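    The symptom described (HTTPS for a subdomain lands on the main site) is exactly what Apache does when no *:443 VirtualHost matches the requested name: it falls back to the first SSL vhost on that address, which here is the main domain's. A hedged sketch of the missing piece, reusing the wildcard certificate (the DocumentRoot path is hypothetical; on cPanel this block would normally be generated per-subdomain rather than written by hand):

        <VirtualHost xx.xx.xxx.xx:443>
            ServerName mysubdomain.mywebsite.com
            DocumentRoot /home/amitabh/public_html/mysubdomain   # hypothetical path
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/my-own-website.com.crt
            SSLCertificateKeyFile /etc/ssl/private/my-own-website.com.key
            SSLCACertificateFile /etc/ssl/certs/my-own-website.com.cabundle
        </VirtualHost>

    The wildcard cert covers one level of subdomain, so each subdomain still needs an SSL vhost (or a single vhost with ServerAlias *.mywebsite.com) to answer on port 443.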


  • Windows Server 2008 R2 loses ability to connect to network share

    - by JamesB
    I could sure use some help with this one: I've got two Windows Server 2008 R2 x64 Terminal Servers, as well as several 2003 servers (DNS / WINS / AD / DC). Every now and then, the two 2008 boxes get into a mode where you can't map a drive to a random server. I say random server because it's not always the same server that you can't map to. Here is a summary of what I can and can't do:

        net view \\servername    Sometimes this works, sometimes it does not.
        net view \\FQDN          This always works.
        net view \\IPAddress     This always works.
        ping servername          Sometimes this works, sometimes it does not.
        ping FQDN                This always works.
        ping IPAddress           This always works.

    I've been looking all over for a solution to this; it sure seems like Microsoft would have a hotfix by now. The kicker is that it sometimes works great, especially after a reboot. It may run for 2 weeks just fine, but all of a sudden it will fail to resolve the remote server name. It will then be this way for a few days, then it might start working again. Also, while it's in this mode, the other servers have no problem getting there; it's just these 2008 R2 Terminal Servers. Setting a static entry in the Hosts file and LMHOSTS does not make it work. All servers have static IPs and they are registered in DNS and WINS just fine.

    Here is a long thread on MS Technet of the exact same problem, but they don't have a good solution. Here is their workaround (it was from June of 2010):

        Good news - a hotfix is in the works and a workaround has been identified:
        Root cause is that since this is SMB1, all user sessions are on a single TCP connection to the remote server. The first user to initiate a connection to the remote SMB server has their logon-ID added to the structure defining the connection. If that user logs off, all subsequent uses of that TCP session fail, as the logon-ID is no longer valid. As a workaround for now, to keep the issue from happening you will want to have users not log off the Terminal Server, only disconnect their sessions.

    Any word from anyone out there about a solution? Any help would sure be appreciated. Thanks, James


  • Complete machine freezes...at a loss

    - by user28818
    Guys, we built around 12 machines a few months ago to run Ubuntu. They each have the following specs:

        ASUS Z8NA-D6 motherboard
        Dual quad-core Intel(R) Xeon(R) CPU E5520 @ 2.27GHz
        OCZ Mod Extreme Pro 500W power supply
        12 GB Kingston RAM
        Nvidia GeForce 9800 GT graphics card

    My machine ran well for a while. However, it started experiencing random lockups. These are not X lockups; they are complete system freezes. The NIC stops responding, the magic SysRq keys won't work. The machine is dead.

    I first suspected RAM. Memtest86 didn't find anything, but I replaced the RAM anyway. Still, lockups. So I replaced the graphics card. Still more lockups. They became more and more frequent and started to happen 2-3 times a day. So I replaced the motherboard and power supply in one fell swoop. Suddenly, no more lockups! Woohoo!

    Except, a week later, in the morning, the machine wouldn't wake up. I reset it, started it up, and the log files showed the last entry at around 11 pm the evening before. This has started occurring with more frequency: now just about every morning I come in, the machine is locked up, and has been since the night before. Yesterday, 3 weeks after I replaced the motherboard and power supply, the machine actually locked up on me in mid-work; that is the first time since replacing the two (MB and PS) that it happened while I was using it.

    I'm at a loss. Nothing is in syslog or messages that would indicate a problem around the time of the lockup. Temps are good: I use lm-sensors to monitor them, and I have a script that writes the output to a file every minute; they never get that high. The only things I haven't replaced at this point are the case and the hard drives, and I doubt either could be the cause.

    What would you do if you were in my shoes? Is there a troubleshooting approach I'm missing? For the record, all of the other machines, all eleven of them, don't have any problems. They're all running the same version of Ubuntu (Lucid) that I am. Thanks!


  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2-based web servers, so my experience is many hours, but limited to CentOS, specifically Amazon's distribution. I'm installing Apache using yum, therefore getting Amazon's default build of Apache.

    I want to implement canonical redirects from the non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite, BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache.

    Test case: launch an EC2 instance, e.g. Amazon Linux AMI 2013.03.1, SSH to the server, and run the commands:

        $ sudo yum install httpd
        $ sudo apachectl start
        $ sudo vi /etc/httpd/conf/httpd.conf
        $ sudo apachectl restart
        $ sudo vi /var/www/html/.htaccess

    In httpd.conf I changed the following, in the DocumentRoot section / scope:

        AllowOverride All

    In .htaccess I added (EDIT: I added RewriteEngine On later):

        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    Permissions on .htaccess are correct, as far as I can tell:

        $ ls -al /var/www/html/.htaccess
        -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess

    Other info:

        $ httpd -v
        Server version: Apache/2.2.24 (Unix)
        Server built: May 20 2013 21:12:45
        $ httpd -M
        Loaded Modules:
          core_module (static)
          ...
          rewrite_module (shared)
          ...
          version_module (shared)
        Syntax OK

    Expected behavior:

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    Actual behavior:

        $ curl -I domain.com
        HTTP/1.1 200 OK
        Date: Wed, 19 Jun 2013 12:34:10 GMT
        Server: Apache/2.2.24 (Amazon)
        Connection: close
        Content-Type: text/html; charset=UTF-8

    Troubleshooting steps: in .htaccess I added a deliberate error:

        BLAH BLAH BLAH ERROR
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an error log entry:

        [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration

    Since I have root access on the server, I then tried moving my rewrite rule directly into the httpd.conf file. THIS WORKED. This tells us several important things are working:

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    HOWEVER, it is bothering me that it didn't work in the .htaccess file, and I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
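    A detail in the rules themselves may explain the whole httpd.conf-versus-.htaccess difference here: in per-directory (.htaccess) context, mod_rewrite strips the leading slash before matching, so a pattern anchored as ^/(.*) never matches in .htaccess even though it matches fine in the server config. A hedged known-good pair for this setup:

        # /var/www/html/.htaccess -- RewriteEngine On is required in each .htaccess
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]

    combined with making sure AllowOverride All sits in the <Directory "/var/www/html"> block itself, not only in a parent block that a later <Directory> section overrides.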


  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these IP rules:

        $ sudo ip rule show
        0: from all lookup local
        32766: from all lookup main
        32767: from all lookup default

    I can attach an elastic network interface to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and the local routing table:

        $ sudo ip rule show
        0: from all lookup local
        32765: from 10.1.1.190 lookup 10003
        32766: from all lookup main
        32767: from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3 proto kernel scope link src 10.1.1.190
        local 10.1.1.190 dev eth3 proto kernel scope host src 10.1.1.190
        broadcast 10.1.1.255 dev eth3 proto kernel scope link src 10.1.1.190
        broadcast 10.1.3.0 dev eth0 proto kernel scope link src 10.1.3.12
        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12
        broadcast 10.1.3.255 dev eth0 proto kernel scope link src 10.1.3.12
        broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
        local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
        local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
        broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they'd have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets, matching the local routing policy:

        local 10.1.3.12 dev eth0 proto kernel scope host src 10.1.3.12

    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following:

    1. I don't know if it's possible, but decrease the priority of the local table so the packets match table 10003 first.
    2. Use iptables to mangle these packets and update the local table route to include the mark information.

    But I'm not sure if these are the right approaches.
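    Option 1 is at least mechanically possible: the rule at priority 0 can be re-created at a lower priority so the ENI rule matches first. A hedged sketch (do the re-add before the delete, in one session, because a box with no local rule at all is unreachable):

        # keep local delivery, just demote it below a new ENI rule
        sudo ip rule add from all lookup local pref 100
        sudo ip rule del pref 0
        sudo ip rule add from 10.1.1.190 lookup 10003 pref 50
        sudo ip rule show   # expect: 50 ENI rule, 100 local, 32766 main, ...

    The caveat is that this reroutes all traffic sourced from 10.1.1.190, including genuinely local-destination packets, out through eth3's gateway, which may or may not be what the SYN/ACK path needs; watching tcpdump on eth3 after the change would confirm it.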


  • How to configure sendmail to relay through a specific server

    - by ErebusBat
    I have a tiny home server set up behind my cable modem (Bresnan Communications). I want this box to be able to send out email (not receive) for notifications and whatnot.

    What I have already done: I have installed and configured sendmail, and I have added mail.bresnan.net as my SMART_HOST directive.

    What I believe the problem is: when I attempt to send an email, I get the following in my mail log:

        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: from=aburns, size=140, class=0, nrcpts=1, msgid=<[email protected]>, relay=aburns@localhost
        Dec 22 10:24:17 batcave sm-mta[1531]: oBMHOHWZ001531: from=<[email protected]>, size=397, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: to=<[email protected]>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30140, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBMHOHWZ001531 Message accepted for delivery)
        Dec 22 10:24:18 batcave sm-mta[1517]: oBMH9mVv001357: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:14:30, xdelay=00:00:42, mailer=relay, pri=300339, relay=pmx0.bresnan.net. [69.145.248.1], dsn=4.0.0, stat=Deferred: Connection timed out with pmx0.bresnan.net.

    You can see where the message is accepted for delivery by my sendmail server, then where it attempts to hand off to Bresnan's server and times out. This is where my question is. Astute readers will notice that pmx0.bresnan.net is not what I set my SMART_HOST directive to; it is the (outside?) MX server for the bresnan.com/net domain. Apparently Bresnan has their network configured so that you cannot access this server from within their own network and must instead use the mail.bresnan.net server (which I can connect to). The problem is that I don't know how to tell sendmail to use this server and not the domain's MX.

    What I have tried: setting a hosts entry so that the pmx0 name points to the mail server's IP address. This doesn't work, which makes sense: sendmail is obviously doing an MX query to find the server, and since that query returns the IP, there is never a need for a 'normal' DNS resolution, so the hosts file never gets involved.
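    Sendmail has a standard knob for exactly this situation: wrapping the smart host in square brackets suppresses the MX lookup, so sendmail connects to that host's A record directly instead of chasing the domain's MX (pmx0). A hedged sketch of the sendmail.mc change (file paths follow the common Linux layout):

        dnl square brackets mean "use this host as-is, skip MX resolution"
        define(`SMART_HOST', `[mail.bresnan.net]')dnl

    then rebuild and restart:

        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        /etc/init.d/sendmail restart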

    Read the article

  • Have a server, need to figure out a method of backup

    - by PolishHurricane
    My company has an older Dell 2650 server running Arch Linux x64: http://www.dell.com/downloads/global/products/pedge/en/2650_specs.pdf (2 x 2.4GHz Intel Xeon with around 3287 MB of RAM according to "free -m"). We use it to host our internal company site and to post some information from our orders, and we'd like the ability to keep it up as much as possible. What we require:

    - It needs to always be functional from 8am to 4pm for our data entry person and for others to do the other things required on it.
    - If it goes down, we need a quick way to get the machine running again.
    - If it goes down, we would like to have the data backed up.

    Some of the major problems:

    - The server is old and may have memory issues.
    - We don't know when one of the hard drives could fail.
    - Our power goes out here once in a while. We have a battery backup, but that's pretty much it and it's not for the long term.

    If the server does go down, we have another system in place to store order information that comes in while it's down and repost it when it's back, but we need it up during the day. So we're wondering what our options are. These are the things we have thought of, more or less:

    - Set up RAID 1, but that would involve wiping everything, right? If we do that, how would we transfer the data over without messing up the server?
    - We could buy an extra server or two off eBay for $100, the same model; is that practical, or should we get something else?
    - Should we buy a PC or another, better server and host off that, because it would be easier to exchange parts?
    - Should we keep extra parts handy in case it implodes?
    - Should we buy/use backup software? We hear Drobos are cool, but also that they suck. Perhaps there is a software solution that backs up to another machine or gets us up and running again quickly.

    Also, if we are to purchase hardware, what is decent? Does anybody know of good backup software for Arch Linux/Linux? We both know a ton about computers but we're unsure what step to take with this, especially with this type of server. Thanks
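    Whatever hardware route is taken, a scripted offsite copy is cheap insurance. A minimal sketch, assuming a second Linux box reachable over SSH with key-based authentication (the hostname and paths here are placeholders, not anything from the question):

      # /etc/cron.d/site-backup: nightly rsync of the site data at 2am.
      0 2 * * * root rsync -a --delete /srv/www/ backup@backuphost:/backups/www/

    Note that RAID 1 only guards against a single failed disk; it does nothing for accidental deletion, filesystem corruption, or a dead host, so it complements a backup rather than replacing one.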

    Read the article

  • Adding local users / passwords on Kerberized Linux box

    - by Brian
    Right now if I try to add a non-system user not in the university's Kerberos realm, I am prompted for a Kerberos password anyway. Obviously there is no password to be entered, so I just press enter and see:

      passwd: Authentication token manipulation error
      passwd: password unchanged

    Typing passwd newuser has the same issue with the same message. I tried using pwconv in the hopes that only a shadow entry was needed, but it changed nothing. I want to be able to add a local user not in the realm and give them a local password without being bothered about Kerberos. I am on Ubuntu 10.04. Here are my /etc/pam.d/common-* files (the defaults that Ubuntu's pam-auth-update package generates):

    /etc/pam.d/common-account:

      # here are the per-package modules (the "Primary" block)
      account [success=1 new_authtok_reqd=done default=ignore] pam_unix.so
      # here's the fallback if no module succeeds
      account requisite pam_deny.so
      # prime the stack with a positive return value if there isn't one already;
      # this avoids us returning an error just because nothing sets a success code
      # since the modules above will each just jump around
      account required pam_permit.so
      # and here are more per-package modules (the "Additional" block)
      account required pam_krb5.so minimum_uid=1000
      # end of pam-auth-update config

    /etc/pam.d/common-auth:

      # here are the per-package modules (the "Primary" block)
      auth [success=2 default=ignore] pam_krb5.so minimum_uid=1000
      auth [success=1 default=ignore] pam_unix.so nullok_secure try_first_pass
      # here's the fallback if no module succeeds
      auth requisite pam_deny.so
      # prime the stack with a positive return value if there isn't one already;
      # this avoids us returning an error just because nothing sets a success code
      # since the modules above will each just jump around
      auth required pam_permit.so
      # and here are more per-package modules (the "Additional" block)
      # end of pam-auth-update config

    /etc/pam.d/common-password:

      # here are the per-package modules (the "Primary" block)
      password requisite pam_krb5.so minimum_uid=1000
      password [success=1 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512
      # here's the fallback if no module succeeds
      password requisite pam_deny.so
      # prime the stack with a positive return value if there isn't one already;
      # this avoids us returning an error just because nothing sets a success code
      # since the modules above will each just jump around
      password required pam_permit.so
      # and here are more per-package modules (the "Additional" block)
      # end of pam-auth-update config

    /etc/pam.d/common-session:

      # here are the per-package modules (the "Primary" block)
      session [default=1] pam_permit.so
      # here's the fallback if no module succeeds
      session requisite pam_deny.so
      # prime the stack with a positive return value if there isn't one already;
      # this avoids us returning an error just because nothing sets a success code
      # since the modules above will each just jump around
      session required pam_permit.so
      # and here are more per-package modules (the "Additional" block)
      session optional pam_krb5.so minimum_uid=1000
      session required pam_unix.so
      # end of pam-auth-update config
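    One possible direction, offered as a sketch rather than a tested fix: in common-password, the requisite pam_krb5 line means a Kerberos failure aborts the stack before pam_unix ever runs. Reordering so that pam_unix is tried first would let local accounts set a purely local password while realm accounts still fall through to Kerberos. The control flags below are my assumption about a workable ordering, not a known-good Ubuntu default; test on a non-critical machine, and note that pam-auth-update may overwrite hand edits to these files.

      # /etc/pam.d/common-password (sketch): try the local password first.
      # success=2 jumps over pam_krb5 and pam_deny straight to pam_permit.
      password [success=2 default=ignore] pam_unix.so obscure sha512
      password [success=1 default=ignore] pam_krb5.so minimum_uid=1000 use_authtok
      password requisite pam_deny.so
      password required pam_permit.so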

    Read the article

  • IE9 Error: There was a problem sending the command to the program

    - by HK1
    I'm working on a new/fresh Windows 7 32-bit machine that now has IE9 installed. The user uses the Dell Stardock application as his primary "desktop" (all his links live there). When we place an internet link there and click on it, we get the following error message:

      There was a problem sending the command to the program.

    To me this indicates that IE9 is having trouble opening the website we want, whose URL should get passed as a parameter to the browser when it opens. I don't think this is a Stardock/ObjectDock problem, because we also have other problems with internet links. For example, we cannot move an internet link from the Desktop to the Quick Launch area on the taskbar: when we try, Windows forces the link into the IE icon's menu instead of letting it sit there as its own entry. I should mention, however, that links on the desktop and in the taskbar do work as we expect (without showing the above error message). The problem appears to have started after installing Windows Updates; since we installed a whole bunch of updates at once, I have no idea which one caused it. I did have Google Chrome installed, but I uninstalled it since the user wants to use IE; the problem started before I uninstalled Chrome. I also reset the browser settings in IE9, which didn't help. Next I uninstalled IE9, which took me back to IE8. That actually did resolve the problem, but it came back as soon as I installed IE9 again. We have Verizon Internet Security installed (actually a McAfee product rebranded to look like Verizon). I'm not crazy about this software, but the customer has a subscription so we're not planning to change it. I have no reason to believe it is causing the problem, and yet I know that security software is often to blame for strange issues. I've looked at the registry settings for the following keys and everything appears to be fine for every single one of them:

      HKEY_CLASSES_ROOT\.htm
      HKEY_CLASSES_ROOT\.html
      HKEY_CLASSES_ROOT\http\shell\open\command
      HKEY_CLASSES_ROOT\http\shell\open\ddeexec\Application
      HKEY_CLASSES_ROOT\https\shell\open\command
      HKEY_CLASSES_ROOT\https\shell\open\ddeexec\Application
      HKEY_CLASSES_ROOT\htmlfile\shell\open\command
      HKEY_CLASSES_ROOT\Microsoft.Website\Shell\Open\Command

    Edit 1: I've found two potential solutions, but I won't be able to try them until tomorrow since this is a remote client's machine. One is to disable the "Windows Font Cache" service; the other is to clear the IE cache and browsing history. I see lots of other suggestions online, but if you read them through you'll see that they didn't fix this particular problem.
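    For reference, the two workarounds above can be scripted so they are quick to apply during the remote session. A sketch for an elevated command prompt: "FontCache" is the usual internal name of the Windows Font Cache Service on Windows 7, and the RunDll32 call clears all of IE's browsing-history categories.

      :: Stop and disable the Windows Font Cache Service.
      sc stop FontCache
      sc config FontCache start= disabled
      :: Clear IE temporary files, cookies and history (255 = everything).
      RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 255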

    Read the article

  • QNAP (469L) with Debian: can't connect to router

    - by agtoever
    I've been running my QNAP 469L with Debian (Wheezy, deb7u3) for a few months. Yesterday I upgraded the memory to 4 GB. The system boots fine, but since the upgrade I'm not able to connect the server to my router (a TP-Link WR941ND). My configuration: the router runs a DHCP server (192.168.67.100 and up) with a preconfigured IP address for the QNAP (192.168.67.10); the router itself is on 192.168.67.1. As said, Debian is installed on the QNAP (which can be regarded as a normal computer). Networking hardware on the QNAP: Intel PRO/1000 Network Connection using the e1000e kernel module. This is what I have tried so far:

    - Replaced the network cable (tried 3 different cables on different router ports).
    - Checked for messages from the kernel with dmesg | grep eth. Besides the normal hardware messages I get an ADDRCONF(NETDEV_UP): eth0: link is not ready for each call to ifup.
    - Manually restarted the network: sudo service networking restart.
    - Checked sudo ifconfig (eth0 is up, but has no IP addresses).
    - Checked /etc/network/interfaces, which has (besides the loopback device) an allow-hotplug eth0 and iface eth0 inet dhcp, which is AFAIK the default Debian configuration.
    - Since the server has two ethernet ports, verified that I'm using the right one: the hardware address ifconfig reports for eth0 matches the one the router has bound to the QNAP's preconfigured IP address.
    - Did a manual sudo ifdown eth0 && sudo ifup eth0 with no result (but an extra ADDRCONF(NETDEV_UP): eth0: link is not ready in the kernel log).
    - Made a DHCP request with dhclient -v eth0: for about a minute requests are sent (according to the terminal), and at the end I get No DHCPOFFERS received. No working leases in persistent database - sleeping.
    - Checked the router's system log for received DHCP requests. I see them for some devices (my Mac, my iPhone) but not for the QNAP. A log entry looks like DHCPS:Recv REQUEST from 84:85:06:07:75:6A followed by DHCPS:Send ACK to 192.168.67.101. There are no records from the QNAP's hardware address.

    So the two error messages I do get are ADDRCONF(NETDEV_UP): eth0: link is not ready for every ifup, and No DHCPOFFERS received. No working leases in persistent database - sleeping. for every DHCP call.
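    Given that the kernel reports the link itself as not ready, it may be worth ruling out DHCP entirely and checking layer 1 first. A short diagnostic sketch, assuming ethtool is installed; the addresses come from the router configuration described above:

      # Check whether the NIC negotiates a link at all; look for "Link detected: yes".
      sudo ethtool eth0

    If a link is detected, a temporary static stanza in /etc/network/interfaces separates link-layer problems from DHCP problems:

      auto eth0
      iface eth0 inet static
          address 192.168.67.10
          netmask 255.255.255.0
          gateway 192.168.67.1

    If eth0 still reports "link is not ready" after the memory upgrade, it is also worth reseating the modules and any cabling disturbed during the upgrade, since the timing suggests a hardware rather than a software cause.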

    Read the article
