Search Results

Search found 11313 results on 453 pages for 'calavera info'.


  • Apache 2.2: serve HTTP 410 pages for RSS feeds with the application/rss+xml content type

    - by Mark Bakker
    I have a problem sending HTTP 410 for very old RSS feeds. Functionally this can happen in two cases:

    - Very old RSS feeds where the content is not updated anymore / the subject could not move to another feed
    - Migration from a 3rd-party site to our site where the RSS feed is no longer functionally supported

    I tried several things in my site config, see below:

        <VirtualHost *:80>
            DocumentRoot /opt/tomcat/webapps/ROOT/
            ErrorDocument 500 /error/static/error-500.html
            ErrorDocument 503 /error/static/error-500.html
            ErrorDocument 404 /error/static/rss/error-404.html
            ErrorDocument 410 /error/static/rss/error-410.html

            # When error pages need to be served by apache,
            # exclude the files to serve as below (in comment)
            SetEnvIf Request_URI "/error/static/*" no-jk

            # force all files to be image/gif:
            <Location *.rss>
            #<Location *>
                #ForceType application/rss+xml
            </Location>
            #AddType application/rss+xml .rss
            #AddType application/rss+xml .xml
            #AddType application/rss+xml .html

            JkMount /* rss;use_server_errors=402
            # JkMount /* rss

            RewriteEngine on
            JkMount /news.rss rss
            JkMount /documenten-en-publicaties.rss rss
            RewriteEngine on
            RewriteRule ^/news.rss$ - [NC,T=application/rss+xml,G,L]
            RewriteRule ^/documenten-en-publicaties.rss$ - [NC,G,L]

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            ErrorLog "|/usr/bin/logger -s -p local3.err -t 'Apache'"
            CustomLog "|/usr/bin/logger -s -p local2.info -t 'Apache'" combined
            ServerSignature Off
        </VirtualHost>

    The desired end result: for /news.rss and /documenten-en-publicaties.rss, a 410 response whose error page has content and is served with content type 'application/rss+xml'.
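    A quick way to confirm what Apache actually returns for those two feeds is to inspect the response headers from the command line; the hostname below is a placeholder for the real vhost:

        # Show status line and Content-Type only (replace example.org with the real host)
        curl -s -D - -o /dev/null http://example.org/news.rss | grep -E 'HTTP/|Content-Type'
        curl -s -D - -o /dev/null http://example.org/documenten-en-publicaties.rss | grep -E 'HTTP/|Content-Type'
        # The desired result would show "HTTP/1.1 410 Gone" together with "Content-Type: application/rss+xml"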


  • Need help with some IIS7 web.config compression settings.

    - by Pure.Krome
    Hi folks, I'm trying to configure my IIS7 compression settings in my web.config file. I want HTTP 1.0 requests to be gzipped as well. MSDN has all the info about it. Is it possible to have this config info in my own website's web.config file, or do I need to set it at an application level? Currently I have this code in my web.config:

        <system.webServer>
            <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" />
            <httpCompression cacheControlHeader="max-age=86400"
                             noCompressionForHttp10="False"
                             noCompressionForProxies="False"
                             sendCacheHeaders="true" />
            ... other stuff snipped ...
        </system.webServer>

    It's not working: HTTP 1.1 requests are getting compressed, just not 1.0. The MSDN page above says the setting can be used in:

    - Machine.config
    - ApplicationHost.config
    - Root application Web.config
    - Application Web.config
    - Directory Web.config

    So, can we set these settings on a per-website basis in a web.config file? (This is an Application Web.config file.) What have I done wrong? Cheers :)

    EDIT: I was asked how I know HTTP 1.0 is not getting compressed. I'm using Failed Request Tracing rules, which report back:

        DYNAMIC_COMPRESSION_START
        DYNAMIC_COMPRESSION_NOT_SUCESS
            Reason: 3
            Reason: NO_COMPRESSION_10
        DYNAMIC_COMPRESSION_END
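    One way to reproduce the symptom outside of Failed Request Tracing is to compare an HTTP/1.0 and an HTTP/1.1 request for the same URL and check whether a Content-Encoding header comes back; the URL is a placeholder:

        # HTTP/1.1 request (currently comes back gzip-compressed)
        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://example.org/default.aspx | grep -i content-encoding
        # Same request forced to HTTP/1.0 (currently comes back uncompressed)
        curl -s -D - -o /dev/null --http1.0 -H "Accept-Encoding: gzip" http://example.org/default.aspx | grep -i content-encoding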


  • Kernel panic while loading Mac OS X on VMWare

    - by Vladimir Gritsenko
    I'm trying to run Mac OS X 10.6.3 on VMware 7.1 from within Windows 7 64-bit. Booting OS X doesn't work: shortly after booting, VMware announces the machine's death with this pop-up message:

        The CPU has been disabled by the guest operating system. You will need to power off or reset the virtual machine at this point.

    The console has a more interesting announcement:

        The real-time clock was not properly initialized on your system!

    along with a dump of the detected CPU speed (~2.9 GHz), FSB (~94 MHz) and bus ratio (31). Somehow, the code that panics is documented in an accidental .diff file here. Apparently, it commits seppuku if the bus ratio is greater than 30. I underclocked my E6500 to a slower speed, but this apparently didn't make any difference. I can think of two possibilities right now:

    - The CPU info being read is constant and defines the maximum ability of the CPU, in which case it appears I'm screwed.
    - VMware presents its own CPU info to the machine, which I can perhaps somehow change. If so, how?

    If these sound completely off, that's because I'm a real newbie in these matters. Here's hoping you guys can show me the light :-)


  • Apache Derby running within Tomcat causes shutdown issues

    - by Luke
    I have set up Derby Network Server to be hosted within a Tomcat environment. This works great. However, when I shut down Tomcat I get the following errors:

        04/01/2011 10:41:41 AM org.apache.catalina.core.StandardService stop
        INFO: Stopping service Catalina
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        SEVERE: The web application [/derby] registered the JDBC driver [org.apache.derby.jdbc.ClientDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc
        SEVERE: The web application [/derby] registered the JDBC driver [org.apache.derby.jdbc.AutoloadedDriver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [derby.NetworkServerStarter] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [NetworkServerThread_4] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_5] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
        SEVERE: The web application [/derby] appears to have started a thread named [DRDAConnThread_13] but has failed to stop it. This is very likely to create a memory leak.
        04/01/2011 10:41:41 AM org.apache.coyote.http11.Http11Protocol destroy
        INFO: Stopping Coyote HTTP/1.1 on http-8080

    I'm currently starting and stopping Tomcat with the following commands:

        ./catalina run
        ./catalina stop

    Is there a better way to shut down Tomcat with Derby, or can this be solved by a configuration change?
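    For what it's worth, the Derby network server can also be asked to shut down explicitly before Tomcat is stopped; a minimal sketch, assuming the default port 1527 and placeholder jar paths:

        # Ask the Derby network server to shut down cleanly
        java -cp /path/to/derbynet.jar:/path/to/derby.jar \
            org.apache.derby.drda.NetworkServerControl shutdown -h localhost -p 1527
        # Then stop Tomcat as usual
        ./catalina stop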


  • OpenLDAP: how to allow both secure (TLS) and insecure (plain) connections?

    - by Mikael Roos
    I installed OpenLDAP 2.4 on FreeBSD 8.1. It works for ordinary connections OR for TLS connections; I can switch between the two by (un)commenting the following lines in slapd.conf:

        # Enable TLS
        #security ssf=128
        # Disable TLS
        security ssf=0

    Is there a way to allow clients to connect using TLS OR no TLS? Can the LDAP server be configured to support both TLS and non-TLS connections? I tried to find the information in the manual, but failed:

        http://www.openldap.org/doc/admin24/access-control.html#Granting%20and%20Denying%20access%20based%20on%20security%20strength%20factors%20(ssf)
        http://www.openldap.org/doc/admin24/tls.html#Server%20Configuration

    I also tried to read up on 'security' in the man page for ldap.conf, but didn't find the info there either. I guess I need to configure 'security' with some negotiation mechanism: "try to use TLS if the client has it, otherwise continue without TLS". Connecting with a client (when slapd.conf is configured to use TLS):

        gm# ldapsearch -x -b '' -s base '(objectclass=*)' namingContexts
        ldap_bind: Confidentiality required (13)
            additional info: TLS confidentiality required
        gm# ldapsearch -Z -x -b '' -s base '(objectclass=*)' namingContexts
        (this works, -Z makes a TLS connection)

    So, can I have my LDAP server support client connections both with TLS and without? Thanks in advance.
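    If the global "security ssf=..." requirement is dropped, both kinds of connection can be exercised from the client side to confirm the server accepts them. A sketch, assuming a placeholder host name and that slapd listens on both plain and TLS URIs (e.g. started with -h "ldap:/// ldaps:///"):

        # Plain (no TLS) search
        ldapsearch -x -H ldap://ldap.example.org -b '' -s base '(objectclass=*)' namingContexts
        # Same search, requiring StartTLS to succeed
        ldapsearch -x -ZZ -H ldap://ldap.example.org -b '' -s base '(objectclass=*)' namingContexts
        # Same search over ldaps:// (needs the TLS listener and a trusted certificate)
        ldapsearch -x -H ldaps://ldap.example.org -b '' -s base '(objectclass=*)' namingContexts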


  • Lost partition after restarting

    - by nxhoaf
    I have Windows 7 Professional with Service Pack installed on my Lenovo ThinkPad T420 laptop. After formatting the disk and installing Windows 7, I went to Computer -- Manage -- Storage -- Disk Management to split my 300 GB C partition into two partitions: C (162 GB) and E (140 GB). It worked fine for about two days. Today, when I turned on my computer, I was very surprised to find that the E partition had disappeared. I can confirm that I didn't do anything unusual yesterday, and before I shut down my computer everything was fine. In general, here is what I did since formatting the disk and installing Windows:

    - Format the 300 GB hard disk
    - Install Windows 7
    - Install Eclipse, DB2, ... (I'm a developer)
    - Install some other tools (OpenOffice, Skype...)
    - Install PGP (http://www.symantec.com/encryption) <-- I'm forced to use that due to my company policy
    - Use Computer -- Manage -- Storage -- Disk Management to split my 300 GB C partition into two partitions as described above

    It worked quite well for the last two days, until today. Can you please help me recover my lost partition? Thank you! For more info, here is my partition info; you can also see the image here.


  • Update all the virtual servers through one main machine (VirtualBox / storage area network)

    - by Mr.Calm
    I'm using Ubuntu and Oracle VirtualBox, and I use this script to start nginx inside a VirtualBox guest (placed in ~/init.d inside the guest):

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          Testinit
        # Required-Start:
        # Required-Stop:
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO
        #
        RETVAL=0;
        start() {
            CurrentTime=$(date +%d/%m/%Y"-"%I:%M:%S)
            ./usr/local/nginx/sbin/nginx
            echo "Current Time:"$CurrentTime >> /home/server/Desktop/NginxLogs.txt
            echo "!Starting nginx!" >> /home/server/Desktop/NginxLogs.txt

    I want to write a setup script like this (setup.sh) and place it in all the virtual machines on my system -- for example 8 VirtualBox guests, all with nginx installed. The problem is that whenever I want to change something in setup.sh, I have to go into each and every virtual machine, or reach each one over SSH from my main machine. I'm thinking of writing another script (e.g. Update.sh) which takes the path of a file that was saved and recently edited on the main machine (e.g. DummySetup.sh); as soon as I run that script, the setup.sh saved in each virtual machine should have its contents replaced with DummySetup.sh's contents. I hope this is possible. Help would be appreciated, thank you.
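    A minimal sketch of the kind of Update.sh being described, assuming SSH key-based access to each guest and placeholder host names and paths:

        #!/bin/bash
        # Push a locally edited script to the same path on every VirtualBox guest.
        SRC=/home/server/DummySetup.sh            # file edited on the main machine (placeholder)
        DEST='~/init.d/setup.sh'                  # where each guest keeps its copy (placeholder)
        HOSTS="vm1 vm2 vm3 vm4 vm5 vm6 vm7 vm8"   # guest host names or IPs (placeholders)

        for h in $HOSTS; do
            scp "$SRC" "$h:$DEST" && ssh "$h" "chmod +x $DEST" \
                || echo "update failed on $h" >&2
        done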


  • QuickTime Player sounds much better than iTunes

    - by Gene Goykhman
    I am playing a 320 kbps encoded MP3 in iTunes and the sound is substantially worse than the exact same file played back in QuickTime Player (Mac OS X 10.8.5). I have maxed out the system volume and the iTunes playback volume, and I have disabled all the audio processing features in iTunes (equalization, Sound Enhancer, etc.). The audio coming from iTunes still sounds resampled and/or processed, whereas QuickTime Player appears to be playing it "as is". Even when I Get Info on the MP3 file in Finder and play it back directly from the Get Info window, it sounds good. It's just iTunes that seems to be mangling the song. I can notice a difference on virtually all my music, so it's not just one particular MP3. I suspect the issue is that iTunes is doing some kind of audio processing, but I can't find a way to turn it off. This is the newest iTunes (11.1), but the problem has probably been going on for a while... I just switched to decent earbuds and started noticing the difference. What's the best way to force iTunes to play back the file as-is, or as close as possible to how QuickTime Player/Finder would play it?


  • Sendmail in Nexenta core 3.0.1

    - by maximdim
    I'm trying to set up sendmail on Nexenta Core 3.0.1 (a Solaris-based OS). All I want is to be able to send emails from that host -- notifications about failures, cron job output, etc. Nexenta Core doesn't ship with sendmail, so here is what I've done:

        apt-get install sunwsndmu

    Now there is a sendmail binary in /usr/sbin/sendmail. When I try to send email from the command line:

        $ mail maxim
        test
        .

    it doesn't give me any error, but in the log file I see:

        Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: from=maxim, size=107, class=0, nrcpts=1, msgid=<201012201741.oBKHf8u7012295@nas>, relay=maxim@localhost
        Dec 20 12:41:08 nas sendmail[12295]: [ID 801593 mail.info] oBKHf8u7012295: to=maxim, ctladdr=maxim (1000/10), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30107, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection refused by [127.0.0.1]

    So I guess I need to have an SMTP service running. How do I do that on Nexenta?

        svcs -a | grep sendmail

    doesn't return anything, and:

        # svcadm enable sendmail
        svcadm: Pattern 'sendmail' doesn't match any instances

    I'm not married to sendmail, so if there are easier ways to achieve my goal I'm open to suggestions as well. Thanks.
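    On a stock Solaris 10 system the sendmail daemon lives under the SMF FMRI svc:/network/smtp:sendmail; whether Nexenta Core ships that manifest is an assumption, but it is worth looking for it explicitly:

        # Look for the SMTP service by FMRI rather than by the word "sendmail"
        svcs -a | grep -i smtp
        # If the instance exists but is disabled, enable it and check its state
        svcadm enable svc:/network/smtp:sendmail
        svcs -l svc:/network/smtp:sendmail
        # The local mail queue can then be inspected with
        mailq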


  • "cannot receive new filesystem stream: invalid backup stream" error when unpacking flash archive on solaris 10

    - by Bovril
    I've searched around but I'm having no luck with some peculiar behavior of a flash archive. I'm using HP Server Automation 9.14 to deploy the OS, and I'm creating a Solaris 10 flash archive to serve as a snapshot default build in our environment. I create the flash archive with:

        # flar create -c -S -n g8-solaris10-u10 g8-solaris10-u10.flar

    It seems to create the file without any problems (exit status 0). When deploying to a new system (same hardware), it extracts to a point and then bails. The last errors in the log I can see are:

        Extracted 2047.00 MB ( 82% of 2488.98 MB archive)
        ERROR: Could not read file (172.27.118.100:/media/opsware/sunos/flar/g8-solaris10-u10.flar
        ERROR: Errors occurred during the extraction of flash archive.
               The file /tmp/flash_errors contains the list of errors encountered
        ERROR: Could not extract Flash archive
        ERROR: Flash installation failed

    The error log contained the following message:

        cannot receive new filesystem stream: invalid backup stream

    A previous version of this flash archive (1.8 GB) worked OK, so I suspect size may be a factor. The source system (the one the flash archive is an image of) is an HP BL460c Gen8; some more info below.

    OS version info:

        # uname -a
        SunOS testhostname 5.10 Generic_147441-01 i86pc i386 i86pc
        # who -r
        .        run-level 3  Oct 15 08:15     3      0  S

    Disks:

        # echo | format
        Searching for disks...done

        AVAILABLE DISK SELECTIONS:
               0. c0t0d0 <DEFAULT cyl 17841 alt 2 hd 255 sec 63>
                  /pci@0,0/pci8086,3c06@2,2/pci103c,3355@0/sd@0,0
        Specify disk (enter its number):
        Specify disk (enter its number):

    Zpools:

        # zpool list
        NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
        rpool   136G  24.6G   111G    18%  ONLINE  -

    Zones:

        # zoneadm list -cv
          ID NAME             STATUS     PATH                           BRAND    IP
           0 global           running    /                              native   shared

    The extracted size of 2047 seems suspiciously close to 2048, which is concerning. Any help would be greatly appreciated. Thanks.
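    One quick sanity check on the archive itself is to read back its identification section on the source host (or any Solaris 10 box) and compare it with the actual file size; the file name matches the one used above:

        # Print the archive's identification section; a truncated or corrupt
        # archive will often fail or report inconsistent sizes here.
        flar info g8-solaris10-u10.flar
        # Compare with the on-disk size of the archive
        ls -l g8-solaris10-u10.flar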


  • .htaccess error "not allowed here" for all instructions

    - by andres descalzo
    I am using Debian Lenny and Apache 2. I changed the default site file with:

        AllowOverride AuthConfig

    But I always get the error message "not allowed here" when putting any instructions in the .htaccess file.

    EDIT: the default site file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost

            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                Order allow,deny
                Allow from all
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks Includes
                #AllowOverride All
                #AllowOverride Indexes AuthConfig Limit FileInfo
                AllowOverride AuthConfig
                Order allow,deny
                Allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    .htaccess:

        #Options +FollowSymlinks
        # Prevent directory listing
        Options -Indexes
        # Prevent direct access to files
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        # SEO URL Settings
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]

    PHP info:

        apache2handler
        Apache Version = Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny10 with Suhosin-Patch
        Apache API Version = 20051115
        Server Administrator = webmaster@localhost
        Hostname:Port = hw-linux.homework:80
        User/Group = www-data(33)/33
        Max Requests = Per Child: 0 - Keep Alive: on - Max Per Connection: 100
        Timeouts = Connection: 300 - Keep-Alive: 15
        Virtual Server = Yes
        Server Root = /etc/apache2
        Loaded Modules = core mod_log_config mod_logio prefork http_core mod_so mod_alias mod_auth_basic mod_authn_file mod_authz_default mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_cgi mod_deflate mod_dir mod_env mod_mime mod_negotiation mod_php5 mod_rewrite mod_setenvif mod_status
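    Two quick checks from a shell on a Lenny box like this one, sketched with the Debian default paths: whether mod_rewrite is enabled in the running configuration, and which AllowOverride values actually apply:

        # Is mod_rewrite loaded? (the PHP info above lists it, this confirms it for the live config)
        apache2ctl -M 2>/dev/null | grep rewrite
        # Which config files set AllowOverride, and to what?
        grep -Rn "AllowOverride" /etc/apache2/
        # After editing sites-available/default, reload Apache
        /etc/init.d/apache2 reload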


  • How can I remotely tell what brand/model internal SCSI card is installed in a machine?

    - by edmicman
    I am doing some consulting work for a previous employer, upgrading and migrating old servers to new hardware. There is an existing file server (HP ProLiant DL380) that has a tape backup drive connected; it uses a SCSI interface and I'm pretty sure it's an internal SCSI card. They are upgrading to new server hardware (HP ProLiant DL160 G6). The old server is 2U, the new one 1U, and we want to move the tape drive to the new server too.

    I'm trying to figure out whether the SCSI card in the old server could be installed in the new one or whether we'll need to source a new card; mostly I don't know the height of the card and whether it's low-profile enough to fit in the new server. There is not much of a technical resource on site and the old server is in use anyway, so I would like to avoid making a trip in myself or having someone on site pop open the case and tell me what card is there. It's running Windows Server 2003 -- is there a way to tell from, say, Device Manager what make and model the SCSI card might be? Or any other system diagnostic program that would give me hardware info like that? Thanks for any info!
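    One remote-friendly option, sketched on the assumption that WMI access to the Server 2003 box is available, is to query the SCSI controller class from a command prompt:

        rem Locally on the 2003 server (WMIC ships with Server 2003)
        wmic path Win32_SCSIController get Name,Manufacturer,DriverName
        rem Or remotely, if WMI/DCOM access is allowed (SERVERNAME is a placeholder)
        wmic /node:SERVERNAME path Win32_SCSIController get Name,PNPDeviceID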


  • Can't reset Windows 7 Registry permissions.

    - by n10i
    hi all, i am trying to reset win 7 registry permissions using

        secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose /areas REGKEYS

    But i am receiving the following error:

        An extended error has occurred.
        The task has completed with an error.
        See log %windir%\security\logs\scesrv.log for detail info.

    The content of the log file:

        -------------------------------------------
        Friday, April 16, 2010 1:50:43 PM
        ----Configuration engine was initialized successfully.----
        ----Reading Configuration Template info...
        ----Configure 64-bit Registry Keys...
        Configure users.default.
        Warning 5: Access is denied.
        Error taking ownership of users.default\software\SetID.
        Warning 5: Access is denied.
        Error opening users.default\software\SetID.
        Warning 5: Access is denied.
        Error setting security on users.default\software\SetID.
        Configure machine\software.
        Warning 5: Access is denied.
        Error setting security on machine\software.
        Warning 1336: The access control list (ACL) structure is invalid.
        Error setting security on machine\software\Macrovision.
        Configuration of Registry Keys was completed with one or more errors.
        ----Configure 32-bit Registry Keys...
        Configure machine\software.
        Warning 1336: The access control list (ACL) structure is invalid.
        Error setting security on machine\software\Audible.
        Configuration of Registry Keys was completed with one or more errors.
        ----Un-initialize configuration engine...

    plz! help me guys!


  • Failed to mount NFS share with "Program not Registered"

    - by Farrel
    I'm trying to set up an NFS server on Fedora 17 and I'm getting a "Program not Registered" error when I try to mount. I guess the main reason for this is rpcbind. I'm a newbie in Linux, so I don't know what info I should provide you with. Here is some info that might be useful.

        rpcinfo -p
           program vers proto   port  service
            100000    4   tcp    111  portmapper
            100000    3   tcp    111  portmapper
            100000    2   tcp    111  portmapper
            100000    4   udp    111  portmapper
            100000    3   udp    111  portmapper
            100000    2   udp    111  portmapper
            100005    1   udp  20048  mountd
            100005    1   tcp  20048  mountd
            100005    2   udp  20048  mountd
            100005    2   tcp  20048  mountd
            100005    3   udp  20048  mountd
            100005    3   tcp  20048  mountd
            100024    1   udp  42223  status
            100024    1   tcp  50054  status

        cat /etc/exports
        /home/Farrel/prog 192.168.xxx.xxx (ro,sync)

        service nfs status
        Redirecting to /bin/systemctl status nfs.service
        nfs-server.service - NFS Server
            Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled)
            Active: active (exited) since Fri, 02 Nov 2012 09:29:04 +0300; 5min ago
            Process: 924 ExecStartPost=/usr/lib/nfs-utils/scripts/nfs-server.postconfig (code=exited, status=0/SUCCESS)
            Process: 909 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS ${RPCNFSDCOUNT} (code=exited, status=0/SUCCESS)
            Process: 885 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
            Process: 864 ExecStartPre=/usr/lib/nfs-utils/scripts/nfs-server.preconfig (code=exited, status=0/SUCCESS)
            CGroup: name=systemd:/system/nfs-server.service
        Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

    The firewall is disabled on both systems. I spent a lot of time reading on the topic, but all the manuals on setting up an NFS server lead to this "Program not Registered" error. Any how-to-fix-it ideas?
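    For completeness, the same registration can be checked from the client side; a sketch with a placeholder server address:

        # Ask the server's rpcbind which programs are registered (nfs should appear as program 100003)
        rpcinfo -p 192.168.1.10
        # Ask mountd which exports the server offers
        showmount -e 192.168.1.10
        # Typical mount attempt for the export above
        sudo mount -t nfs 192.168.1.10:/home/Farrel/prog /mnt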


  • Windows 2012 Server: Enable .NET 3.5

    - by Meengla
    I have a Windows Server 2012 machine which needs .NET 3.5 installed. For background info/solutions, please see this: http://sqlblog.com/blogs/sergio_govoni/archive/2013/07/13/how-to-install-netfx3-on-windows-server-2012-required-by-sql-server-2012.aspx

    I have tried this but it doesn't work for me. Here is what I am trying: using a Windows 2012 ISO file, I 'mount' it on the 2012 server as the D: drive and then try both the GUI and the command prompt. In the GUI case, I specified the 'alternate source' path to the D drive's sources\sxs folder, but that failed without giving enough info. In the command prompt case, here is what's happening:

        dism /online /enable-feature /featurename:NetFx3 /source:d:\sources\sxs

    I get the error: Installed but Parent feature not enabled. So I tried another approach:

        dism /online /enable-feature /featurename:NetFx3 /all /source:d:\sources\sxs

    The above command, per http://blogs.technet.com/b/askcore/archive/2012/05/14/windows-8-and-net-framework-3-5.aspx, is supposed to enable parent features; but running this I get an error like 'source not found'. Is there some error in my second command? What else could I do? This is Windows Server 2012 Standard edition. Thanks!
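    A variant worth trying, sketched on the assumption that the mounted ISO really is drive D: and matches the installed edition, is to add /LimitAccess so DISM uses only the local source instead of also contacting Windows Update:

        dism /online /enable-feature /featurename:NetFx3 /all /limitaccess /source:d:\sources\sxs
        rem If it still reports that the source was not found, confirm the folder exists:
        dir d:\sources\sxs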


  • Ubuntu: Getting rid of a mimetype entry

    - by Epaga
    I have a pesky mimetype entry that I can't seem to get rid of. Here is the current situation:

        xdg-mime query filetype myfile.mfe
        application/pesky

    Using assogiate I have found the information about this MIME type entry (but can't delete it there). I have the following 'pesky.xml' file, which was used to create the MIME type (as far as I can tell, since it exactly matches the entry in assogiate):

        <?xml version='1.0'?>
        <mime-info xmlns='http://www.freedesktop.org/standard'>
            <mime-type type="application/pesky">
                <comment>my pesky type</comment>
                <glob pattern="*.mfe"/>
                <magic priority="100">
                    <match type="string" offset="0" value="application/pesky"/>
                </magic>
            </mime-type>
        </mime-info>

    However, the following has no effect:

        sudo xdg-mime uninstall --mode system --novendor pesky.xml

    The file association remains. Any ideas?
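    As a diagnostic sketch (the paths below are the standard freedesktop.org locations, not something taken from the question), it can help to find where the type is actually installed and to rebuild the MIME database afterwards:

        # Look for the installed copy of the type in the system and per-user MIME databases
        ls -l /usr/share/mime/application/pesky.xml ~/.local/share/mime/application/pesky.xml 2>/dev/null
        grep -Rl "application/pesky" /usr/share/mime/packages ~/.local/share/mime/packages 2>/dev/null
        # After removing the offending package file, rebuild the databases
        sudo update-mime-database /usr/share/mime
        update-mime-database ~/.local/share/mime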


  • Cannot Install Bootcamp on Mac Fusion Drive

    - by user3377019
    I have two drives installed in my mid-2013 MacBook Pro, running OS X Mavericks. I set the two drives (an SSD and a regular HD) up as a Fusion Drive (following the DIY Fusion Drive guides out there). I went ahead and created a 40 GB partition to host my Windows install. I removed the SSD, installed Windows, then reinstalled the SSD. As soon as the SSD was placed back in, the Boot Camp partition stopped booting: I get a blinking cursor on a black screen. I checked the partition info in Disk Utility and it appears that the Windows partition is not marked bootable. Below is some info I managed to gather. I am wondering if there is a way to fix the partition table so my Boot Camp will boot.

        /dev/disk0
           #:                       TYPE NAME                    SIZE       IDENTIFIER
           0:      GUID_partition_scheme                        *120.0 GB   disk0
           1:                        EFI EFI                     209.7 MB   disk0s1
           2:          Apple_CoreStorage                         119.7 GB   disk0s2
           3:                 Apple_Boot Boot OS X               134.2 MB   disk0s3
        /dev/disk1
           #:                       TYPE NAME                    SIZE       IDENTIFIER
           0:      GUID_partition_scheme                        *500.1 GB   disk1
           1:                        EFI EFI                     209.7 MB   disk1s1
           2:       Microsoft Basic Data BOOTCAMP                40.0 GB    disk1s2
           3:          Apple_CoreStorage                         459.2 GB   disk1s3
           4:                 Apple_Boot Boot OS X               650.0 MB   disk1s4
        /dev/disk2
           #:                       TYPE NAME                    SIZE       IDENTIFIER
           0:                  Apple_HFS Macintosh HD           *573.4 GB   disk2

        Name:                        BOOTCAMP
        Type:                        Partition
        Disk Identifier:             disk1s2
        Mount Point:                 /Volumes/BOOTCAMP
        File System:                 Windows NT File System (NTFS)
        Connection Bus:              SATA
        Device Tree:                 IODeviceTree:/PCI0@0/SATA@1F,2/PRT1@1/PMP@0
        Writable:                    No
        Universal Unique Identifier: 584BAED6-4C46-4F18-93B3-957F6E27003C
        Capacity:                    40 GB (39,998,980,096 Bytes)
        Free Space:                  16.34 GB (16,339,972,096 Bytes)
        Used:                        23.66 GB (23,659,003,904 Bytes)
        Number of Files:             86,424
        Number of Folders:           0
        Owners Enabled:              No
        Can Turn Owners Off:         No
        Can Be Formatted:            No
        Bootable:                    No
        Supports Journaling:         No
        Journaled:                   No
        Disk Number:                 1
        Partition Number:            2


  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to backup files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error.

    The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the netapp root) and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages. The session is closed immediately after it has been granted.

        16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
        16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running wireshark on the backup server doesn't produce much. It shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The Resource Credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All" it fails. If I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS. Here is the options ndmp output:

        FAS2020-1 options ndmp
        ndmpd.access                 all
        ndmpd.authtype               challenge
        ndmpd.connectlog.enabled     on
        ndmpd.enable                 on
        ndmpd.ignore_ctime.enabled   off
        ndmpd.offset_map.enable      on
        ndmpd.password_length        16
        ndmpd.preferred_interface    disable
        ndmpd.tcpnodelay.enable      off


  • Google Maps mashup for notes/househunting

    - by afray
    I'm house-hunting at the moment and I'm trying to geek it out, er, I mean streamline the decision-making process. I'm currently using the Google Maps "My Maps" feature to store pins for properties. I create one map per estate agent, then put the relevant info into the individual pins. The idea being: I can look at the map and quickly choose which property to view next. However, the pins don't currently link back to the map that owns them, so you have to hunt a bit to get the estate agent info; it's a hassle to get all the maps displayed in a new session if you have lots of agents; and each pin doesn't automatically show its bubble, so you have to do lots of clicking to see all the info you want.

    I've tried Evernote, but despite its tag system initially showing some promise, I can't find a way to seamlessly integrate maps. A few Google searches don't turn anything up either. Even the big sites, like http://www.rightmove.co.uk, don't seem to provide any maps integration by default: you can see an individual house's location, but not all results of a search. So is there a web site or Windows program I could use to do something like this? Viewing all properties on a map is a must, as is quick access to contact details.


  • ping/SSH networking problem with a server from one particular Windows XP laptop

    - by user47650
    I am experiencing an odd problem with one specific server at my data centre when connecting from my laptop. Basically the server is accessible from other machines in my house, but not from one particular laptop, which is running Windows XP. I have set up tcpdump on the server and Wireshark on the laptop, and I can see ping echo request and reply packets that actually make it back to Wireshark on the laptop, but nothing shows in the ping console output:

        $ ping xxx.55.32.255

        Pinging xxx.55.32.255 with 32 bytes of data:

        Request timed out.
        Request timed out.
        Request timed out.
        Request timed out.

        Ping statistics for xxx.55.32.255:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    But I can see from Wireshark on my laptop that the ping reply gets back:

        No.  Time      Source          Destination     Protocol  Info
        46   3.964474  192.168.1.64    xxx.55.32.255   ICMP      Echo (ping) request

        Frame 46 (74 bytes on wire, 74 bytes captured)
        Ethernet II, Src: Intel_31:d3:01 (00:19:d2:42:c3:01), Dst: ThomsonT_01:b8:2c (00:14:7f:02:b9:3c)
        Internet Protocol, Src: 192.168.1.64 (192.168.1.64), Dst: xxx.55.32.255 (xxx.55.32.255)
        Internet Control Message Protocol

        No.  Time      Source          Destination     Protocol  Info
        48   4.119060  xxx.55.32.255   192.168.1.64    ICMP      Echo (ping) reply

        Frame 48 (74 bytes on wire, 74 bytes captured)
        Ethernet II, Src: ThomsonT_01:b8:2c (00:14:7f:01:b8:2c), Dst: Intel_21:c3:01 (10:20:d2:31:c3:01)
        Internet Protocol, Src: xxx.55.32.255 (xxx.55.32.255), Dst: 192.168.1.64 (192.168.1.64)
        Internet Control Message Protocol

    Obviously I have disabled the Windows firewall, and there is nothing in the Windows event log. There is nothing else obviously strange about the server, as it is the same build as other servers that I can connect to fine.
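    One low-effort check, offered as a sketch rather than a diagnosis: Wireshark captures in promiscuous mode, so it can show frames that the normal IP stack never accepts. Comparing the laptop's real MAC address and ARP cache against the addresses in the capture above may therefore be worthwhile:

        rem On the XP laptop: show the adapter's physical (MAC) address
        ipconfig /all
        rem Show which MAC addresses the laptop currently has cached
        arp -a
        rem Clear the ARP cache, then retry
        arp -d *
        ping xxx.55.32.255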


  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
        </Location>

    This works great: any user in our Active Directory can access our Subversion repository. Now, I want to limit this to only people in the Active Directory group Development:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
            Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com
        </Location>

    I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (single line broken up for easier reading):

        [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752]
        vauth_ldap authenticate: user dweintraub authentication failed;
        URI /svn/ [ldap_search_ext_s() for user failed][Bad search filter]

    And I get this in my access_log:

        10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401
        10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535

    Yes, I am in that group. (Or, at least, how can I confirm that just to make sure that's not the issue? I have the Sysinternals Suite ADExplorer; it's where I'm getting all of my info.)
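    One way to confirm the membership (and to see the exact DN of the Development group) from the Apache host is an ldapsearch with the same bind account; the filter and attribute are standard Active Directory ones, and the password is obviously a placeholder:

        ldapsearch -x -H ldap://ldap.vegibanc.com \
            -D "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com" \
            -w 'bind-password-here' \
            -b "dc=vegibanc,dc=com" \
            "(sAMAccountName=dweintraub)" memberOf
        # The memberOf values show the exact, comma-separated group DNs as AD knows them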


  • How to enable error log in lighttpd properly?

    - by Tomaszs
    I have a CentOS 5 system with lighttpd and FastCGI enabled. It logs access but does not log errors: I get Internal Server Error 500 with no info in the log, and when I try to open a non-existing file there is also no info in the error log. How do I enable it properly? Below is the list of modules that I've enabled:

        server.modules = (
            "mod_rewrite",
            "mod_redirect",
            "mod_alias",
        #   "mod_access",
        #   "mod_cml",
        #   "mod_trigger_b4_dl",
        #   "mod_auth",
            "mod_status",
            "mod_setenv",
            "mod_fastcgi",
        #   "mod_webdav",
        #   "mod_proxy_core",
        #   "mod_proxy_backend_fastcgi",
        #   "mod_proxy_backend_scgi",
        #   "mod_proxy_backend_ajp13",
        #   "mod_simple_vhost",
        #   "mod_evhost",
        #   "mod_userdir",
        #   "mod_cgi",
        #   "mod_compress",
        #   "mod_ssi",
        #   "mod_usertrack",
        #   "mod_expire",
        #   "mod_secdownload",
        #   "mod_rrdtool",
            "mod_accesslog" )

    Debug settings:

        ## enable debugging
        #debug.log-request-header     = "enable"
        #debug.log-response-header    = "enable"
        #debug.log-request-handling   = "enable"
        debug.log-file-not-found      = "enable"
        #debug.log-condition-handling = "enable"

    Paths to the error and access logs:

        ## where to send error-messages to
        server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log"

        #### accesslog module
        accesslog.filename = "/home/lxadmin/httpd/lighttpd/ligh.log"

    FastCGI settings:

        fastcgi.debug = 1
        fastcgi.server = ( ".php" =>
            (( "bin-path" => "/usr/bin/php-cgi",
               "socket" => "/tmp/php.socket",
               "max-procs" => 12,
               "bin-environment" => (
                   "PHP_FCGI_CHILDREN" => "2",
                   "PHP_FCGI_MAX_REQUESTS" => "500"
               )
            )))

    And in an included config file I have:

        server.errorlog = "/home/httpd/mywebsite.com/stats/mywebsite.com-error_log"

    As for the log files:

        /home/httpd/mywebsite.com/stats/
        -rw-r--r-- 1 apache apache 5173239 May 16 11:34 mywebsite.com-custom_log
        -rwxrwxrwx 1 root   root         0 Mar 27  2009 mywebsite.com-error_log

        /home/lxadmin/httpd/lighttpd/
        -rwxrwxrwx 1 apache apache    2184 Apr 22 22:59 error.log
        -rwxrwxrwx 1 apache apache 6088621 May 16 11:26 ligh.log

    I gave the error logs chmod 777 to check whether permissions were the issue, but apparently they're not. So my question is: what do I do to have the error log enabled?
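    Two quick shell checks that often help here, sketched with the paths from the config above (which user lighttpd runs as is an assumption worth verifying):

        # Validate the configuration, including any included files
        lighttpd -t -f /etc/lighttpd/lighttpd.conf
        # Check which user the server runs as and whether it can write to the log directories
        ps aux | grep '[l]ighttpd'
        ls -ld /home/lxadmin/httpd/lighttpd/ /home/httpd/mywebsite.com/stats/
        # Watch both candidate error logs while reproducing the 500
        tail -f /home/lxadmin/httpd/lighttpd/error.log /home/httpd/mywebsite.com/stats/mywebsite.com-error_log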


  • Win7 Command Prompt drives not available

    - by jmerrill
    I have the opposite problem compared to the author of this question: Hard drive access denied from Windows Explorer (but works from command prompt as Admin).

    I can see all the drive letters for a particular server in Windows Explorer, and can navigate through them exactly as would be expected. The drive letters are displayed in Explorer in parentheses to the right of the path info -- finalpathportion (\\server\otherpathportions) (driveletter:), e.g.:

        jmerrill (\\server\users) (H:)

    But the drive letters are not usable in a "Run as Administrator" command prompt. They have worked in the past, but I have since rebooted. I thought that perhaps I had to start a new command prompt after visiting them in Explorer, but that did not help. "net use" in the command prompt shows:

        Unavailable   H:   \\server\users\jmerrill   Microsoft Windows Network

    with similar info for the other drives. I can do:

        net use h: /d
        net use h: \\server\users\jmerrill

    for each drive to get the letters to be available in the command prompt. It is perhaps obvious that I don't think it should be necessary to do that. Does anyone have any ideas?
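    For reference, a commonly cited workaround for mapped drives not being shared between the normal and the elevated (UAC) token is the EnableLinkedConnections registry value; whether it is appropriate here is a judgment call, so treat this as a sketch:

        rem Run from an elevated prompt, then reboot
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v EnableLinkedConnections /t REG_DWORD /d 1 /f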


  • The RTL8111/8168B NIC under Linux and the r8168 driver

    - by nik
    So I've got one of the infamous Realtek RTL8168 Ethernet NICs, which have some problems under Linux. After some research, I found out I had to use the r8168 driver for this card (and not r8169, which still loads when nothing else is available), which I did. So now everything works fine... sort of. My download and upload rates are less than half of what I should get. When I test (with e.g. speedtest) I get something like 20M (often 15M) down and 30M up, but if I test under Windows (everything else identical: same Ethernet cable, same connection, at the same time of day (well, 5 minutes apart)...), I get 50M up/down, which is what I expect. Where can this come from? Here's some info:

        ~ # lspci
        [...]
        06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

        ~ # modinfo r8168
        filename:       /lib/modules/3.2.1-gentoo-r2/net/r8168.ko
        version:        8.027.00-NAPI
        license:        GPL
        description:    RealTek RTL-8168 Gigabit Ethernet driver
        author:         Realtek and the Linux r8168 crew <[email protected]>
        srcversion:     0A6E9F1D4E8E51DE4B6BEE3
        alias:          pci:v00001186d00004300sv00001186sd00004B10bc*sc*i*
        alias:          pci:v000010ECd00008168sv*sd*bc*sc*i*
        depends:
        vermagic:       3.2.1-gentoo-r2 SMP mod_unload
        [...]

        ~ # mii-tool -v
        eth0: negotiated 100baseTx-HD, link ok
          product info: vendor 00:07:32, model 17 rev 4
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
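    The mii-tool output above shows a link negotiated at 100 Mbit half duplex, so one thing worth checking (a sketch, not a diagnosis) is what the kernel reports for speed/duplex and whether re-negotiating or forcing the link changes the throughput:

        # Current speed, duplex and advertised modes as seen by the kernel
        ethtool eth0
        # Restart autonegotiation
        ethtool -r eth0
        # For a test only: force 100 Mbit full duplex
        ethtool -s eth0 speed 100 duplex full autoneg off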


  • Emails sent through SMTP on VPS are considered to be spam

    - by Ilya
    As part of our business we have to send regular mailings to our clients: invoices, informational emails, etc. Previously we received and sent emails using our hosting provider's mail server, but as the number of clients increased we had to order a VPS and install our own SMTP server there for performing our mailings.

    So now we have the provider's default mail server for receiving email, let it be business.com. We have email accounts like [email protected], etc., and we use this mail server to receive email and manage our accounts. We also have an SMTP server running on the VPS, which we use only for sending emails with From addresses like [email protected]. The VPS has the default DNS records created by the provider, let it be IP.AD.RE.SS <- ip-ad-re-ss.provider.com. Mailings are made using either desktop email clients or a custom Java-based application which uses JavaMail for sending.

    The problem is that most of the emails we send are placed in spam folders in our clients' email accounts (Gmail, Yahoo, Hotmail, etc.). Could you please tell me the most probable reason for, and solution to, this problem? Is there any service on the Internet where we can send a test email and get back a description of why that email could be considered spam?
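    Spam filters weigh sender DNS heavily, so a common first check (a sketch with placeholder values; the real VPS IP and domain go in their place) is whether the sending IP's reverse DNS and the domain's SPF record line up with what the SMTP server announces:

        # Reverse DNS of the VPS IP (ideally a name under your own domain)
        dig -x 203.0.113.10 +short
        # SPF record for the domain used in the From: address
        dig TXT business.com +short
        # If the MTA is Postfix (an assumption), the name it announces in HELO/EHLO is set by myhostname
        postconf myhostname 2>/dev/null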

