Search Results

Search found 66534 results on 2662 pages for 'document set'.

Page 545/2662 | < Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >

  • Use /etc/aliases to send to a mailing-list using sendmail

    - by Pixoo
    I'm trying to configure my Debian (6.0) server to forward emails sent to root to a mailing list. I set the list address in /etc/aliases like this:

      root: [email protected]

    and then ran newaliases. It doesn't work, but sending to the list directly from the command line does work:

      echo "test" | mail -s "test message" [email protected]

    If I put my own email address in /etc/aliases instead, that works too. Any idea where the problem could come from? Thanks
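
    In case the exact steps matter, this is what I did, plus the verification I intend to run next (the list address is a placeholder):

      # relevant line in /etc/aliases
      root: [email protected]

      # rebuild the aliases database
      newaliases

      # ask sendmail to expand/verify the alias without actually sending anything
      sendmail -bv root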

    Read the article

  • OpsCenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th, with the default nine days time to completion, and repairs used to complete properly. However, since May 22nd, it is being disabled automatically by OpsCenter. From /var/log/opscenter/opscenterd.log:

      [...]
      2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
      2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
      2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
      2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
      [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

      [...]
      2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
      2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
      2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
      2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
      2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
      [...]

    On the nodes on which the repair fails, from /var/log/cassandra/system.log:

      ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
      java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
          at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
          at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
          at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
          at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
          at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
          at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
          at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which only occur while the repair service is running, are the only errors these nodes experience. Outside of the repair task, the Cassandra cluster works perfectly. I am running OpsCenter 4.1.2 with a six-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1 and 4.0.2), but I thought it was caused by some configuration error on my part; the problem appeared spontaneously at some point there as well. The Cassandra cluster has been working very smoothly with a good write throughput. It is very stable and has enough resources to work with, and we have not noticed any problems with the applications that depend on it.

    Read the article

  • Windows 7: Can I make it automatically download updates during one period and install them in another?

    - by askvictor
    When Windows 7 is configured to automatically update at a certain time (say Friday 5pm), is that also when it tries to download updates, or does it pull in the updates throughout the week and only install them at the update time? UPDATE: What I actually want is for the machines to download updates during one set of times (9am-3pm Mon-Fri) but install them during another set of times (e.g. 5pm Friday). Is there a way to achieve this?

    Read the article

  • Multiple user directories on EC2

    - by Joseph
    I'm trying to set up multiple user directories on EC2 running Ubuntu, but I'm not sure how to set it up correctly so that I can serve files in the following format: http://<ec2 ip address>/user_1/public_html/file1.html and http://<ec2 ip address>/user_2/public_html/file3.html, and so on for every user that I add. I tried looking for the httpd.conf file, but I couldn't find it; I only found apache2.conf. Thank you guys.
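
    What I'm thinking of trying in /etc/apache2/sites-available/default is one Alias per user, roughly like this (user names and paths are placeholders, and I haven't tested it yet):

      Alias /user_1/public_html /home/user_1/public_html
      <Directory /home/user_1/public_html>
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>

    followed by a reload of Apache (service apache2 reload), with one Alias/Directory pair per user. Is that a sane way to do it, or is there a cleaner per-user mechanism I should use instead?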

    Read the article

  • Default setting for dual monitors (using Win+P)

    - by Tomek
    I have two displays (one monitor and one TV) connected to my card via DVI. I switch between them with Win+P (when I'm using XBMC I set "only projector"). Most of the time I use it during late hours, so I just turn off the PC (with XBMC's "shutdown" option) and go to sleep. All fine, but the next day my display is still set to "only projector". Bottom line: how can I get Windows 7 to come up with "only monitor" each time the system boots?

    Read the article

  • MySQL query cache is enabled but not being used

    - by Yoga
    I've checked that the query cache is enabled:

      mysql> SHOW VARIABLES LIKE 'have_query_cache';
      +------------------+-------+
      | Variable_name    | Value |
      +------------------+-------+
      | have_query_cache | YES   |
      +------------------+-------+
      1 row in set (0.00 sec)

    But it seems it is not being used:

      mysql> SHOW STATUS LIKE 'Qcache%';
      +-------------------------+----------+
      | Variable_name           | Value    |
      +-------------------------+----------+
      | Qcache_free_blocks      | 1        |
      | Qcache_free_memory      | 16759648 |
      | Qcache_hits             | 0        |
      | Qcache_inserts          | 0        |
      | Qcache_lowmem_prunes    | 0        |
      | Qcache_not_cached       | 21555882 |
      | Qcache_queries_in_cache | 0        |
      | Qcache_total_blocks     | 1        |
      +-------------------------+----------+
      8 rows in set (0.00 sec)

    Any reason?
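
    One thing I still plan to check is the cache size and type, since (as far as I understand) have_query_cache only shows that the cache support is compiled in, not that it is switched on:

      SHOW VARIABLES LIKE 'query_cache%';

    and, if needed, set something like this in my.cnf (values are just examples) and restart mysqld:

      query_cache_type = 1
      query_cache_size = 16M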

    Read the article

  • A way to extract layer sets from PSD files without Photoshop?

    - by chaddjohnson
    Are there any tools for (in order of preference) Mac, Linux, or Windows to extract images per layer set from a PSD file, without using Photoshop? I had a web designer send me a PSD file containing multiple pages, and each page is apparently in a layer set; GIMP cannot handle these - it can't understand layer groupings. A command-line tool, a rinky-dink Windows tool... anything will work, so long as it's free (preferably) or under $30, doesn't leave watermarks or anything of the sort, and works well.

    Read the article

  • Remote access to server via service control panel for non-admin user in Windows 2008

    - by user2278
    I'm trying to configure my Windows 2008 servers so that my developers can view the status of services without needing to log on to the box or be admins. Unfortunately, the permissions set in Windows 2008 for remote non-admin users don't include the ability to enumerate or otherwise query services. This causes anything that contacts the SCM on the far end to fail (Win32_Service, sc.exe, services.msc, etc.). How do I set up permissions so that they can at least list the services and see whether they are running?

    Read the article

  • Office documents on intranet all requiring a second login and failing auth? Disable WebDAV?

    - by DOTang
    I am not sure what is going on, but recently users opening Office documents on our intranet get prompted a second time to log in, and according to the error logs it looks like it's trying to use WebDAV to open (an editable?) version of the document to save directly on the server. We have no SharePoint server set up or anything, and this shouldn't be happening. All I want is for the document to be saved, or opened from a local copy in temp, like normal. Here is the log:

      Line 57499: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0
      Line 57500: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0
      Line 57501: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0

    The log basically contains a bunch of these. How can I disable this behavior, so that downloaded Office documents aren't opened through WebDAV? Edit: I should clarify the behavior: the browser asks whether to save or open the document; upon choosing open, it asks to re-authenticate. You put in the user information and the login box comes up three times, acting as if you had entered the wrong password. For some users, after passing the login box the third time, the document still opens; for others, their browser just locks up. It also doesn't even look like WebDAV is installed on our server - I see no config options for it in IIS, as outlined on this page: http://learn.iis.net/page.aspx/350/installing-and-configuring-webdav-on-iis-7/#001
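
    If the server side turns out to be fine, one thing I'm considering (untested) is stopping the WebDAV redirector on the client machines, since Microsoft-WebDAV-MiniRedir in the log is, as far as I know, the client-side WebClient service:

      sc stop WebClient
      sc config WebClient start= disabled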

    Read the article

  • How to configure Mercurial access controls using Apache and hgweb?

    - by Gj1
    I have set up a Mercurial repo to be served using Apache + WSGI + hgweb on OS X. It is now completely open to anyone who stumbles upon my server on the correct port number. How can I set it up so that only people with a username and password pair that I approve can pull from and/or push to the repo? I know how to achieve this very easily using ssh, but in this specific case the requirement is that the solution doesn't require defining full-fledged user accounts on the machine for each person to whom I'd like to give access to the repo.
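
    The direction I'm leaning toward (untested, paths are placeholders) is plain HTTP basic auth in the Apache config plus hgweb's own push controls:

      # Apache: protect the hgweb location with an htpasswd file
      # (users created with: htpasswd -c /usr/local/etc/hg.htpasswd alice)
      <Location /hg>
          AuthType Basic
          AuthName "Mercurial repositories"
          AuthUserFile /usr/local/etc/hg.htpasswd
          Require valid-user
      </Location>

      # hgweb config / repo hgrc: let any authenticated user push
      [web]
      allow_push = *

    Does that sound like the right approach, or am I missing something?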

    Read the article

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With https (KeepAlive on), I can get over 3000 requests per second. However, with https (KeepAlive off), I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https:

      Server Software:        Apache/2.2.3
      Server Hostname:        127.0.0.1
      Server Port:            443
      SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256

      Document Path:          /hello.html
      Document Length:        29 bytes

      Concurrency Level:      5
      Time taken for tests:   30.49425 seconds
      Complete requests:      411
      Failed requests:        0
      Write errors:           0
      Total transferred:      119601 bytes
      HTML transferred:       11919 bytes
      Requests per second:    13.68 [#/sec] (mean)
      Time per request:       365.565 [ms] (mean)
      Time per request:       73.113 [ms] (mean, across all concurrent requests)
      Transfer rate:          3.86 [Kbytes/sec] received

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:      190  347   74.3    333    716
      Processing:     0   14   24.0      1    166
      Waiting:        0   11   21.6      0    165
      Total:        191  361   80.8    345    716

      Percentage of the requests served within a certain time (ms)
        50%    345
        66%    377
        75%    408
        80%    421
        90%    468
        95%    521
        98%    578
        99%    596
       100%    716 (longest request)
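
    One thing I plan to look at is whether the SSL session cache is configured, since (if I understand correctly) with KeepAlive off every request pays for a full handshake unless TLS sessions are reused. This is roughly the mod_ssl configuration I have in mind (untested on this box, path is a placeholder):

      SSLSessionCache         shmcb:/var/cache/mod_ssl/scache(512000)
      SSLSessionCacheTimeout  300

    Would that plausibly explain the gap, or should I look elsewhere first?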

    Read the article

  • Delete a Windows group in Active Directory

    - by Jim
    I am doing a cleanup of some AD groups that are no longer used. One of the AD groups I could not delete, because it seems that a member has this group set as their primary group (which I assume someone did by accident). Is there an easy way to find out who has this group set as primary?
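
    If nothing simpler exists, my fallback plan is a quick PowerShell query (just a sketch; the group name is a placeholder and it assumes the RSAT ActiveDirectory module), based on the idea that primaryGroupID stores the RID of the user's primary group:

      Import-Module ActiveDirectory
      $group = Get-ADGroup "Old Group"
      $rid = $group.SID.Value.Split('-')[-1]           # the group's RID is the last block of its SID
      Get-ADUser -Filter "primaryGroupID -eq $rid" | Select-Object SamAccountName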

    Read the article

  • Amazon S3 not sending Content-Type header

    - by Luke____
    I have an application that downloads content from various sources. It relies on the "Content-Type" header being set on images. The majority of web servers do this correctly, but it appears the Amazon S3 servers are not setting the Content-Type. I assume Amazon's servers are configured correctly, so what could be the problem? Are these images not being uploaded correctly? Or should I not be relying on the content type being set? Thanks
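
    For what it's worth, my current understanding is that S3 simply stores whatever Content-Type the uploader supplies at PUT time and serves it back verbatim, so if nothing was supplied the header ends up missing or generic. Purely as an illustration (bucket and key are placeholders), an upload that sets it explicitly would look something like:

      aws s3 cp ./photo.jpg s3://example-bucket/images/photo.jpg --content-type image/jpeg

    Does that match how others see S3 behaving?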

    Read the article

  • IPv6 - using a tunnel broker / DNS to provide IPv4 compatibility

    - by Bgnt44
    I have a tunnel broker account from he.net associated with an IPv6 /64 subnet. As a newbie to IPv6, I've just discovered that it's not reliable to point a subdomain only at an AAAA record, because many ISPs will not be able to reach it. Considering that I have 3 VMs, each with an IPv6 address, I would like to know if there is any way to set up my DNS to handle that. I only have one IPv4 address, which is bound to the firewall; maybe the tunnel can map hostnames/IPv4 to IPv6? Thank you
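
    What I'm imagining (untested, names and addresses are placeholders) is publishing both records for each hostname, with the A record pointing at the single IPv4 on the firewall (which would then forward to the right VM) and the AAAA record pointing at the VM's own tunnel-routed address:

      vm1    IN  A     192.0.2.10        ; the one public IPv4, port-forwarded by the firewall
      vm1    IN  AAAA  2001:db8:1::10    ; the VM's address from the routed /64

    Is that the usual way to keep IPv4-only visitors working, or is there something smarter?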

    Read the article

  • Manual HTTP error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on an Ubuntu-like Linux. I am getting unexpected behaviour when I try to manually send an error response. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code showing in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. My .htaccess looks like this:

      RewriteEngine On
      # RewriteBase /
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L]

      ErrorDocument 404 /err/404.html
      ErrorDocument 403 /err/403.html
      ErrorDocument 500 /err/500.html

    The part of my router that sends the response is the following:

      header("HTTP/1.1 403 Forbidden");

    Trying this format didn't help either:

      header("HTTP/1.1 403 Forbidden", TRUE, 403);

    I also tried HTTP/1.0. Furthermore, I was thinking that maybe the relative path to the error page might be an issue, but I discarded this idea after attempting to access a document that is forbidden via .htaccess. EDIT: I should also point out that this scenario happens when a URL for a non-existent article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g. for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.
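
    One workaround I'm considering (untested, and assuming PHP 5.4+ for http_response_code) is to stop relying on ErrorDocument for router-generated errors and have the router send both the status and the error page body itself:

      http_response_code(403);                                 // set the status line
      readfile($_SERVER['DOCUMENT_ROOT'] . '/err/403.html');   // serve the error page body from the router
      exit;

    My working theory is that ErrorDocument only kicks in for errors Apache generates itself, not for a status set by a script that then sends an empty body - but I'd like confirmation on that.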

    Read the article

  • Maximum number of monitors Win 7 supports?

    - by cpuguru
    I want to set up a system with (9) monitors arranged in a 3x3 grid. Is this configuration possible with Windows 7? Does it max out at a set number of monitors? Any suggestions about known working hardware combinations would be much appreciated. Currently considering (3) nVidia Quadro NVS420 cards to drive the array.

    Read the article

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with layers - "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode-Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect. I am pretty sure there is no filter or anything else that gives the same effect. After the mode is changed to bitmap, my action changes things back to greyscale for further changes.

    This poses a problem. I only want to do the bitmap mode change on the background layer, and after I do the change I want to restore the layer structure as it was - with the foreground, lineart and over-lineart layers back above the now-halftoned background. My current method of saving these layers and restoring them is clumsy. My action is able to automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode change is over. But for the "lineart" and "over lineart" layers, I have to manually cut them, paste them into a new document, and later re-cut and re-paste them after running my action. This is so clunky!

    What I would like to know is whether it's possible to set aside my layers in an automated way, and then bring them back in, also in an automated way. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and temporarily pasting the layers there, but I don't think Photoshop allows you to do things outside of your current document with an action. It seems to me that the only way to do what I want is the clipboard trick, but that leaves me stuck, as I have two more layers that can't fit onto that same clipboard. Help or suggestions would be appreciated. I can keep on doing it manually, but having a comprehensive action would save me a ton of time.

    Read the article

  • ssh, "Last Login", `last` and OS X

    - by allentown
    I have hit the googles as much as I can on this; being specific to OS X, I am not finding an answer. Nothing is wrong, but curiosity levels are high.

      $ ssh [email protected]
      Password:
      Last login: Wed Apr 7 21:28:03 2010 from my-laptop.local
      ^lonely tylenol^

    Line 1 is my command, line 2 is the shell asking for the password, line 3 is where my question comes from, and line 4 comes out of /etc/motd. I can find nothing in ~/ in any of the .bash* files that contains the string "Last Login", and would like to alter it. It performs some type of hostname lookup, which I cannot determine. If I ssh to another host:

      $ ssh [email protected]
      Last login: Wed Apr 7 21:14:51 2010 from 123-234-321-123-some.cal.isp.net.example
      hi there, you are on box 456

    Line 1 is my command, line 2 is again where my question comes from, and line 3 is from /etc/motd. (The dash'd IP address is not reversed.) On this remote host I have ~/.ssh and its corresponding keys set up, so there was no password request. Where is the "Last Login:" coming from, where does the date stamp come from, and most importantly, where does the hostname come from? While on [email protected] (box 456):

      $ echo hostname
      remote.location.example456.com

    Or with dig, to make sure I have rDNS/PTR set up (for which I am not authoritative, but my ISP has correctly set it):

      $ dig -x 123.234.321.123 PTR
      remote.location.example456.com

      $ dig PTR 123.321.234.123.in-addr.arpa. +short
      remote.location.example456.com.

    My previous hostname used to be 123-234-321-123-some.cal.isp.net.example, which I set with hostname -s remote.location.example456.com, because it was obnoxious to see such a long name. That solves the value of echo hostname, which now returns remote.location.example456.com. Mac OS X (10.6 in this case) does seem to honor:

      touch ~/.hushlogin

    If I leave that file empty, I get nothing on the shell when I log in. I want to know what controls the host resolution of the IP, and how it is all working. For example, running last reports a huge list of my logins, which have obtusely long hostnames, when it would be preferable for them to just be remote.location.example456.com. More confusing to me, reading the man pages for wtmp and lastlog, it looks like lastlog is not used on OS X; /var/log/lastlog does not exist. Actually, none of these exist on 10.5 or 10.6:

      /var/run/utmp     The utmp file.
      /var/log/wtmp     The wtmp file.
      /var/log/lastlog  The lastlog file.

    If I am to assume that the system is doing some kind of reverse lookup, I certainly do not know what it is, as it is not an accurate one.
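
    On the sshd side, the only knobs I'm aware of that touch this line are in the remote host's sshd_config (this is my guess at what's relevant, not something I've verified on OS X):

      PrintLastLog yes   # whether sshd prints the "Last login:" line at all
      UseDNS yes         # whether sshd reverse-resolves the client's IP, which affects the hostname it shows

    but that still doesn't tell me where OS X keeps the data, given that wtmp/lastlog don't exist.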

    Read the article

  • Sendmail yields "user unknown" errors even after (wrongly) setting up a catch-all account

    - by user59240
    I was trying to follow the instructions found here to set up a catch-all account, but I still get the following message for mail sent to non-existent users:

      The error that the other server returned was: 550 550 5.1.1 [email protected]... User unknown (state 14).

    Everything else works, though. /etc/mail/local-host-names and /etc/mail/virtusertable were set up as instructed. Any advice? Thanks!
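
    For reference, this is the shape of what I set up, as I understood the instructions (domain and user names are placeholders):

      # /etc/mail/local-host-names
      mydomain.example

      # /etc/mail/virtusertable - catch-all entry
      @mydomain.example    catchalluser

      # rebuild the virtusertable map and restart sendmail
      makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
      /etc/init.d/sendmail restart

    If that matches what the linked instructions intend, then I'm at a loss as to what's missing.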

    Read the article

  • Self Ad serving for Linux server

    - by protecttheweb
    I am looking for ad serving software to serve my own ads. I am a newbie and just want your recommendation. Yes, I know about Google DFP, but I want something non-Google - something open source or for Linux servers. I want something fairly automatic: advertisers add the banner images or text ads and pay, and the ads are automatically served, or can be kept as drafts until set live. What recommendations do you have?

    Read the article

  • DNS Update 'stuck'

    - by Postus
    I have a dedicated server with Debian and ISPConfig on it, and a domain example.it. I made an error while setting the DNS for the first time. I tried to set it to:

      ns1.example.it               IP.IP.IP.IP
      myDedicnumber.kimsufi.com    IP.IP.IP.IP

    But it got stuck: it shows "Current Status: on hold", and it's been 7 days already. I've tried contacting OVH, but they just told me to set my DNS to something different, which I can't do, as this pending operation is blocking any changes to the DNS records. Is there anything I can do except bug OVH?

    Read the article

  • Double-sided printing problem with a printer in AD

    - by Spidfire
    I've got a printer in my Active Directory, but it defaults to double-sided printing. The problem is that the printer doesn't support that, so you have to switch it manually. I've found the setting for the user, but it is automatically reset to the original value on reboot. Where can I find the setting in Active Directory? The printer is an HP Color LaserJet CP1510 Series PCL 6. (It's possible that there is a script for this, but I don't know where to look.)

    Read the article

  • nginx to lighttpd detecting request headers

    - by A.Jesin
    I'm moving a site from Nginx to Lighttpd. I was able to move everything except these nginx rules:

      set $enc_type "";
      if ($http_accept_encoding ~ gzip) {
          set $enc_type .gzip;
      }
      if (-f $request_filename$enc_type) {
          rewrite (.*) $1$enc_type break;
      }

    I think I can create the variable in lighttpd like this: var.enc_type = "" - but how do I check whether the request header Accept-Encoding contains gzip?

    Read the article

  • Local DNS for testing websites using mobile devices

    - by Morpheu5
    Hi. I have no idea where to start, so sorry in advance if this topic has already been discussed. I usually develop websites using my laptop as a development server, and recently I needed to test a website using various mobile devices that can connect via wifi. Having no real AP, I set up an ad-hoc network using my laptop's wireless card, and the devices can correctly browse the Internet and access the laptop's web server. The setup is as follows:

      subnet: 192.168.1.0/24
      gateway to the Internet (wired ADSL router/modem): 192.168.1.1
      laptop: 192.168.1.64 (eth0, the wired interface connected to the gateway) and 192.168.1.32 (eth1, the wifi interface, somewhat bridged to eth0)
      mobile devices (same for all; I only use one of them at any time, for simplicity): 192.168.1.11, with default gw 192.168.1.1

    Now, if I open either 192.168.1.32 or 192.168.1.64 from the mobile devices, I correctly get the default host of my Apache configuration. However, I usually work with virtual hosts for many practical reasons, one of which being Drupal's peculiar implementation of multi-sites. For those who don't know how this works, Drupal takes the request's hostname and searches its sites/ subdirectories for an appropriate configuration file. So, for example, suppose I request www.example.com; then Drupal would search for a config file in the following directories:

      sites/www.example.com/
      sites/example.com/
      sites/com/
      sites/default/

    So I decided to adopt the following style of virtual hosts: if the website I'm working on will be accessible using www.example.com, I set up a sites/www.example.com/ directory and create a virtual host for local.www.example.com, so Drupal has no trouble finding it. I've been told this is suboptimal from a DNS point of view, since I'd have to create an authoritative entry for example.com and turn Bind on only when I'm supposed to access the local copy, which is weird. However, if this is the only path I can follow, I still have some problems with Bind's configuration, as I couldn't find any guide that explains in a clear, noob-friendly way how to set up such an entry. On the other hand, I was wondering if I could set up an authoritative entry for local, so I could access www.example.com.local and tell Apache in some way (which I don't even know is possible) to put www.example.com instead of www.example.com.local in the relevant environment variable.

    Anyway, I have a last problem, sort of: when I launch Bind in debug mode with high verbosity and set 192.168.1.32 as the primary DNS for the devices, the output doesn't say anything about requests being made from the devices to Bind, so I'm not even sure it comes into play. As you can see, I'm a complete noob at these matters, but I'm eager to learn, so any help/pointers will be appreciated.
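
    In case it helps to have something concrete to critique, this is the kind of Bind setup I had in mind for answering authoritatively for a site I'm working on (zone name, file path and serial are placeholders; completely untested):

      // /etc/bind/named.conf.local
      zone "example.com" {
          type master;
          file "/etc/bind/db.example.com";
      };

      ; /etc/bind/db.example.com
      $TTL 300
      @    IN  SOA  ns.example.com. admin.example.com. ( 2014010101 3600 600 86400 300 )
           IN  NS   ns.example.com.
      ns   IN  A    192.168.1.32
      @    IN  A    192.168.1.32
      www  IN  A    192.168.1.32

    with 192.168.1.32 set as the primary DNS on the devices while I'm testing. Is that roughly what the "authoritative entry" approach would look like, or is there a cleaner alternative (like the .local suffix idea above) that avoids hijacking the real domain?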

    Read the article
