Search Results

Search found 25727 results on 1030 pages for 'solution'.


  • Cannot Change "Log on through Terminal Services" in Local Security Policy XP from Server 2008 GP

    - by Campo
    This is a mixed AD environment with Server 2003 R2 and Server 2008 R2 domain controllers; GPOs are usually managed from the 2008 R2 machine. I have an RD Gateway on another server as well. I set up the CAP and RAP to allow a normal user to log on to the department's workstations, and I also adjusted the GPO for that OU to allow "Log on through Remote Desktop Gateway" for the user group. This worked on my Windows 7 workstation, but unfortunately the policy has a different name in XP: "Allow log on through Terminal Services". I can get right through to the machine, but when the logon actually happens on the local machine I get the "Cannot log on interactively" error. On the local machine this is set in secpol.msc (Local Security Policy, "User Rights Assignment"), but it is controlled by the GPO under Computer Configuration > Policies > Security Settings > Local Policies > "User Rights Assignment". Do I simply need to adjust the same setting in the same GPO but with a Server 2003 GP editor? It feels like that could cause issues... Looking for some direction, or whether anyone has run into this issue yet.
    UPDATE: Should this work? support.microsoft.com/kb/186529 It still seems like I will have the issue, as the actual GP setting for "Log on through Terminal Services" is still named differently between Server 2008 R2 and 2003 R2...
    Another thought: should I delete the GPO made for the department and remake it with the 2003 R2 server? I have no 2008-specific settings, as the whole department runs XP other than myself. If that's the fix, I will move my computer out of the department... Thoughts?

    Read the article

  • How can I filter /var/adm/wtmpx on Solaris 10?

    - by Yanick Girouard
    Some of our Solaris 10 servers are monitored using SiteScope, which uses Telnet to probe certain ports (SSH is one of them) every few minutes. This is creating an insane number of lines in /var/adm/wtmpx, eventually making it so big (2.5 GB+) that we can no longer run the last command, and the uptime command is unable to accurately show the true uptime of the server. The error we get when trying to run the last command is this: /var/adm/wtmpx: Value too large for defined data type
    I have found a way to clean this accounting log using a cron job (with the command /usr/lib/acct/fwtmp), and this works; that is not the issue. I was wondering if there is a way to simply prevent connections from the monitoring user (in our case, user monsite) from creating entries in this accounting log at all. Is this possible, and if so, how can I do it? I've looked around and searched Google for a while, but couldn't find an answer to this question.
    NOTE: We are very well aware that the monitoring solution we employ is perhaps not the best one, but we cannot change it at this time, so suggesting that we change it is not pertinent to this question. If you want to read more on the SiteScope monitoring solution we employ for those servers, please see its documentation here and look for Port Monitor, and Connecting to remote UNIX servers, which explains how it works.
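
    For reference, a minimal sketch of the fwtmp-based cleanup mentioned above (the "monsite" account name comes from the question; the paths, cron schedule and exact ASCII field layout are assumptions to verify on your release):

        # run from root's crontab, e.g. nightly: convert the binary wtmpx to
        # ASCII, drop the monitoring user's records, and convert back
        /usr/lib/acct/fwtmp < /var/adm/wtmpx > /tmp/wtmpx.ascii
        grep -v '^monsite ' /tmp/wtmpx.ascii | /usr/lib/acct/fwtmp -ic > /tmp/wtmpx.new
        cp /tmp/wtmpx.new /var/adm/wtmpx
        rm -f /tmp/wtmpx.ascii /tmp/wtmpx.new

    As far as I know, Solaris offers no per-user switch that suppresses wtmpx records, so filtering after the fact (or changing how the probe logs in) remains the practical angle.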

    Read the article

  • Setting Ubuntu Global PATH for Ruby Enterprise Edition

    - by Wally Glutton
    Context: I recently installed Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like this new version of Ruby to globally supersede (for all users, crontabs, etc.) the older version in /usr/local/bin.
    Attempted Solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH line in this file to read: PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" This did not affect my PATH at all.
    Attempted Solution #2: Next I followed these instructions and updated the PATH setting in the /etc/login.defs and /etc/crontab files. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server.
    Other information: I seem to be having the same problem described here. I'm testing using the command: echo $PATH My shell is bash. My .bashrc does not alter my PATH. I'm ssh'ed into the system for all testing. /opt/ruby_ee/ is a symlink to /opt/ruby-enterprise-1.8.7-2011.03/
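
    A minimal sketch of another approach, assuming bash login shells and that /etc/profile sources /etc/profile.d/*.sh (the Ubuntu default); the file name is illustrative:

        # put REE first on the PATH for every login shell
        echo 'export PATH=/opt/ruby_ee/bin:$PATH' > /etc/profile.d/ree.sh
        chmod 644 /etc/profile.d/ree.sh

        # cron builds its own minimal PATH unless one is declared in the crontab,
        # so declare it there explicitly rather than relying on /etc/environment:
        #   PATH=/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    After editing either file, the change only shows up in sessions started after the edit, so log all the way out (or start a fresh ssh session) before checking echo $PATH again.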

    Read the article

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand pushes dynamic memory to expand to more memory than is available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swap file on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources; I can live-migrate the virtual server over there, everything works, and the warnings go away.
    Now how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.

    Read the article

  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with time zones within MySQL. Long story short, my application is worldwide, and each database has its own time zone set within the application (not the server) in the form of "Europe/Berlin", "Europe/Vienna", "America/Sao_Paulo". Obviously MySQL won't accept these named zones per connection out of the box; I read that it handles data better if you use UTC offsets. Basically my goal is to log a field's alteration in another table using a trigger. For that I use UNIX_TIMESTAMP within the trigger, but UNIX_TIMESTAMP() follows the global time zone for the server, which obviously bothers me a lot :| So I went searching for a "per connection" solution to use inside the trigger, and I found that mysql_tzinfo_to_sql can actually import zone info (UTC offsets) from my Linux zoneinfo files. Although to my amusement, when I ran the command I got the following: bash: mysql_tzinfo_to_sql: command not found
    So I'm looking for a solution to fix that. I don't want to "map" the time zone names to UTC offsets just so I could use them in the trigger. Is there an alternative tool? Or at least sources for this one in particular? What kind of queries does this tool generate, so I could do it manually if there is no alternative tool? Thanks in advance for any help on the issue!
    P.S.: The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf
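
    A sketch of the usual way the zone tables get loaded once the binary is found (mysql_tzinfo_to_sql normally ships with the MySQL server package itself, so locating or reinstalling that package is the first thing to check; paths below are the Debian defaults):

        # see whether any installed package provides the tool
        dpkg -S mysql_tzinfo_to_sql

        # load the OS zoneinfo database into the mysql.time_zone* tables
        mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

        # after that, named zones work per session (and inside triggers), e.g.:
        #   SET time_zone = 'Europe/Berlin';
        #   SELECT CONVERT_TZ(NOW(), 'UTC', 'America/Sao_Paulo');

    The tool emits plain TRUNCATE/INSERT statements against the mysql.time_zone, time_zone_name, time_zone_transition and time_zone_transition_type tables, so its output can be inspected, or generated by hand if it really cannot be obtained any other way.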

    Read the article

  • Disabling mouse acceleration in Mac OS X

    - by aib
    I've been looking for a solution to the unusable mouse problem in Mac OS X for ages. I've tried a gazillion programs and fiddled with every setting there is or that can be added. So far, I haven't found a way to get linear mouse response in Mac OS X. At this point I'm seriously considering installing another operating system. But before I do that, or go hacking around OS binaries, maybe someone here has a solution? I want linear mouse response. I want high sensitivity. I like my touchpad acceleration and would like to keep it if possible. Any ideas?
    P.S. I've been at this for a long time, so I'll probably have already tried the most popular answers. I'm running Mac OS X 10.6.5 on a MacBook Pro. I don't use a particular brand of mouse. I'm not looking for any commercial solutions. I've tried: the Mouse Acceleration Preferences Pane, the Snow Leopard version of which can get me close to a linear response, but at the cost of tracking speed (sensitivity); the answers on this question: Make Mac OS X mouse acceleration more Windows-like; and about every code snippet I found via Google.
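
    For completeness, the snippet that usually comes up in this context (a sketch: a negative value disables the acceleration curve for the mouse only, takes effect after logging out and back in, and is overwritten if the Mouse preference pane is opened again):

        # disable the OS X mouse acceleration curve for the current user
        defaults write .GlobalPreferences com.apple.mouse.scaling -1

        # the trackpad uses a separate key, so its acceleration is left untouched
        defaults read .GlobalPreferences com.apple.trackpad.scaling

    With scaling disabled the response is linear but raw, so sensitivity has to come from the mouse's own DPI, which matches the "linear but less sensitive" trade-off described above.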

    Read the article

  • BitLocker with Windows DPAPI Encryption Key Management

    - by bigmac
    We have a need to enforce encryption at rest on an iSCSI LUN that is accessible from within a Hyper-V virtual machine. We have implemented a working solution using BitLocker, using Windows Server 2012 on a Hyper-V virtual server which has iSCSI access to a LUN on our SAN. We were able to successfully do this by using the "floppy disk key storage" hack as defined in THIS POST. However, this method seems "hokey" to me. In my continued research, I found out that the Amazon Corporate IT team published a WHITEPAPER that outlined exactly what I was looking for in a more elegant solution, without the "floppy disk hack". On page 7 of this white paper, they state that they implemented Windows DPAPI Encryption Key Management to securely manage their BitLocker keys. This is exactly what I am looking to do, but they stated that they had to write a script to do this, yet they don't provide the script or even any pointers on how to create one. Does anyone have details on how to create a "script in conjunction with a service and a key-store file protected by the server's machine account DPAPI key" (as they state in the whitepaper) to manage and auto-unlock BitLocker volumes? Any advice is appreciated.

    Read the article

  • Using SQL to join spreadsheets in Excel

    - by toms
    Based on the explanation here: How do I join two worksheets in Excel as I would in SQL? I tried to join two Excel sheets from different files into the same sheet. However, I keep getting this error message when I try to refresh the table: [Microsoft][ODBC Excel Driver] Too few parameters. Expected 5. The SQL queries I've put in so far were:
    SELECT `Sheet1$`.ID, `Sheet1$`.Name, `Sheet1$`.`L Name` FROM `C:\Users\Tom\Book1.xlsx`.`Sheet1$` a LEFT JOIN `C:\Users\Tom\Book2.xlsx`.`Sheet1$` b ON a.col2 = b.col2
    and
    SELECT `Sheet1$`.ID, `Sheet1$`.Name, `Sheet1$`.`L Name` FROM `C:\Users\Tom\Book1.xlsx`.`Sheet1$` a LEFT JOIN `C:\Users\Tom\Book2.xlsx`.`Sheet1$` b ON a.`ID` = b.`ID`
    and
    SELECT * FROM `C:\Users\Tom\Book1.xlsx`.`Sheet1$` a LEFT JOIN `C:\Users\Tom\Book2.xlsx`.`Sheet1$` b ON a.`ID` = b.`ID`
    and a few combinations and alterations. I can't seem to find the solution. I've learned that it definitely doesn't like SELECT *, but I can't fix it. Can anyone suggest any solution?

    Read the article

  • Backup server (OS X) like Time Machine to back up remote Ubuntu 12.04 server [on hold]

    - by Mad
    I've searched my ass off for a good solution to back up my Ubuntu server that's in a datacenter. Locally we have an OS X server with some external drives attached to it; this handles Time Machine for the local workstations. What I'd like to do is fetch the files (or mount the root of my Ubuntu server) and make a Time Machine backup of it. The one problem is that if my OS X server crashes, I can't put the system back, because the backup contains not only the OS X server but also the Ubuntu server from the datacenter. I've used Back In Time on Ubuntu to do the exact same thing, but that was to Ubuntu (local) from Ubuntu (datacenter). So does anybody have a solution? Here are my requirements:
    Set time intervals for backups; it needs to be backed up nightly.
    Set time intervals for keeping backups; hourly, weekly, monthly, etc.
    Able to back up all computers and servers from an offsite location to the local OS X server (10.9).
    Manageable from that one location, logging in with ssh to do rsync or rsnapshot.
    Has a GUI (OS X).
    Acts like Time Machine, backing up only the files that have changed.
    Restore to a point back in time.
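
    A minimal sketch of the rsync/rsnapshot route named in the requirements, driven from the OS X server (hostname, destination volume and retention are placeholders; --link-dest gives the hard-linked "point back in time" snapshots, while a GUI and pruning policy are where a packaged tool like rsnapshot earns its keep):

        #!/bin/sh
        # pull-backup.sh -- run nightly from the OS X server's crontab, e.g.:
        #   0 2 * * * /usr/local/sbin/pull-backup.sh
        DEST=/Volumes/Backups/ubuntu            # placeholder destination volume
        HOST=root@server.example.com            # placeholder datacenter host
        SNAP="$DEST/$(date +%Y-%m-%d)"

        # copy only what changed; unchanged files are hard-linked to yesterday's snapshot
        rsync -az --delete --numeric-ids \
              --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run \
              --link-dest="$DEST/latest" \
              -e ssh "$HOST":/ "$SNAP"/ \
          && ln -sfn "$SNAP" "$DEST/latest"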

    Read the article

  • atl90.dll version 9.0.30729.4148 is missing in WinSxS folder

    - by mkva
    I have the following problem: when starting Visual Studio 2008, it says "Cannot find one or more components. Please reinstall the application." and stops. With the help of Sysinternals Process Monitor, I found out that Visual Studio could not load atl90.dll 9.0.30729.4148 from the WinSxS folder. I tried manually copying in the older atl90.dll 9.0.30729.1, with the result that Visual Studio works again. Now I call this a dirty workaround, not a solution. Plus I still don't know why the atl90.dll disappeared in the first place. So my questions:
    - Does anyone know of a reason why this might have happened?
    - Does anyone know a real solution to the problem, e.g. a Microsoft download that includes atl90.dll in the correct version 9.0.30729.4148 and installs it into WinSxS?
    Some details:
    - Windows XP SP3
    - missing DLL: C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL_1fc8b3b9a1e18e3b_9.0.30729.4148_x-ww_353599c2\atl90.dll
    - workaround DLL: C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_d01483b2\atl90.dll
    - manifests in WinSxS seem to be all right, but unfortunately all point to the missing version 9.0.30729.4148
    Thanks, Markus

    Read the article

  • Digital Asset Management, iPhoto / Aperture server... alternative

    - by Sisyphus
    Afternoon. Clients (10): all Apples running either Leopard or Snow Leopard. Server: Snow Leopard Server (and I have an old Dell PowerEdge 650 at home running Gentoo 2.6, if anybody has a Linux solution).
    The situation: I work in a small design company with 8 people. At present we are looking to consolidate all our image files into one location; right now we each use our preferred single-user DAM solution, be it Adobe Bridge or iPhoto/Aperture (some don't bother at all). The file types commonly used are .psd, .pdf, .eps, .tiff, .jpg and RAW image files.
    Ideally what is needed:
    Centralised on one server, but allows us to search via Spotlight (not essential, but would be nice)
    Includes searchable metadata information such as date, location, title
    Open-source or as low cost as possible
    Allows simultaneous users to import files
    So far, I have looked at a few open-source DAM systems, such as Razuna, Gallery (not strictly DAM), ResourceSpace and Notre-DAM. While these are brilliant and open-source, they don't integrate as smoothly with the desktop as iPhoto and Aperture. For iPhoto and Aperture, I have tried creating a shared library on the server (a tad laggy), and also putting a library on a drive with no permissions and letting each client read from it; however, if they want to put images into the library, it only supports one user at a time writing to the library... Any ideas what could fulfill our needs? Or is it time to bite the bullet for Final Cut Server? Thanks in advance.

    Read the article

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about this database replication for new user signup to which I got an answer, but I want to ask this in the more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly since it will happen frequently) to determine if the data is stale in the slave so I can then route to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated in the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) so the data will not be evicted within the time frame it takes to replicate the data. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!
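
    A sketch of the quick per-slave staleness check, assuming MySQL-style replication (the question doesn't name the engine, so the client invocation, the SHOW SLAVE STATUS fields and the zero-lag criterion are assumptions; credentials are expected in ~/.my.cnf):

        #!/bin/sh
        # usage: check-slave.sh <slave-host>; exits 1 if the slave is stale/unknown
        SLAVE_HOST="$1"
        LAG=$(mysql -h "$SLAVE_HOST" -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master:/ {print $2}')

        if [ -z "$LAG" ] || [ "$LAG" = "NULL" ] || [ "$LAG" -gt 0 ]; then
            echo "stale: route this read to the master"
            exit 1
        fi
        echo "slave is caught up"

    For per-write precision, rather than a coarse lag number, the usual refinement is to record the master's binlog position right after the write and wait on it from the slave with MASTER_POS_WAIT() before reading.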

    Read the article

  • How To Remove Bottleneck with Squid Caching Proxy

    - by Volomike
    I'm more of a LAMP web developer trying to help the sysop. When I joined this project, I inherited some old PHP spaghetti code. Part of that code goes out to a third-party website (let's call it thirdparty.com) and pulls down content with an HTTP GET request. Unfortunately, the way the code is designed, it needs to do this several times a minute. When we looked at the bottlenecks on the server with 'netstat -a', we saw that connections to thirdparty.com were constantly running, even though this content would be perfectly fine gathered once a day. What I need to know is whether the Squid caching proxy server is the solution we need. I'm guessing that this might let it pretend to be thirdparty.com on the network: if the web server needs to query thirdparty.com, it hits Squid instead, and Squid can then determine whether to serve the content from cache or go to thirdparty.com for fresh content. Is this the solution we need? And second, is this easily configured, and configured to cache only thirdparty.com requests?
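
    A sketch of the relevant squid.conf lines, assuming a Squid of the 2.7/3.1 era (thirdparty.com stands in for the real domain, and the one-day lifetime matches the once-a-day requirement above):

        # cache only thirdparty.com; keep its objects for up to a day even when the
        # origin sends no (or short) freshness headers
        acl thirdparty dstdomain .thirdparty.com
        cache allow thirdparty
        cache deny all

        # the first matching refresh_pattern wins, so this line must sit above the
        # stock catch-all "refresh_pattern ." rule
        refresh_pattern -i thirdparty\.com/ 1440 100% 1440 override-expire override-lastmod ignore-reload

    After a squid -k reconfigure, the PHP code just needs to send its requests through the proxy (for example via CURLOPT_PROXY, or the http_proxy environment variable for clients that honour it); Squid then answers from cache or fetches fresh content, which is the pretend-to-be-thirdparty.com behaviour described above.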

    Read the article

  • How To Fix Samba File Permission Issues in Mac OSX

    - by user1867768
    I've had this problem for a long time; here are the basics of it. I use a mixed environment of Windows 7/8 computers and Mac OS X Lion/Mountain Lion. Whenever a Windows computer creates a file on an SMB share on the Mac, it no longer has group permissions; only the person who created or updated it can access it. My solution has been to go onto the Mac and reset permissions for the entire directory structure, after which everyone can see it again. About the only thing I could find on this was for OS X pre-Snow Leopard, and it mentioned editing smb.conf to fix that particular problem (similar to mine, http://www.gladsheim.com/blog/2009/09/19/osx-leopard-and-samba-permissions/). The problem is that Lion and Mountain Lion no longer have an smb.conf file (another web search pointed to com.apple.smbd.plist (http://kidsreturn.org/?s=smb.conf), but it's an XML file now and I'm not clear on what should be done to THAT to fix the problem). So, short of me writing an AppleScript to run every hour to fix permissions, does anyone know a solution to this very frustrating problem? Thank you in advance for any advice or solutions you can offer!
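
    One avenue that avoids smb.conf entirely is an inherited ACL on the share itself, so files created over SMB pick up group access regardless of their POSIX mode bits. A sketch, run on the Mac (the group name and path are placeholders, and whether Apple's smbd honours the inheritance for every client version is worth testing before relying on it):

        # give the "staff" group inheritable access to everything created under the share
        sudo chmod -R +a "group:staff allow list,add_file,search,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,file_inherit,directory_inherit" /Shares/Projects

        # verify the ACL took effect
        ls -le /Shares/Projects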

    Read the article

  • Ubuntu 12.04 - Pound Reverse Proxy and Adobe Flex/Flash Auth

    - by James
    First time posting. I have a completely fresh install of Ubuntu 12.04 acting as a reverse proxy gateway to our internal network. Our setup is that we have one external IP but three domains we would like to point to various web servers on our internal network. It's not so much a load-balancing or caching issue; it's merely routing some client browsers to a port 80 web page (to adhere to some stricter corporate policies regarding placing port numbers after domain names). I have gone with Pound and everything seems to be working fine; static pages load and everything is good, with the exception of a Flash/Flex-based web client for a Digital Asset Management program. The actual static page loads fine; it is just that at the moment of entering credentials, be they correct or incorrect, and hitting login, there is no response whatsoever, neither a rejection nor a confirmation. So the request back to the internal server can't be getting through. I have googled extensively and there might be a solution in a crossdomain.xml file? The documentation isn't very clear, and we are not the authors of the DAM app and have no control over the code on the Flash/Flex side. Questions:
    Is there a particular config file/solution for Pound that allows Flash/Flex auth information to be forwarded?
    Is there another reverse proxy program (nginx?) that allows this type of config?
    Am I looking at this entirely the wrong way; should Flash/Flex fundamentally not be allowed to have this access?
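
    On the crossdomain.xml idea: the Flash runtime fetches /crossdomain.xml from the web root of the host the SWF is calling, so the file can be served by the proxied site without touching the DAM application's code. A deliberately permissive sketch to rule the policy in or out (tighten domain="*" to the real front-end domains once confirmed; place the file at the web root that answers for the proxied hostname):

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
          <allow-access-from domain="*" />
          <allow-http-request-headers-from domain="*" headers="*" />
        </cross-domain-policy>

    If the login still hangs with a permissive policy in place, the policy file probably isn't the culprit, and the next suspect is the proxy dropping something the client needs, for example Pound's xHTTP setting limiting which HTTP methods are passed through.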

    Read the article

  • How can I stream audio signals from various devices/computers to my home server?

    - by Breakthrough
    I currently have a headless home server set up (running Ubuntu 12.04 server edition) running a simple Apache HTTP server. The server is near an audio receiver, which controls a set of indoor and outdoor speakers in my home. Recently, my father purchased a Bluetooth adapter, which our various laptops and cellphones can connect to, outputting the music to the speakers. I was hoping to find a solution that works over Wi-Fi, namely because it won't cost anything (I already have a server with an audio card), and it doesn't depend on Bluetooth. Is there any cross-platform (preferably free and open-source) solution that I can use which will allow me to stream audio to my home server, over my home network, from a wide variety of devices (laptops running Windows/Linux or cellphones running Android/BB/iOS)? I need something that works at least with Windows and Android. Also, just to clarify, I want something that simply allows devices to connect to my server and output an audio signal without any action on the server end (since it's a server hidden away near my receiver). Any subsequent connection attempt should be dropped, so only one device can be in control of the stereo at once.
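
    A sketch of one common server-side approach, using PulseAudio's network protocol (package names and the LAN range are assumptions; the harder half is the client side, where Windows and Android need a PulseAudio-capable client or a different protocol such as DLNA or AirPlay):

        # on the Ubuntu 12.04 server: accept audio streams from the LAN and play
        # them out of the sound card attached to the receiver
        sudo apt-get install pulseaudio avahi-daemon

        # PulseAudio must be running on the headless box, e.g.:
        #   pulseaudio --daemonize
        # then allow network clients (persist these lines in /etc/pulse/default.pa)
        pactl load-module module-native-protocol-tcp auth-ip-acl="127.0.0.1;192.168.1.0/24"
        pactl load-module module-zeroconf-publish

    A Linux laptop can then select the server as an output via PULSE_SERVER=<server-ip> or paprefs/zeroconf; the daemon just sits waiting for streams, which matches the "no action on the server end" requirement, though limiting it to one device at a time would need extra work.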

    Read the article

  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says: smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ... where that PCRE map looks like this: /./ FILTER lmtp:[127.0.0.1]:11124 This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution): for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done The symptom I noticed after doing all of this was that dspam was horrible at classifying spam — it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database! So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected that the "dspam" configuration files would have had an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)

    Read the article

  • IDE/PATA high-speed hard drive dock

    - by wfaulk
    I frequently need to access bare drives for backups and need a quick, high-speed way to deal with them. There are a multitude of SATA hard drive docks (for example), but I have a lot of IDE/PATA (hereafter "IDE") drives that I would like to be able to use similarly. There are IDE-to-SATA adapters so you can plug your IDE hard drive into a SATA port, so I don't see any reason why you couldn't use the same technology to make a native dock, yet none seems to exist. Now, I'm aware that 3.5" IDE drives do not have a specification for the layout of the connector, and therefore can't be slapped into a dock the same way a SATA drive could, but 2.5" PATA drives do. In fact, I'm not terribly interested in supporting 3.5" drives. It would be nice, but I deal with them far less frequently than 2.5" drives. Also, I'd very much like the connection to the computer to be faster than USB, preferably eSATA; I don't want to be spending time mounting a drive inside an enclosure, I don't want bare drives lying around with a cable hanging off of them, and I'd prefer a single dock rather than two. What seems like the ideal solution to me would be a regular SATA→eSATA dock and some sort of screwless adapter for IDE drives, but I'm open to any suggestions, regardless of my stated preferences. In some sort of order of preference, they are:
    high-speed (faster than USB, at least)
    holder for drive (not just a cable)
    no complicated enclosure
    support for 3.5" IDE drives
    single dock
    Updates: Here's a 3.5" IDE to 3.5" SATA docking adapter that could be part of the solution. Weird. I figured that would be the impossible part. I was hoping to find something like this 2.5" to 3.5" SATA chassis that would take a 44-pin IDE drive internally. It looks like the Vantec EZ Swap EX comes awfully close. It has its own bay dock, but it looks like the SATA ports on the back are spaced properly, even if they're not aligned quite properly. Unfortunately, the proper position is at the very edge of the drive, which means that the docks' connectors are at the very edge of their recesses, which means there's no way to fit it in there.

    Read the article

  • Squid, authentication, Outlook Anywhere, Windows 7 and HTTP 1.1 = NIGHTMARE

    - by Massimo
    I'm running a Squid proxy (latest version, 3.1.4) on Linux CentOS 5.4 with Samba 3.5.4, in order to allow authenticated web access for domain users; everything works fine, and even Windows 7 clients are fully supported. Authentication is transparent for domain users, while it is explicitly requested for non-domain ones, and it works if the user can provide valid domain credentials. All nice and good. Then, Outlook Anywhere kicks in and pain and suffering ensue. When Outlook (be it 2007 or 2010, it doesn't matter) runs on Windows XP clients, it connects gracefully through the Squid proxy to its remote Exchange server. When it runs on Windows 7, it doesn't. If the authentication requirement is lifted from the proxy, everything works on Windows 7 too, so the problem is obviously related to NTLM authentication with Squid. Digging more deeply (WireShark), I discovered Outlook Anywhere uses HTTP 1.1 when it runs on Windows 7, while it uses HTTP 1.0 when on Windows XP. And it looks like Squid, even in its latest incarnation, still has some serious troubles handling HTTP 1.1 properly, particularly when SSL and proxy authentication are thrown in the mix. While waiting for Squid to fully and officially support HTTP 1.1 (and it looks like this could take quite a long time), I'm looking for one of the following solutions:
    Make Squid handle this correctly, if it is at all possible.
    Identify Outlook Anywhere connections and have Squid not require authentication for them. But it isn't easy: again, the behaviour of Outlook differs when running on Windows XP and Windows 7, and while on Windows XP Outlook sends a really nice user-agent string of "MSRPC", on Windows 7 it doesn't send any (why? WHY?!?).
    Force Outlook Anywhere to use HTTP 1.0 even when running on Windows 7. And no, this is not as simple as deselecting "use HTTP 1.1" in Internet Explorer; it looks like Outlook ignores that setting and chooses on its own which protocol to use.
    Any other feasible solution which doesn't involve whitelisting specific destination Exchange servers, which is the last-resort solution I'm trying to avoid.

    Read the article

  • applying rules to CC'd messages in Outlook 2007

    - by Danny Chia
    This is probably a silly question, but here goes: I have two e-mail aliases that forward messages to my main address. I'm trying to create a rule to move all messages that I receive to a specific folder. There is a condition that applies to messages "where my name is in the To or Cc box," but it doesn't let me specify what "my name" is. Not surprisingly, it only affects messages that have not been sent to an alias. So far, I found a solution as follows: I select the condition that applies to messages with specific words in the recipient's address, and I enter my address and aliases as those "words." It's kind of an awkward hack, but it works. Normally, this wouldn't be much of an issue, but I have a "family computer" that is shared among my parents and myself, and I don't want their e-mails and mine to be jumbled together in the Inbox. So my questions are: Is there a solution that is less awkward than the one I used? Alternatively, is there a way to assign multiple e-mail addresses (or aliases) to one account? Thanks!

    Read the article

  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the webdav parent directory for the user. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster awaiting at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the load and increased complexity on the server). SuExec won't work, either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use Capistrano for website deployments, and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to "scan" for changes to its served files more quickly?
    Thoughts:
    I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?
    The website runs PHP code. Perhaps there is some PHP config difference between RHEL and Ubuntu? I checked realpath_cache_ttl, but both servers have it commented out, e.g.:
        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120
    We do use the APC opcode cache but don't think it's the issue, due to experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.
    Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me.
    One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
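
    One commonly suggested mitigation is making the final symlink swap atomic, so Apache and PHP never observe a missing or half-switched document root (a sketch; paths are placeholders for the Capistrano release layout). Note too that the realpath cache mentioned above defaults to a 120-second TTL when the setting is commented out, which would line up with a delay of a minute or two:

        # build the new symlink under a temporary name, then rename it over the old
        # one; rename() is atomic, so there is never a moment without a valid "current"
        ln -sfn /var/www/app/releases/20111115103000 /var/www/app/current.tmp
        mv -Tf /var/www/app/current.tmp /var/www/app/current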

    Read the article

  • 'txn-current-lock': Permission denied [500, #13] - Subversion + Apache Configuration Issue

    - by wfoster
    Current setup: Fedora 13 32-bit, Apache 2.2.16, Subversion repositories under /var/www/svn. I have two different repositories under this directory, so my /etc/httpd/conf.d/subversion.conf is set up this way:
        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so
        <Location /svn>
            DAV svn
            SVNListParentPath on
            SVNParentPath /var/www/svn
            <LimitExcept GET PROPFIND OPTIONS REPORT>
                AuthType Basic
                AuthName "Subversion Repository"
                AuthUserFile /etc/httpd/.htpasswd
                Require valid-user
            </LimitExcept>
        </Location>
    After copying over my repos and using
        chmod 755 -R /var/www/svn
        chcon -R -t httpd_sys_content_t /var/www/svn
        chown apache:apache -R /var/www/svn
    I can browse my repos fine through the browser, and I can update all my working copies. However, when I try to check in from anywhere I get the same error: Can't open file '/var/www/svn/repo/db/txn-current-lock': Permission denied
    I have been working on this issue for a while now and can't seem to find a solution. It might be of some use to know that the repo existed on a different server before this; it has now been moved to this new server. Everything I have read seems to indicate that the permissions for Apache are incorrect, however Apache is set to run as user apache and group apache. So as far as I can tell my setup is correct, but the behavior is not. Any ideas?
    Solution: The only way I was able to get this to work was to disable SELinux. It could also be done by setting the proper SELinux booleans via setsebool and getsebool, but since this is just a home server, I decided to disable SELinux and am reaping the benefits now.
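
    For anyone who wants to keep SELinux enforcing instead, a sketch of the label-based alternative (type and path per the setup above; semanage comes from the policycoreutils-python package on Fedora):

        # give the repositories a context httpd is allowed to write to, and make
        # the labelling persistent across future relabels
        sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/svn(/.*)?"
        sudo restorecon -Rv /var/www/svn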

    Read the article

  • How can I download a file directly to a web server?

    - by matt
    I'd like to download files directly to a hosted server, whether it's one I set up myself or a hosted service like Dropbox. For example, when I download a podcast, instead of downloading it to my computer then uploading it to the server, how can I have it download directly to the cloud. My interest here is reducing the traffic I'm using over a metered data plan on my laptop, so I don't want my computer acting as a physical intermediary caching the file. Ideally, there would be some way for me to have a download link and tell it to go directly to my server. How can I accomplish this? I realize that this question is potentially involving a "webapp" and it is potentially involving "server administration" and since my goal is to cut my computer out of the loop I can see people saying this is off-topic and should be on another site. My issue is this: I don't know if this is going to be a webapp solution or a server solution but I do know regardless I'm going to be using a computer to get it done and I am replacing a function that's currently done on my computer so I figured I'd ask it here. If I was wrong and this definitely should be at webapps feel free to let me know or just migrate it.
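
    For a server you control, the simplest sketch is to run the download on the server itself over SSH, so only the short command (not the file) crosses the metered link; the host and paths are placeholders:

        # fetch the file straight onto the server, from the laptop's terminal
        ssh user@myserver.example.com 'wget -P /srv/downloads "http://example.com/podcast/episode42.mp3"'

    For a hosted service like Dropbox the equivalent trick is to hand the URL to a remote-fetch feature on the service's side rather than routing the bytes through the laptop; whether such a feature exists depends on the service.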

    Read the article
