Search Results

Search found 20289 results on 812 pages for 'service locator'.


  • What are the "least legally restrictive" well-connected countries to host a website?

    - by monster
    NB: I am aware that this question is subjective, as it can't be defined precisely, but the answers should still be "objective": country name, and what makes it legally safer. EDIT: A) I am located in Germany. B) I am NOT looking for a place to offer pirated software/media; there are no binaries on my site, except a "profile icon". Hello! I want to start publishing "social" websites / apps, and I found that the biggest initial problem is this: any and all services I have to depend on, including Domain Registrar, DNS provider, Server/Cloud Provider, CDN Provider, ... even my Insurance Agent, basically say that they can "throw me out" if my website contains "unacceptable" content. It's always phrased in such a way that basically anything can fall under "unacceptable" content. This is very frustrating because you just can't fully control what users post on your "social website", and so you basically have to expect when you go to bed that your site is going to be gone when you wake up. I've heard a lot of horror stories about this. Since the "Terms of Service" of all those providers are written foremost to protect themselves from legal actions, and those legal actions depend on the country where they are located, it seems like the first step is to find which country is the "safest" in which to locate a site. "Safest" being defined as the country where I am least likely to get into legal trouble with the local authorities if some user posts something unacceptable in some way. The main restriction is that it should also be a "well-connected" country, because there is no point in being "safe" if my users can't get to my sites, or the latency is unacceptable. I am targeting English-speaking people in any country as my future users.

    Read the article

  • Davical + LDAP + NTLM

    - by slavizh
    I have set up a Davical server on CentOS. I've configured it to use LDAP, and the users use their usernames and passwords to authenticate to the Davical server. I am using Lightning as the client software for calendaring. Using Lightning requires entering a username and password every time, so I decided to set up NTLM. I want my users who log in to the domain to use the calendar server through Lightning without entering a username and password. I've set up NTLM on the Davical server. But when a user tries to reach the calendar through Lightning, the server first asks for the NTLM username and password and then asks for the LDAP username and password. It becomes something like double authentication. The problem is that NTLM requires domain\username and password, while Davical through LDAP requires only username and password. So my questions are: Is there a way to change something in Davical so that Davical through LDAP requires domain\username and password authentication? That way, maybe the second (NTLM) authentication will proceed silently and the users will use Lightning without entering usernames and passwords. Is there a way I can make this double authentication become a single one and use only NTLM? P.S. We have a Samba domain with an LDAP server and our users use Thunderbird for their mail, and I want to add Lightning too. That way they will have a calendar service. But I don't want them to enter a username and password for the calendar every time they log in. I know they can save that password, but that is not an option for my organization.
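
    To make the idea concrete, I've been wondering whether moving the LDAP check up into Apache (so there is only one authentication layer in front of Davical) would look roughly like this. This is untested, and the CalDAV path and LDAP tree below are placeholders for my environment:

        <Location /caldav.php>
            AuthType Basic
            AuthName "DAViCal"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldapserver.example.com/ou=people,dc=example,dc=com?uid"
            Require valid-user
        </Location>

    Whether the NTLM setup could then satisfy that single prompt silently is exactly the part I'm unsure about.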

    Read the article

  • Exchange 2010 issuing NDRs to Hotmail/Live & few other domains on receipt of message

    - by John Patrick Dandison
    I'm working through a beast of an issue at the moment.

    - Exchange 2010, single server, on prem
    - Hybrid deployment to Office 365
    - ESMTP filtering turned off on ASA
    - Certain domains (most consistently, Hotmail/Live) cannot send us mail.

    At one point, we couldn't send out either, but I created a new Send Connector that forces HELO instead of EHLO. I turned on SMTP logging; an example of the failed inbound message connection is below. I've read that it could be that reverse DNS is the problem, i.e., the Exchange banner SMTP address needs to reverse-DNS back to the same IP. Since it's the default Exchange connector, its banner is the server's name, but the DNS name of the MX record is different. I'm waiting for the PTR records to update to reflect the internal name as well. Is that the right direction? Is this all DNS or something different?

    SMTP session log (single failed session for illustration):

      SMTPSubmit SMTPAcceptAnySender SMTPAcceptAuthoritativeDomainSender AcceptRoutingHeaders
      220 ExchangeServerName.internalSubDomain.example.com Microsoft ESMTP MAIL Service ready at Mon, 15 Oct 2012 09:57:24 -0400
      EHLO col0-omc3-s4.col0.hotmail.com
      250-ExchangeServerName.internalSubDomain.example.com Hello [65.55.34.142]
      250-SIZE
      250-PIPELINING
      250-DSN
      250-ENHANCEDSTATUSCODES
      250-STARTTLS
      250-X-ANONYMOUSTLS
      250-AUTH NTLM LOGIN
      250-X-EXPS GSSAPI NTLM
      250-8BITMIME
      250-BINARYMIME
      250-CHUNKING
      250-XEXCH50
      250-XRDST
      250 XSHADOW
      MAIL FROM:<[email protected]> 08CF5268DABBD9AA;2012-10-15T13:57:24.564Z;1
      250 2.1.0 Sender OK
      RCPT TO:<[email protected]>
      250 2.1.5 Recipient OK
      XXXX 1282 LAST
      Tarpit for '0.00:00:05'
      500 5.3.3 Unrecognized command
      XXXXXXXXX from COL002-W38 ([65.55.34.135]) by col0-omc3-s4.col0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675);
      Tarpit for '0.00:00:05'
      500 5.3.3 Unrecognized command
      " XXXX 15 Oct 2012 06:57:24 -0700"
      Tarpit for '0.00:00:05'
      500 5.3.3 Unrecognized command
      XXXXXXXXXXX <[email protected]>
      Tarpit for '0.00:00:05'
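
    For what it's worth, the direction I'm planning to test is simply making the name in the SMTP banner, its public A record, and the PTR all agree. The host name and IP below are placeholders, not our real values:

      nslookup mail.example.com               # should return the public IP that Hotmail connects to
      nslookup 198.51.100.25                  # the PTR should come back as mail.example.com
      Get-ReceiveConnector | fl Name,Fqdn
      Set-ReceiveConnector "ExchangeServerName\Default ExchangeServerName" -Fqdn mail.example.com

    If that's the wrong direction, I'd like to know before I start changing the default connector.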

    Read the article

  • Linux wireless disconnects every 20 minutes

    - by james
    My laptop uses CentOS 6.3 with kernel 2.6.32-279.el6.x86_64. My wireless adapter is an Intel Corporation Centrino Wireless-N 1000. My wireless connection always drops after about 20 minutes. The network applet shows the connection is still up with good signal strength, but I just cannot load any web pages, not even the configuration page of the wireless router. The problem continues until I disable and reconnect the wireless. Other devices like my cell phone use the same wireless network without the problem. Just yesterday I was using the same laptop with Fedora 17 without this problem. I also searched the internet, and someone said running the NetworkManager and network services simultaneously may be a problem. But I cannot stop either of them because: if I stop network and start NetworkManager, the network service will start automatically; if I stop NetworkManager and run network, it says "Device does not seem to be present, delaying initialization." when trying to bring up the wireless. What shall I do to get rid of the problem? Thank you very much!
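
    Is something like this the proper way to hand the interface over to NetworkManager alone so the two services stop competing? This is only a guess on my part, and the interface name is a placeholder:

      # in /etc/sysconfig/network-scripts/ifcfg-wlan0, let NetworkManager own the device
      NM_CONTROLLED=yes
      ONBOOT=yes
      # then stop the network initscript from competing at boot (CentOS 6)
      chkconfig network off
      service NetworkManager restart

    Or would that just reproduce the "Device does not seem to be present" error in a different way?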

    Read the article

  • Server 2008 R2 stuck on installing updates

    - by volody
    I have a 2008 R2 64-bit server that is stuck on installing updates. It shows the message "Please do not power off or unplug your machine. Installing 57 of 61..". This message has been shown for 3 hours now. I was trying to connect to this server using mmc:

      runas /user:AdministratorAccountName@ComputerName "mmc %windir%\system32\compmgmt.msc"
      RUNAS ERROR: Unable to run - mmc C:\Windows\system32\compmgmt.msc
      1311: There are currently no logon servers available to service the logon request.

    What else can I do?

    Update: Remote Desktop does not work. It just disappears after entering the username and password from a Win7-64 computer.

    Update: I have disconnected the server. Now Server Manager in Roles Summary shows the message "Unexpected error refreshing Server Manager: The remote procedure call failed." The event log shows:

      Could not discover the state of the system. An unexpected exception was found:
      System.Runtime.InteropServices.COMException (0x800706BE): The remote procedure call failed. (Exception from HRESULT: 0x800706BE)
      at System.Runtime.InteropServices.Marshal.ThrowExceptionForHRInternal(Int32 errorCode, IntPtr errorInfo)
      at Microsoft.Windows.ServerManager.ComponentInstaller.ThrowHResult(Int32 hr)
      at Microsoft.Windows.ServerManager.ComponentInstaller.CreateSessionAndPackage(IntPtr& session, IntPtr& package)
      at Microsoft.Windows.ServerManager.ComponentInstaller.InitializeUpdateInfo()
      at Microsoft.Windows.ServerManager.ComponentInstaller.Initialize()
      at Microsoft.Windows.ServerManager.Common.Provider.RefreshDiscovery()
      at Microsoft.Windows.ServerManager.LocalResult.PerformDiscovery()
      at Microsoft.Windows.ServerManager.ServerManagerModel.CreateLocalResult(RefreshType refreshType)
      at Microsoft.Windows.ServerManager.ServerManagerModel.InternalRefreshModelResult(Object state)

    Update: A manual update hangs on (Installing update 58 of 62...) Security Update for Windows Server 2008 R2 x64 Edition (KB979309).

    Update: It could be an issue with the fact that the admin user was renamed.
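
    If it never finishes, the fallback I'm considering is reverting the pending updates offline from the recovery environment. As far as I understand it (untested, and assuming the system volume shows up as C: inside WinRE), that would be something along these lines:

      dism /image:C:\ /cleanup-image /revertpendingactions

    Is that a reasonable last resort, or is there something less drastic I should try first?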

    Read the article

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell service center and they told me the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit and I think it may be best to use disk imaging software for this task. However, I don't know of a good one. I have the following steps in mind:

    1. Get a 1 TB external HDD.
    2. Make an image of the existing 500 GB HDD and store the data on the external disk.
    3. Install the new HDD, install a brand new copy of Windows, and then install the imaging software on it.
    4. Using the same software I used to make the image, restore the old HDD image onto the new one.

    However, I have some questions in mind. First, is this possible? Second, I live in a country where piracy is a big issue and I am sure the support executive who comes to change the HDD will bring a pirated copy. But I have genuine Windows 7 Pro and don't want to lose it. Dell does not supply any OS disks, so I can't install it on the new HDD! If I follow the above steps, which version of Windows 7 will be retained? The one in the image (authentic) or the one on the new HDD (pirated)? I am ready to purchase good software for this task and my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows Migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Any other method to save all the data? Thanks in advance.
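
    In case a paid tool turns out to be unnecessary: would a plain sector copy from a Linux live CD do the same job? Something like the following is what I have in mind. The device names are guesses, and since the disk is reported as faulty, ddrescue may be a safer choice than dd:

      # old disk assumed to be /dev/sda, external drive mounted at /mnt/external
      dd if=/dev/sda of=/mnt/external/old-disk.img bs=4M conv=noerror,sync
      # later, with the new disk installed (also appearing as /dev/sda)
      dd if=/mnt/external/old-disk.img of=/dev/sda bs=4M

    My understanding is that restoring the image this way would bring back the genuine Windows 7 Pro installation exactly as it was, which is what I'm hoping for.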

    Read the article

  • How can I get Haproxy to not log local requests?

    - by coneybeare
    I am trying to clean out some of the log clutter from my machines and am starting by removing requests that are generated by the servers themselves. I have cache warmers running around the clock and I don't want these polluting the logs. I was able to get Apache to stop logging local requests by adding a dontlog variable for the local IP:

      SetEnvIf Remote_Addr "RE\.DA\.CT\.ED" dontlog
      CustomLog "|logger -p local3.info -t http" combined env=!dontlog

    and now I am looking for something similar to put in the configuration for the HAProxy log. How can I prevent 127.0.0.1 requests from writing to the HAProxy log?

    UPDATE 2/15/11: I use the excellent loggly service to pull out logs in the cloud, but I am seeing tons of logs like this:

      2011 Feb 15 06:09:42.000 ip-10-251-194-96 http: RE.DA.CT.ED - - [15/Feb/2011:06:09:42 -0500] "HEAD /search/Nevad/predictive/txt HTTP/1.0" 200 - "-" "Wget/1.10.2 (Red Hat modified)"
      2011 Feb 15 06:09:42.000 127.0.0.1 haproxy[10390]: 127.0.0.1:58408 [15/Feb/2011:06:09:42] www i-5dd7a331.0 0/0/0/8/8 200 210 - - --NI 0/0/0 0/0 "HEAD /search/Nevad/predictive/txt HTTP/1.1"

    and I want them gone. This question focuses on how to keep that haproxy log line from being written to the server-side log in the first place.
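
    Is the HAProxy equivalent something along these lines? This is only a guess on my part, and as far as I can tell it needs a reasonably recent HAProxy release; the version I'm on may not support it:

      # in the frontend/listen section that handles these requests
      acl from_local src 127.0.0.1
      http-request set-log-level silent if from_local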

    Read the article

  • Redirection of the outbound UDP port for NTP

    - by pboin
    For my residential service, I changed ISPs to Zoom/Armstrong. Just after that, my NTP daemons stopped working. I dug deep and diagnosed the problem: unprivileged ports are getting out. When I run 'ntpdate', for example, I go out on a high, unprivileged port and get a response on UDP 123. That's fine. The 'ntpd' daemon, though, expects to go out on 123 and get its reply there as well. This must be a common problem, because it's directly addressed in the NTP troubleshooting guide. Just to see what would happen, I wrote a detailed email to the general support address at Armstrong. They replied almost immediately with a complete technical answer! They have everything <1024 blocked, except for a few ports to support outbound VPN. So, the question: can I use iptables to essentially rewrite my outbound UDP 123 up to 2123 or something like that? If I do, does there need to be a corresponding 2123-to-123 rule to translate the reply? This seems like NAT, but with ports, not addresses. True, I could run ntpdate from cron, but that loses all of the adjustment smarts of NTP.
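
    Is this the sort of rule I'm after? The interface name is a guess for my setup, and my (possibly wrong) understanding is that connection tracking translates the reply back automatically, so no explicit 2123-to-123 rule would be needed:

      # rewrite the source port of outgoing NTP packets so replies come back above 1024
      iptables -t nat -A POSTROUTING -o eth0 -p udp --sport 123 --dport 123 -j MASQUERADE --to-ports 2123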

    Read the article

  • What to look for in a switch with LAN/WAN versus an iSCSI SAN?

    - by Luke
    I'm setting up a VMware ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches shared (iSCSI SAN, LAN/WAN). The S60 switches are extremely powerful. They have 1.25 GB of buffer cache and <9us latency. But they are very expensive (online price ~$15k per switch, actual quote a little less). I've been told that "by the book" you should have at least 2 internal switches for the SAN, and 2 switches for LAN/WAN (each with a redundant partner). I know some of the pros and cons of each approach. What I'm wondering is, would it be more cost effective to separate the SAN from the LAN with less expensive switches? The answer to this question highlights what I should be looking for in a switch for the SAN. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? With the above linked question for the SAN: How is buffer latency measured? When you see 36 MB of buffer cache, is that shared or per port? So would 36 MB be 768 KB per port, or 36 MB per port? With 3 to 6 servers, how much buffer cache do you really need? What else should I be looking at? Our application will be heavily using HTML5 WebSockets (a high number of persistent connections). The amount of data being sent is small; data sent between client and server isn't broadcast (it's not a chat/IM service). We will be doing some database reporting too (CSV export, sums, some joins). We are a small business on a budget. We'd probably be able to spend no more than $20k on switches total (2 or 4).

    Read the article

  • [tcpdump] Proxy DeleGate refusing connection?

    - by simtris
    Hi guys, I'm a little disappointed! My aim was to build a VERY simple SMTP proxy under Debian to handle mail on one port (51234) and forward it to the standard port 25. I compiled and installed DeleGate, which can handle that easily. It's working very well like this:

      delegated SERVER="smtp://anotherSmtpServer:25" -P51234

    The strange thing is, it works on my virtual test machine and on the dedicated server locally, but I can't manage to use it over the internet. I test it like this:

      telnet [mySrv] 51234

    Of course, no firewall, no deny host, no inetd/xinetd, and the delegated service is listening on the right port... 2 clues: the port answers over the internet, with nmap reporting it as "51234/tcp open tcpwrapped", and have a look at the following tcpdump:

      22:50:54.864398 IP [myIp].1699 > [mySrv].51234: S 2486749330:2486749330(0) win 65535
      22:50:54.864449 IP [mySrv].51234 > [myIp].1699: S 2486963525:2486963525(0) ack 2486749331 win 5840
      22:50:54.948169 IP [myIp].1699 > [mySrv].51234: . ack 1 win 64240
      22:50:54.965134 IP [mySrv].43554 > [myIp].auth: S 2485396968:2485396968(0) win 5840
      22:50:55.243128 IP [myIp] > [mySrv]: ICMP [myIp] tcp port auth unreachable, length 68
      22:50:55.249646 IP [mySrv].51234 > [myIp].1699: F 1:1(0) ack 1 win 46
      22:50:55.309853 IP [myIp].1699 > [mySrv].51234: . ack 2 win 64240
      22:50:55.310126 IP [myIp].1699 > [mySrv].51234: F 1:1(0) ack 2 win 64240
      22:50:55.310137 IP [mySrv].51234 > [myIp].1699: . ack 2 win 46

    The "auth" part seems suspect to me but doesn't ring a bell. I could certainly do with some help. Thx a lot!

    Read the article

  • Windows 7 SSH file server

    - by Siriss
    Hello all. I have looked at the other posts but have not quite found an answer, so I have a question about Windows file sharing over SSH. I have copssh installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with this command:

      ssh -l copsshusername -L 3391:localhost:3389 [external ip]

    That works fine. I would like to configure Windows 7 to allow the SSH account that I log in with access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copssh uses to run the service and that I log in as. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!
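
    To make it concrete, what I'm hoping for is something along these lines, assuming the copssh install includes the SFTP subsystem (the path here is just a guess based on its Cygwin layout):

      sftp copsshusername@[external ip]
      # or a one-off copy of something under C:\Users
      scp "copsshusername@[external ip]:/cygdrive/c/Users/Me/Videos/clip.mp4" .

    If there is a cleaner way to expose whole shared folders to that one SSH account, that would be even better.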

    Read the article

  • PTR and A record must match?

    - by somecallmemike
    RFC 1912 Section 2.1 states the following:

      Make sure your PTR and A records match. For every IP address, there should be a matching PTR record in the in-addr.arpa domain. If a host is multi-homed, (more than one IP address) make sure that all IP addresses have a corresponding PTR record (not just the first one). Failure to have matching PTR and A records can cause loss of Internet services similar to not being registered in the DNS at all. Also, PTR records must point back to a valid A record, not a alias defined by a CNAME. It is highly recommended that you use some software which automates this checking, or generate your DNS data from a database which automatically creates consistent data.

    This does not make any sense to me. Should an ISP keep matching A records for every PTR record? It seems to me that it's only important if the IP address that the PTR record describes is hosting a service that is sensitive to DNS being mismatched (such as email hosting). In that case the forward zone would be configured under a domain name (examples follow the format 'zone - record'):

      domain.tld -> mail IN A 1.2.3.4

    And the PTR record would be configured to match:

      3.2.1.in-addr.arpa -> 4 IN PTR mail.domain.tld.

    Would there be any reason for the ISP to host a forward lookup for an IP address on their network like this?

      ispdomain.tld -> broadband-ip-1 IN A 1.2.3.4
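
    To put the consistency requirement another way, the check I have in mind is simply that these two lookups agree with each other (using the example names above):

      dig +short mail.domain.tld A    # expect 1.2.3.4
      dig +short -x 1.2.3.4           # expect mail.domain.tld.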

    Read the article

  • How far should we take the N+N redundancy craziness?

    - by Brann
    The industry standard when it comes to redundancy is quite high, to say the least. To illustrate my point, here is my current setup (I'm running a financial service). Each server has a RAID array in case something goes wrong on one hard drive... and in case something goes wrong on the server, it's mirrored by another spare identical server... and both servers cannot go down at the same time, because I've got redundant power, and redundant network connectivity, etc... and my hosting center itself has dual electricity connections to two different energy providers, and redundant network connectivity, and redundant toilets in case the two security guards (sorry, four) need to use them at the same time... and in case something goes wrong anyway (a nuclear strike? can't think of anything else), I've got another identical hosting facility in another country with the exact same setup.

    Cost of reputational damage if down = very high
    Probability of a hardware failure with my setup: <<1%
    Probability of a hardware failure with a less paranoid setup: <<1% AS WELL
    Probability of a software failure in our application code: 1% (if your software is never down because of bugs, then I suggest you double-check that your reporting/monitoring system is not down. Even SQL Server - which is arguably developed and tested by clever people with a strong methodology - is sometimes down)

    In other words, I feel like I could host a cheap laptop in my mother's flat, and the human/software problems would still be my bigger risk. Of course, there are other things to take into consideration, such as: scalability, data security, and clients' expectations that you meet the industry standard. But still, hosting two servers in two different data centers (without extra spare servers, nor doubled network equipment apart from the one provided by my hosting facility) would provide me with the scalability and the physical security I need. I feel like we're reaching a point where redundancy is just a communication tool. Honestly, what's the difference between 99.999% uptime and 99.9999% uptime when you know you'll be down 1% of the time because of software bugs? How far do you push your redundancy craziness?

    Read the article

  • How to create an init.d script for openssh-server which was compiled and installed from source using configure + make + make install?

    - by Patrick L
    I have installed openssh-server on my Ubuntu PC using apt-get install openssh-server. The version is 5.9. Now, I would like to compile and install openssh-server version 6.2 from source. I have successfully downloaded the source code and run the following commands:

      ./configure
      make
      make install

    I found that the new version of openssh-server was installed into /usr/local/sbin/. The old version of openssh-server is in /usr/sbin/. I found that the service script in /etc/init.d/ssh still points to /usr/sbin/, and the old openssh-server (v5.9) is still running.

    How can I replace the old openssh-server with the new openssh-server that I have just compiled and installed?
    How can I create an init.d script to manually start and stop the new openssh-server that I've compiled from source?
    How do I start the new openssh-server on boot?
    When I install openssh-server using apt-get install, the config files are installed into /etc/ssh/. If I compile and install it from source, where is the config file?
    If I compile openssh-server from source, but I install the openssh-client package using apt-get install, will there be any config file conflicts?

    Thanks.
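
    To make the second question concrete: is a minimal wrapper like the one below roughly what such an init.d script should look like? This is only a sketch, and it assumes the default --prefix=/usr/local layout, so sshd ends up in /usr/local/sbin and its config in /usr/local/etc/sshd_config:

      #!/bin/sh
      ### BEGIN INIT INFO
      # Provides:          ssh-local
      # Required-Start:    $remote_fs $network
      # Required-Stop:     $remote_fs $network
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: OpenSSH 6.2 built from source in /usr/local
      ### END INIT INFO
      SSHD=/usr/local/sbin/sshd
      PIDFILE=/var/run/sshd-local.pid
      case "$1" in
        start)   $SSHD -o PidFile=$PIDFILE ;;                  # sshd must be invoked via its absolute path
        stop)    [ -f $PIDFILE ] && kill "$(cat $PIDFILE)" ;;
        restart) $0 stop; sleep 1; $0 start ;;
        *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
      esac

    I assume it would then be enabled at boot with update-rc.d ssh-local defaults, and that I'd have to disable or remove the packaged /etc/init.d/ssh first so the two daemons don't fight over port 22.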

    Read the article

  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working.

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases.

    Setup: IIS 7.5 under W2K8R2 trying to connect to a remote SQL Server 2008 R2 instance. All machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to SQL Server with the following:

      Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
      Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    However, if I switch the Process Model Identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the machine account (domain\machinename$) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the application pool identity. How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the anonymous logon?
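
    For reference, the grant I ran was along these lines (the domain, server, and database names are placeholders for my environment):

      -- on the SQL Server, for the web server's machine account
      CREATE LOGIN [MYDOMAIN\WEBSERVER$] FROM WINDOWS;
      USE MyDatabase;
      CREATE USER [MYDOMAIN\WEBSERVER$] FOR LOGIN [MYDOMAIN\WEBSERVER$];
      EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\WEBSERVER$';

    Even with that in place, the connection still arrives as ANONYMOUS LOGON when the pool runs as ApplicationPoolIdentity.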

    Read the article

  • My Reverse DNS PTR record seems to be right, but I'm still getting bouncing email

    - by johnbr
    Hello, I have a service (statusme.com) where I let people know (for example) that their kid's soccer games are cancelled because of bad weather. We send out emails to the people who have registered. I have a second server as a backup (vps.statusme.com), and I've set up the application to send some of the email through the second server. But I'm getting complaints from various recipient SMTP servers that the email is considered spam. So I did some investigating, and it appears that they think my reverse DNS record isn't correct. But when I look at it via various rDNS websites and instructions I found elsewhere on ServerFault, everything looks correct:

      jb$ host -t a vps.statusme.com 8.8.8.8
      Using domain server:
      Name: 8.8.8.8
      Address: 8.8.8.8#53
      Aliases:
      vps.statusme.com has address 66.84.8.246

      jb$ host -t ptr 246.8.84.66.in-addr.arpa 8.8.8.8
      Using domain server:
      Name: 8.8.8.8
      Address: 8.8.8.8#53
      Aliases:
      246.8.84.66.in-addr.arpa domain name pointer vps.statusme.com.

    I'm confused about what I'm doing wrong. Thanks for any suggestions.

    Read the article

  • Transferring an SQL Processor License to a virtual hosted environment

    - by Andrew Shepherd
    My company is currently hosting a service in-house, and we want to move to an externally hosted environment. We would then be using a virtual server. I understand that this might be spread across multiple machines, but from my perspective as a customer, this layer is abstracted away - I shouldn't know or care about the hardware that the OS is hosted on. We have a licensed edition of SQL Server 2008. This is one processor license. Would it be a violation of the licensing agreement to use this in a virtual environment? From the reference guide here, it says:

      When licensed Per Processor With Workgroup, Web, and Standard editions, for each server to which you have assigned the required number of per processor licenses, you may run, at any one time, any number of instances of the server software in physical and virtual operating system environments on the licensed server. However, the total number of physical and virtual processors used by those operating system environments cannot exceed the number of software licenses assigned to that server

    For Enterprise edition there is an added option: if all physical processors in a machine have been licensed, then you may run unlimited instances of SQL Server 2008 in one physical and an unlimited number of virtual operating environments on that same machine.

    I'm having trouble getting my head around this. Would I theoretically have to get a license for every processor in this virtual environment (which is effectively impossible because I have no way of knowing how many processors there actually are)? Or can I just say that it's hosted on one "virtual" server, so that's OK?

    Read the article

  • libvirt's dnsmasq does not respond to DNS queries or provide DHCP

    - by Jeremy
    This is on Ubuntu 10.04 server, using KVM to run Ubuntu guests. This system has been working for a long time and I have not changed anything (other than applying security updates), but today I found that dnsmasq no longer responds to requests. I cannot say how long this has been broken because I don't frequently use the NAT'd guests, so it could have started just after the last updates or some other event and I just now found it. I can connect to port 53 with telnet at 192.168.122.1. I've flushed iptables to be sure it wasn't firewall rules, and that is not the problem. dnsmasq is running, and virsh reports the default network as started. I can't find ANY information on troubleshooting libvirt's dnsmasq except that it won't play well with other instances of dnsmasq, which is not the problem. I cannot even find where log entries might be for this service. Any ideas on where to look for more information?

    Edit to add: I added another network and that one works fine. I guess I have a workaround, but I would still like to figure out how to troubleshoot this problem.
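
    For reference, these are the sorts of checks I know to run; am I missing an obvious one? (On this libvirt version I believe the dnsmasq options are passed on its command line rather than in a config file, so ps is the easiest way to see them.)

      virsh net-list --all                                    # confirm "default" is active and set to autostart
      ps aux | grep dnsmasq                                   # see the --listen-address / --dhcp-range it was started with
      dig @192.168.122.1 example.com                          # query the libvirt dnsmasq directly
      virsh net-destroy default && virsh net-start default    # bounce the network, which restarts its dnsmasq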

    Read the article

  • Django apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses OpenPanel to automatically create VirtualHosts within apache2. My ideal situation is that I would have multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across different Django sites. For example:

      /home/user/virtualenv1
      /home/user/virtualenv2

    My Django applications reside at /var/www. For example:

      /var/www/djangosite1
      /var/www/djangosite2

    Now I've read up on the OpenPanel docs and figured the best thing to do is create a django.conf file inside the mydomain.com.inc folder, which looks something like this:

      # /etc/apache2/openpanel.d/mydomain.com.inc/django.conf
      DocumentRoot /var/www/djangosite1/project
      WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
      WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
      <Directory /var/www/djangosite1/project>
          Order allow,deny
          Allow from all
      </Directory>
      Alias /static /var/www/djangosite1/project/static-root

    Now my problem is that this setup seems unable to find the virtualenv site-packages, and thus doesn't recognize any dependencies available in the given virtualenv. Also, commenting out this line doesn't seem to break or change a thing:

      WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages

    For example:

      > service apache2 start
      ImportError: No module named South

    When I install South outside the virtualenv, everything works.
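
    My guess (and it is only a guess) is that the daemon process is defined but never used, because nothing assigns the application to it; that would also explain why commenting out the WSGIDaemonProcess line changes nothing. Something like this is what I mean:

      WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
      WSGIDaemonProcess mydomain python-path=/var/www/djangosite1:/home/user/virtualenv1/lib/python2.6/site-packages
      WSGIProcessGroup mydomain

    Is adding WSGIProcessGroup (and the project directory on the python-path) the right direction, or is the virtualenv supposed to be wired up some other way?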

    Read the article

  • Exchange 2003: recover mailbox when migrated to 2007

    - by lyngsie
    We have migrated almost all of our mailboxes from Exchange 2003 to 2007, so both are still up and co-existing in our domain. One mailbox is missing some mail, and I tried to recover it from an Exchange 2003 backup using ExMerge as usual, but it throws an error:

      Error opening message store (MSEMS). Verify that the Microsoft Exchange Information Store service is running and that you have the correct permissions to log on. (0x8004011d)

    This is apparently normal when a mailbox has been migrated to 2007 and you try to recover a 2003 backup, as stated here: http://www.petri.co.il/restoring-exchange-2003-mailboxes-exchange-2007-exmerge.htm

    We then tried the solution given in the article (copying the value "homemdb" for the user and pasting it into "msExchOrigMDB" on the Recovery Storage Group object in adsiedit). The error unfortunately persists. So my question is: how can I extract a .pst from a mailbox in the recovery storage group when both Exchange 2003 and 2007 are running? I assume this is why the solution in the article didn't work. If no solution is possible using regular tools (exmerge, eseutil, etc.), which third-party tool should we choose - preferably the cheapest, as we only need this one mailbox recovered.

    Read the article

  • CentOS 6.3 virtual server under OpenVZ cannot ping, do host lookups, or make outbound connections while Postfix is running

    - by Paul Cravey
    My best theory is that some kernel limit is being hit, preventing outbound connections. We have tried basically everything from tcpdumps to provisioning an entirely new virtual server (we do not have this problem on any other virtuals), however the problem somehow carried over, even with a new Postfix build (working). Emails work, and outbound connections work, so long as Postfix does not have too much going on. /proc/user_beancounters shows no limits being hit (shown below). Nevertheless, pings fail even to IP addresses. The TCP stack appears healthy. Load is low. No iowait. Flushed iptables already. Has anyone experienced anything like this?

      uid  resource      held       maxheld    barrier              limit                failcnt
      3:   kmemsize      166216365  170262528  9223372036854775807  9223372036854775807  0
           lockedpages   0          0          9223372036854775807  9223372036854775807  0
           privvmpages   285727     351885     9223372036854775807  9223372036854775807  0
           shmpages      16933      17605      9223372036854775807  9223372036854775807  0
           dummy         0          0          0                    0                    0
           numproc       150        303        9223372036854775807  9223372036854775807  0
           physpages     314156     326191     0                    1280000              0
           vmguarpages   0          0          9223372036854775807  9223372036854775807  0
           oomguarpages  165355     165355     9223372036854775807  9223372036854775807  0
           numtcpsock    89         172        9223372036854775807  9223372036854775807  0
           numflock      22         76         9223372036854775807  9223372036854775807  0
           numpty        1          2          9223372036854775807  9223372036854775807  0
           numsiginfo    0          75         9223372036854775807  9223372036854775807  0
           tcpsndbuf     2733472    4371752    9223372036854775807  9223372036854775807  0
           tcprcvbuf     1798336    5427296    9223372036854775807  9223372036854775807  0
           othersockbuf  491120     1000760    9223372036854775807  9223372036854775807  0
           dgramrcvbuf   0          238728     9223372036854775807  9223372036854775807  0
           numothersock  361        505        9223372036854775807  9223372036854775807  0
           dcachesize    135941831  136114679  9223372036854775807  9223372036854775807  0
           numfile       2905       4990       9223372036854775807  9223372036854775807  0
           dummy         0          0          0                    0                    0
           dummy         0          0          0                    0                    0
           dummy         0          0          0                    0                    0
           numiptent     8          9          9223372036854775807  9223372036854775807  0

      [root@bni /]# ping 4.2.2.1
      PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data.
      --- 4.2.2.1 ping statistics ---
      9 packets transmitted, 0 received, 100% packet loss, time 8493ms
      [root@bni /]# service postfix stop
      [root@bni /]# ping 4.2.2.1
      PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data.
      64 bytes from 4.2.2.1: icmp_seq=1 ttl=53 time=8.63 ms
      64 bytes from 4.2.2.1: icmp_seq=2 ttl=53 time=8.62 ms
      64 bytes from 4.2.2.1: icmp_seq=3 ttl=53 time=8.63 ms
      64 bytes from 4.2.2.1: icmp_seq=4 ttl=53 time=8.66 ms

    Outbound connections of all sorts fail when Postfix is running.
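
    One thing I still plan to rule out, since it would not show up in the beancounters, is the netfilter connection-tracking table filling up while Postfix is busy. This assumes the nf_conntrack module is in use on the container or the hardware node; the checks would be roughly:

      cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max
      dmesg | grep -i conntrack     # "table full, dropping packet" here would explain the symptoms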

    Read the article

  • How to record TV to a network share with Windows Media Center?

    - by Peterdk
    Well, you would think that Windows 7's new Media Center would be up to the task of recording your TV to a network share/drive. Too bad, it looks like it's just not possible. I have a Windows 2008 R2 server, and a Windows 7 machine with a TV card. Since my server has 2 TB of storage, it would be nice to record directly to its networked drive (I mounted it as Z:). I tried the following:

    Selecting it in Media Center itself: not working, not available.
    Editing the registry: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Media Center\Service\Recording, setting RecordPath to Z:\TV. Not working.
    Editing the registry: setting RecordPath to \\server\TV. Not working.
    Creating a symlink (mklink /D) to Z:\TV and \\server\TV and setting that in the registry as RecordPath.

    Currently I am out of options. I could of course install Windows 7 on my server, but I have no license for that, and my Windows 2008 R2 is free from DreamSpark. Are there people who are successfully recording to a networked drive/storage?

    Edit: I also need to mention that I need to be able to access the stored files from other PCs, like my laptop. So iSCSI is great for recording, but it looks like you can't access iSCSI devices from multiple PCs. Sharing an iSCSI device seems to be out of the question, so: are there workarounds to get this thing recording to my network drive?
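
    For completeness, the symlink/registry combination I attempted was along these lines (the local path is made up, run from an elevated prompt):

      mklink /D "C:\RecordedTV" "\\server\TV"
      reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Media Center\Service\Recording" /v RecordPath /t REG_SZ /d "C:\RecordedTV" /f

    Media Center still would not accept it as a recording target.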

    Read the article

  • 503 Error After Microsoft Application Request Routing Is Installed - 32-bit/64-bit madness

    - by KenB
    I have a requirement to install the Microsoft Application Request Routing component for IIS 7.5 running on a Windows 2008 R2 SP1 64-bit machine. After installing Application Request Routing via the Web Platform Installer, our ASP.NET 4.0 application gets an "HTTP Error 503. The service is unavailable." The Windows event log error details say:

      The Module DLL 'C:\Program Files\IIS\Application Request Routing\requestRouter.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349.

    I can make this error go away by changing the application pool to run in 32-bit mode, by setting "Enable 32-Bit Applications" to true. However, I would prefer not to have to do that to resolve the issue. My questions are:

    Why is the Application Request Routing feature trying to load a 32-bit version? Isn't there a 64-bit version of it?
    How do I resolve this issue without having to change my application pool to 32-bit mode?
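
    For reference, the workaround I'd rather avoid can also be applied from the command line (the pool name is a placeholder for ours):

      %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true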

    Read the article

  • RAID 5 with hot spare or RAID 10 with no hot spare?

    - by Boden
    Yes, this is one of those "do my job for me" questions, have some pity :) I'm at the limit of what I can do with the number of hard drives in the server without spending a substantial amount of money. I have four drives left to configure, and I can either set them up as a RAID 5 with a dedicated hot spare, or a RAID 10 with no hot spare. The size of each will be the same, and the RAID 5 will offer enough performance. I'm RAID 5 shy, but I also don't like the idea of running without a hot spare. I'm not so worried about degraded performance as I am about the amount of time the system is without adequate redundancy. The server and drives are under a 13x5, 4-hour-response contract (although I happen to know that the nearest service provider is at least 2-3 hours away by car in the winter). I should note that the server also has two RAID 1 arrays which would also be protected by the hot spare. Why don't they make drive cages with 9 bays! Heh.

    Read the article

  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a thinkpad t61 with a UPEK fingerprint reader. I'm running ubuntu 9.10, with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session. I receive an error below the gnome login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? here are the contents of /etc/pam.d/common-auth file

      #
      # /etc/pam.d/common-auth - authentication settings common to all services
      #
      # This file is included from other service-specific PAM config files,
      # and should contain a list of the authentication modules that define
      # the central authentication scheme for use on the system
      # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
      # traditional Unix authentication mechanisms.
      #
      # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
      # To take advantage of this, it is recommended that you configure any
      # local modules either before or after the default block, and use
      # pam-auth-update to manage selection of other modules. See
      # pam-auth-update(8) for details.
      # here are the per-package modules (the "Primary" block)
      auth sufficient pam_fprint.so
      auth [success=1 default=ignore] pam_unix.so nullok_secure
      # here's the fallback if no module succeeds
      auth requisite pam_deny.so
      # prime the stack with a positive return value if there isn't one already;
      # this avoids us returning an error just because nothing sets a success code
      # since the modules above will each just jump around
      auth required pam_permit.so
      # and here are more per-package modules (the "Additional" block)
      auth optional pam_ecryptfs.so unwrap
      # end of pam-auth-update config
      #auth sufficient pam_fprint.so
      #auth required pam_unix.so nullok_secure

    Read the article
