Search Results


  • Not able to use FTP account created with pure-ftpd

    - by user1513613
    I made a new user using this command:

      pure-pw useradd droa -u 52007 -g 52009 -d /home/droa/public_html

    But when I connect using FTP, it says "Login authentication failed". What other setting do I need to use? I also have cPanel, which I used to create accounts. I checked the /etc/pureftpd.passwd file as well, but it only had the one user I created, and I don't know which FTP server cPanel uses. The pure-ftpd documentation says I need to compile with --with-everything. Is there any way to do that without re-compiling?
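
    If pure-ftpd is using the PureDB authentication backend, entries added with pure-pw only take effect after the database is rebuilt. A minimal sketch, assuming the PureDB backend and default file paths:

      # commit /etc/pureftpd.passwd into the PureDB database pure-ftpd reads
      pure-pw mkdb /etc/pureftpd.pdb

      # verify the account as pure-ftpd sees it
      pure-pw show droa

      # confirm how the running daemon was started (which auth backends it uses)
      ps aux | grep [p]ure-ftpd

    Note that cPanel ships its own pure-ftpd configuration, so accounts created manually may be looked up against a different backend than the one cPanel manages.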

  • Tracking a subdomain separately within the main domain account [closed]

    - by Vinay
    I have a website, for example xyz.com, and a subdomain, info.xyz.com. I created a profile for xyz.com and tracking is good. I then added a new profile for info.xyz.com under the xyz.com account, but Analytics tracking for info.xyz.com is showing traffic from both xyz.com and info.xyz.com. How do I change it so that only info.xyz.com traffic shows in the info.xyz.com profile? I used the following code.

    Analytics code for the xyz.com domain:

      <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
        _gaq.push(['_trackPageview']);
        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();
      </script>

    Analytics code for info.xyz.com:

      <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-xxxxxx-x']);
        _gaq.push(['_setDomainName', 'xyz.com']);
        _gaq.push(['_trackPageview']);
        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();
      </script>

  • Tool/Program/Script/Formula for deciphering Active Directory Connection Strings for 3rd party user i

    - by I.T. Support
    We're using WS_FTP, which has an Active Directory integration module. To populate the user accounts you need to provide a connection string akin to:

      OU=Users,DC=domain,DC=com
      CN=Domain Users,OU=Users,DC=domain,DC=com

    Questions:

      - Is there a tool/program/script/formula that lets me work out what these strings should look like, based on what I can see in Active Directory Users & Computers?
      - Is there a proper/accepted name for these types of connection strings? I don't even know what to Google to get more information about how to format one properly.
      - How would I troubleshoot a connection string that looks correctly formatted but isn't working?

    Thanks!
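
    These strings are LDAP distinguished names (DNs; their string form is defined in RFC 4514), which gives you the search term for the second question. For the first and third, the dsquery tools on a domain-joined machine print the exact DN of existing objects, so you can copy rather than guess. A sketch, with the name filters as illustrative placeholders:

      rem print the distinguished names of matching user accounts
      dsquery user -name "John*"

      rem print the DN of a group, e.g. the one used in the example string
      dsquery group -name "Domain Users"

      rem check whether a container such as the Users OU actually exists
      dsquery ou -name Users

    Comparing a DN that dsquery prints against the one you typed into WS_FTP is usually the quickest way to spot a wrong or missing OU/CN component.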

  • Automatic OS reset on a dedicated internet-browsing Windows 7 PC

    - by camelCase
    I have just purchased a new Acer Revo nettop PC for dedicated internet browsing. It will be the only PC on a home network. My original plan was to install one virtual PC for family browsing, another for remote web-based server administration, and to ban browser use on the host Windows 7 OS. The idea was that I could recover to a fresh VHD image once a week to eliminate any build-up of malware inside the browser VMs. However, I am now looking for alternative solutions, since the Intel Atom CPU does not have the hardware VT support that Windows Virtual PC requires. Would it be possible to engineer some type of routine overnight host OS wipe and recovery? I guess cyber cafes do something like this? The only user data that would need to be retained across a recovery is browser bookmarks, but these could be exported to a remote service.

    Edit 1: I am thinking the OS reset could be done via some disk-image recovery process.

    Edit 2: Just had a brainwave. Routine browsing could be done via the new Google Chrome OS. I have just seen a video of Google Chrome OS booting off a USB pen drive in seconds.
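
    One way to realise the disk-image idea from Edit 1, sketched on the assumption that the machine runs Windows 7 Ultimate or Enterprise (the editions that support native VHD boot): boot the browsing environment from a VHD copy and overwrite that copy on a schedule from the host install. The paths, task name and {guid} below are illustrative placeholders:

      rem one-time setup: add a boot entry that boots from the browsing VHD
      bcdedit /copy {current} /d "Browsing (VHD)"
      bcdedit /set {guid} device vhd=[C:]\vhd\browse.vhd
      bcdedit /set {guid} osdevice vhd=[C:]\vhd\browse.vhd

      rem nightly reset, run while booted into the host OS: discard the dirty image
      schtasks /create /tn "ResetBrowseVHD" /sc daily /st 03:00 ^
        /tr "cmd /c copy /y C:\vhd\browse-master.vhd C:\vhd\browse.vhd"

    The obvious caveat is that the scheduled copy can only run while the machine is booted into the host OS, so in practice the reset happens at the first reboot into the host rather than strictly overnight.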

  • Windows Server 2003 R2: "System" process takes large chunks of CPU time

    - by Dabu
    I have a domain controller running Windows Server 2003 R2. The server behaves very well when restarted daily; however, on any day it is not restarted, a process called "System" takes enormous chunks of CPU time (up to 95%). The server supports AD, WINS and DNS, has Kaspersky Endpoint Security running, and manages backups via ARCserve 15. What I have tried so far: Process Explorer (ex-Sysinternals) shows that the "System" process has no sub-processes. In the "Threads" tab of the detailed view I can see that 90% of the CPU time is used up by "ntkrnlpa.exe+0x803c0". The "Interrupts" pseudo-process is running at 3-5% of CPU time; I'm not sure whether this accounts for the amount of CPU time that System takes.
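
    To turn "ntkrnlpa.exe+0x803c0" into a readable function name, point Process Explorer at Microsoft's public symbol server (Options > Configure Symbols, using the dbghelp.dll from the Debugging Tools for Windows), or set the standard symbol path variable. A sketch, with the local cache directory as an illustrative choice:

      rem cache symbols locally, fetching from Microsoft's public symbol server
      setx _NT_SYMBOL_PATH "srv*C:\symbols*http://msdl.microsoft.com/download/symbols"

    A resolved thread start address usually identifies which kernel subsystem or driver is burning the CPU; with an antivirus filter driver and a backup agent on the box, those are plausible suspects worth testing by temporarily disabling one at a time.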

  • SharePoint 2007: access denied when accessing user profiles via SSP

    - by user22215
    Guys, I have a really strange problem with SharePoint My Sites. Today I went into User Profiles and Properties in order to set up a property, and all of a sudden I got "access denied". First off, I know that I'm logged in with the correct account. After the access denied, I decided to click on Personalization Services and Permissions, and I then get: "An unhandled exception occurred in the user interface. Exception Information: Cannot complete this action." I'm not seeing anything in the server application logs either. So, have any of you seen this before? Is there some way to grant a user account the "Manage User Profiles" permission using stsadm? BTW, all other functions of the SSP are working fine, so my question is: if the user profiles and My Sites portion of an SSP tanks, how do you repair that portion of the SSP? The user accounts that I'm using are site collection owners and also have Full Control at the web application level. I actually ran across this interesting post, but it does not really help my problem: http://blog.tylerholmes.com/2008/09/access-denied-for-site-collection.html

  • Where should my application setup put the binary executables in Windows 7?

    - by KeyboardMonkey
    I created a small Windows app and am building an installer for it using NSIS, but what I can't find out is where to put the executables to conform to the new Windows security model. Traditionally we put program files in, well, C:\Program Files. With the security model getting stricter with each Windows version, some users have restricted accounts, and I'm not sure installing into Program Files will work for those users. Where can I install my program's files so as to cater for these lower-privileged users? Oh, and I want to avoid ClickOnce.

  • How to find a spyware DLL launched using svchost.exe

    - by Sheen
    This weekend I found my PC was possibly infected by some virus or spyware. There is a "svchost.exe -k netsvcs" process in my Task Manager, and it is running under my user name rather than a SYSTEM account. There is already another process with the same command-line options running under the SYSTEM account. This user-account svchost.exe consistently consumes 50% CPU (one of the two cores of my CPU). In Process Explorer, I can see it was started by explorer.exe instead of services.exe. However, I have failed to find its real service DLL location in the registry or on disk. Does anyone know how to find this malicious program?
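
    Since Process Explorer shows the parent as explorer.exe and the owner as your user account, it is quite possibly not a real service host at all but malware named svchost.exe running from some other path, in which case there is no ServiceDll to find. A couple of checks, as a sketch (the <ServiceName> below is a placeholder):

      rem list the genuine service hosts and the services inside each
      tasklist /svc /fi "imagename eq svchost.exe"

      rem show the full image path of every process named svchost.exe;
      rem anything outside C:\Windows\System32 is suspect
      wmic process where "name='svchost.exe'" get processid,commandline,executablepath

      rem if it does turn out to be a real service, its DLL is registered here
      reg query "HKLM\SYSTEM\CurrentControlSet\Services\<ServiceName>\Parameters" /v ServiceDll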

  • How can I copy a SQL Azure Database to a server on a different subscription?

    - by Tragedian
    I'm trying to create a copy of a SQL Azure database. The source and destination servers are associated with two different subscriptions, but they are located in the same data centre. I've been reading "Copying Databases in Windows SQL Azure Database" and "How to: Copy Your Databases (Windows Azure SQL Database)" for instructions on this, but I'm not sure my scenario is covered. I would like to use the CREATE DATABASE Database1B AS COPY OF Database1A; command, but I don't know what the implications are for the accounts used, or what I need to set up between the two servers before this command is possible. Has anybody achieved this type of copy who can elaborate?

  • SBS2011 Standard DNS suddenly not resolving some domains

    - by Matt
    Suddenly today I am unable to resolve common domains like serverfault.com and facebook.com, but other domains like google.com and cnn.com work fine. This is on a client machine (Win7 Pro) connected to an SBS2011 Standard domain. The only DNS server is the SBS2011 server. The same domains work, and the same ones fail, on every client PC I have tried. Using nslookup, I get 'no such domain' errors for facebook.com, and the correct DNS entries for the ones that do work. When I add Google's Public DNS to my client PC as a backup (primary = local SBS server, secondary = 8.8.8.8), everything works fine for my client PC, but queries from the SBS server directly or from other client PCs are still broken (so I don't believe it's a firewall issue). My main question is: how can I see which servers the SBS2011 server queries when it doesn't know about a domain? There is nothing in our firewall logs saying any DNS-based packets were blocked, but I also wanted to query, by IP/FQDN, the servers the SBS server was likely to contact to find out about facebook.com, for example.

    Update 23/05/2012: It appears DNS is working again this morning for the affected websites. Both the DC on its own and all client PCs can once again access the websites that were not loading last night, as well as the websites that were working. I haven't changed anything overnight, so it appears there was some kind of temporary glitch, but I can't understand what would have caused it on the network.
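
    A Windows DNS server resolves names it isn't authoritative for either through its configured forwarders or, if none are set, through the root hints, so those are the upstream servers to check. Some quick checks from the SBS box itself, as a sketch using standard tools:

      rem full query/response trace against the local DNS server
      nslookup -debug facebook.com 127.0.0.1

      rem compare with a known-good public resolver
      nslookup facebook.com 8.8.8.8

      rem drop any cached negative answers before re-testing
      dnscmd /clearcache

    The DNS Manager console can also log every query and referral (server properties > Debug Logging), which shows exactly which upstream server returned the 'no such domain' answer; a cached negative response from a flaky forwarder would fit the symptoms, including the overnight recovery.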

  • In 2011, what are the reasons to stick with plain text mails?

    - by Aaron Digulla
    People entering college today have never known a world without the Internet. HTML was invented around 1990, more than twenty years ago, a full generation. But plain-text mails are still common despite all their problems:

      - Encoding issues
      - Wrapped code segments
      - No links
      - No way to use the "a picture says more than a thousand words" lore

    Most of the security risks are now handled by the underlying browser engine and smart settings like:

      - Don't allow JavaScript in mails
      - Don't execute attachments
      - Don't download external resources (like web bugs)

    On top of that, only very few people still read mail exclusively in command-line tools like Mutt. Knowing Mutt myself, I'm pretty sure you can configure it to display HTML mail with, say, w3m. And most HTML-capable mail clients send two versions of the mail (plain text alongside the HTML part). I'm not sure there are any people left on the planet who still use a 56 kbit/s modem to access their mail accounts. So what reasons are left to stick with plain-text mails in 2011?

  • SSO solution and centralized user management for about 10-30 Ubuntu machines?

    - by nbr
    Hello, I'm looking for a clean way to centralize user management. The setup:

      - About 10-30 Linux machines (Ubuntu 10.04 LTS server)
      - Maybe 10-30 users for now

    The requirements (hopes and expectations):

      - A single place for the administrator to manage user accounts, passwords, and the list of machines each user has access to (and probably groups). Doesn't have to be fancy.
      - Single sign-on for SSH: the user should be able to log in from machine A to machine B without re-entering his/her password.

    Quick Google searches give me pointers to OpenLDAP and Kerberos, but I'm not sure where to start and what problem each solution actually solves. Which way to go? I'd love to find a clear guide that focuses on this subject. (Or am I asking "the wrong question"?)
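
    Roughly: LDAP solves the single-place-to-manage-accounts requirement, Kerberos solves the single sign-on requirement, and the common stack combines them (directly, or via something like FreeIPA). Once the machines are joined to a Kerberos realm, the SSH hop looks like this. A sketch, assuming sshd on the target has GSSAPIAuthentication enabled and a host keytab, with alice and machineB as placeholder names:

      # get a ticket once, at login
      kinit alice

      # hop between machines without re-entering the password
      ssh -o GSSAPIAuthentication=yes -o GSSAPIDelegateCredentials=yes machineB

      # inspect the tickets you currently hold
      klist

    LDAP alone (accounts via nss/pam_ldap) gets you centralized accounts but still prompts for a password on each hop; it's the Kerberos ticket that makes the second ssh silent.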

  • Ubuntu server apt-get says "(-5 - No address associated with hostname)"

    - by Srini
    I have an Ubuntu 12.04 server. Running sudo apt-get update on it produces errors like this:

      W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/precise-backports/main/binary-i386/Packages  Something wicked happened resolving 'au.archive.ubuntu.com:http' (-5 - No address associated with hostname)

    I am able to ping all the other hosts on the network and also Google's DNS 8.8.8.8, but I am unable to ping www.google.com. So I'm guessing something is wrong with my DNS setup, but I'm not sure what. I use a static IP and my /etc/network/interfaces looks like this:

      auto eth0
      iface eth0 inet static
      address 192.168.1.50
      netmask 255.255.255.0
      network 192.168.1.0
      broadcast 192.168.0.255
      gateway 192.168.1.1
      #dns-nameserver 203.12.160.35 203.12.160.36
      #nameserver 203.12.160.35 203.12.160.36

    My /etc/resolv.conf and /etc/resolvconf/resolv.conf.d/base are both empty, and my /etc/resolvconf/resolv.conf.d/original says:

      nameserver 192.168.1.1

    Any help would be greatly appreciated.

    P.S. I've googled it a bit and the common resolution is to switch to DHCP, which I don't want to do since this is my home server. Thanks, Srini
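
    Two things stand out in the stanza above: the resolvconf directive is dns-nameservers (plural), and both candidate lines are commented out, which would leave /etc/resolv.conf empty, exactly as described (the broadcast address also looks off for a 192.168.1.0/24 network). A corrected stanza, as a sketch using Google's public resolvers as illustrative nameservers:

      auto eth0
      iface eth0 inet static
          address 192.168.1.50
          netmask 255.255.255.0
          network 192.168.1.0
          broadcast 192.168.1.255
          gateway 192.168.1.1
          dns-nameservers 8.8.8.8 8.8.4.4

    Then sudo ifdown eth0 && sudo ifup eth0 (or sudo resolvconf -u) should regenerate /etc/resolv.conf with the nameserver entries.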

  • Microsoft CALs for Domain Controller

    - by Damo
    I am designing a network and I've come to the point of specifying the number of CALs required for it. Microsoft licensing has always confused me; it's just never been clear to me. I plan to have one Windows Server 2008 Standard domain controller, another 2008 server (not a domain controller), and 200 Windows 7 devices connected to the domain for domain services. The 200 Windows 7 devices will all authenticate to the domain controller with the same domain account (this is a special type of network, not a user workstation network). Therefore, do I need to purchase 200 device CALs for the 200 devices, or can I purchase, say, 10 user CALs, as the number of unique user accounts is very low? Many thanks for looking.

  • How to defend against botnet HTTP requests

    - by Killercode
    I have a server with WHM + cPanel, and 5 of my customers got infected with Zbot. This means that the domains they host are constantly receiving requests to certain destinations. I tried to use mod_security, but it seems that it can't filter every request; I don't really know why. I still see the connections coming in in the access log, and they are consuming a LOT of bandwidth and server load. Those accounts have already been cleaned, so all of those requests now go to a 404 error (for the ones caught by mod_security, I am dropping the connection). Are there any more ways to defend against these requests?
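
    mod_security only sees requests Apache has already accepted, so it can't reduce the connection load by itself. One option is to rate-limit new connections per source IP at the firewall before Apache ever sees them; a sketch using iptables' recent module, with the thresholds as illustrative values to tune:

      # track each source address opening new HTTP connections
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
      # drop sources opening more than 20 new connections in 60 seconds
      iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent \
              --update --seconds 60 --hitcount 20 --name HTTP -j DROP

    Since the bot requests hit known URLs that now return 404, another common approach is fail2ban watching the access log for those paths and inserting temporary bans.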

  • Enabling SFTP Access within Plesk

    - by spelley
    I have a client who wants to ensure his uploads are secure, so we are trying to enable SFTP for him on our Linux Plesk server. I have enabled SSH access to /bin/bash for FTP accounts and created a new user. When I attempt to SFTP using either the IP address or the domain name, this is the error FileZilla gives me:

      Error: Authentication failed.
      Error: Critical error
      Error: Could not connect to server

    Here is some basic information regarding the server:

      Operating system: Linux 2.6.24.5-20080421a
      Plesk Control Panel version: psa v8.6.0_build86080930.03 os_CentOS 5

    I had read in some places that I should restart the SSH service under Server > Services; however, there is no SSH service in the list. I'm not really a server guy, so it's quite possible I'm missing something obvious. Thanks for any help that you guys can provide!
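
    SFTP is a subsystem of sshd, not of Plesk's FTP service, which is probably why no SSH service appears in Plesk's list: sshd is managed by the OS, not by Plesk. Some checks from a shell on the CentOS box, as a sketch with "username" as a placeholder:

      # the user must have a real shell (e.g. /bin/bash), not /bin/false
      grep username /etc/passwd

      # sshd must declare the sftp subsystem (present by default on CentOS 5)
      grep -i '^Subsystem' /etc/ssh/sshd_config

      # restart sshd from the OS, not from Plesk
      service sshd restart

      # test verbosely from a client to see exactly where authentication fails
      sftp -v username@your.server.ip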

  • Tools to manage large network of heterogeneous web applications?

    - by Andrew
    I recently started a new job where I've been tasked with managing a global network of heterogeneous web applications. There's very little documentation. My first order of business is to create an inventory of all of the web applications. Are there any tools out there to manage a large group of web apps? I'd like to collect a large dataset for each website, including:

      - logins for web-based control panels
      - logins to FTP/SSH accounts
      - the Google Analytics tracking code for each site
      - 3rd-party libraries used
      - SSL certs, their issuers, and expiration dates
      - etc.

    I know I could keep the information in Excel or build a custom database, but I'm hoping there's already a tool out there to help me with this.

  • Leave Windows Session Logged On

    - by Kyle Brandt
    Is it a bad idea, for any reason, to leave accounts logged onto Windows Remote Desktop sessions? That is, instead of logging off, just closing the session so it locks. In this case, the limited number of Remote Desktop connections is not an issue. I am just wondering if anyone has seen sessions leak memory over time, or security issues with doing this, etc. I could see that if programs were left open they might suck up or leak memory, but has anyone seen this with Microsoft software such as control panels, management consoles, and Exchange System Administrator?

  • Intermittent name resolution issues on server

    - by bobthemac
    I am having some weird issues with my web server. It has a public IP address and is set up in an OpenVZ virtual machine. Accessing the site from outside works fine every time, but when trying to connect out from the server I can't always connect. Sometimes I can connect out and resolve addresses, sometimes I can't. The issue is visible over SSH when trying to do a wget on Google: sometimes it works and I get the index.html page, and sometimes I get nothing. The issue is more visible in WordPress, where you can't view themes, but after a few presses of the try-again button you can then view them. I have searched Google and found nothing about this issue. Does anyone here have any ideas what could be causing this strange behaviour? Ports 80 and 2222 are open for web and SSH.

    Failed:

      17:26:33.398412 IP 86.148.184.124.38445 > 176.9.36.252.http: Flags [.], ack 98383, win 632, options [nop,nop,TS val 3070086 ecr 323106946], length 0

    Passed:

      17:30:00.179630 IP 146.90.206.241.50091 > 176.9.36.252.http: Flags [F.], seq 1, ack 1, win 115, options [nop,nop,TS val 13740559 ecr 323308537], length 0

    Thanks in advance

  • OpenVPN + iptables / NAT routing

    - by Mikeage
    Hi, I'm trying to set up an OpenVPN VPN which will carry some (but not all) traffic from the clients to the internet via the OpenVPN server. My OpenVPN server has a public IP on eth0 and uses tap0 to create a local network, 192.168.2.x. I have a client which connects from local IP 192.168.1.101 and gets VPN IP 192.168.2.3. On the server, I ran:

      iptables -A INPUT -i tap+ -j ACCEPT
      iptables -A FORWARD -i tap+ -j ACCEPT
      iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On the client, the default route remains via 192.168.1.1. In order to point HTTP traffic at 192.168.2.1, I ran:

      ip rule add fwmark 0x50 table 200
      ip route add table 200 default via 192.168.2.1
      iptables -t mangle -A OUTPUT -j MARK -p tcp --dport 80 --set-mark 80

    Now, if I try accessing a website on the client (say, wget google.com), it just hangs there. On the server, I can see:

      $ sudo tcpdump -n -i tap0
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on tap0, link-type EN10MB (Ethernet), capture size 96 bytes
      05:39:07.928358 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 558838 0,nop,wscale 5>
      05:39:10.751921 IP 192.168.1.101.34941 > 74.125.67.100.80: S 4254520618:4254520618(0) win 5840 <mss 1334,sackOK,timestamp 559588 0,nop,wscale 5>

    where 74.125.67.100 is the IP it gets for google.com. Why isn't the MASQUERADE working? More precisely, I see the source showing up as 192.168.1.101; shouldn't there be something to indicate that it came from the VPN?

    Edit: Some routes [from the client]:

      $ ip route show table main
      192.168.2.0/24 dev tap0 proto kernel scope link src 192.168.2.4
      192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.101 metric 2
      169.254.0.0/16 dev wlan0 scope link metric 1000
      default via 192.168.1.1 dev wlan0 proto static
      $ ip route show table 200
      default via 192.168.2.1 dev tap0
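
    The source address is chosen when the socket is created, before the fwmark rule reroutes the packet onto tap0, so packets leave the client with the wlan0 address 192.168.1.101. The server's MASQUERADE rule then never matches, because it only covers -s 192.168.2.0/24. A likely fix, as a sketch, is to NAT the marked traffic on the client as it exits the tunnel interface, so it arrives at the server with a 192.168.2.x source:

      # on the client: rewrite the source of packets leaving via the VPN
      # to tap0's own address
      iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE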

  • cPanel: every URL is being redirected to http://:2083

    - by Frank
    On my cPanel server, I restored about 50 accounts from a crashed cPanel server. All of the sites were working fine, but suddenly, without my changing anything, every site started being redirected to the URL "http://:2083/". There is nothing in the logs, no errors. When I do a wget, it says:

      $ wget grinfeld.com.br
      --2012-09-04 13:18:23--  http://grinfeld.com.br/
      Resolving grinfeld.com.br... 198.101.221.254
      Connecting to grinfeld.com.br|198.101.221.254|:80... connected.
      HTTP request sent, awaiting response... 301 Moved
      Location: https://:2083/ [following]
      https://:2083/: Invalid host name.
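
    Port 2083 is cPanel's own SSL port, and a redirect there with an empty hostname suggests requests are falling through to cPanel's catch-all rather than matching the restored accounts' virtual hosts. One plausible first step, assuming nothing else changed, is to regenerate the Apache configuration from cPanel's account data using cPanel's standard scripts:

      # regenerate httpd.conf from cPanel's templates and account data
      /scripts/rebuildhttpdconf

      # then restart Apache the cPanel way
      /scripts/restartsrv_httpd

    If the rebuild reports errors for specific accounts, those entries are usually the ones the restore left incomplete.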

  • What are the risks in putting website files in the "root" folder of a shared web hosting server?

    - by Obay Ouano
    A site I've been asked to manage is hosted (shared) on GoDaddy, with this folder structure:

      /
        public_html
        public_ftp
        mail
        stats
        logs
        etc...

    However, the website files are stored in the / folder, and NOT in public_html. I'm not sure if this is how GoDaddy sets up their customers' accounts, or if the old web developer accidentally changed it from public_html to the root. But when we call GoDaddy to ask them to correct this (move the files into public_html), they won't change it, and insist that there is no security risk unless someone gets hold of the FTP password. Is this true? (I have always read that website files should be inside public_html.) If not, where could this setting be changed? The .htaccess is empty.

  • Server-side URL scanner for malware, spyware and viruses, to protect my visitors

    - by Vangel
    I have a forum/groups site that contains a lot of external URLs, sometimes direct download links. I want to protect my visitors from possible attacks from malware sites, as they are most likely to click on these links. Currently I implement the Spamhaus DBL, but that's not enough. I want to run a background task to check the outgoing links first. I have looked at similar questions on Stack Overflow (wrongly posted there) and here, but failed to find a question the same as mine, or a good answer. People have suggested ClamAV; I don't believe it can detect web-hosted malware sites, and it has a lot of missed detections. I have looked at the Google Safe Browsing service (http://code.google.com/apis/safebrowsing/developers_guide_v2.html - very complicated to implement or maintain, plus midway through I get lost :S). I can go for a commercial solution; anything to protect the visitors and my site's brand. But I would like to hear the opinion of server admins, and whether anyone has implemented such a service. My server is a basic CentOS LAMP stack. Thank you very much in advance.
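
    For checking individual URLs from a background task, Google's simpler Safe Browsing Lookup API may be a better fit than the full v2 protocol the linked guide describes. A sketch; the endpoint and parameter names below are recalled from the Lookup API documentation and should be verified against the current docs, and YOUR_API_KEY and the client name are placeholders:

      # returns a body such as "malware" or "phishing" for a listed URL,
      # or an empty 204 response for a clean one
      curl "https://sb-ssl.google.com/safebrowsing/api/lookup?client=myforum&apikey=YOUR_API_KEY&appver=1.0&pver=3.0&url=http%3A%2F%2Fexample.com%2F"

    The trade-off versus the v2 protocol is that every lookup goes over the network instead of against a locally synced database, which is usually acceptable for a background queue.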

  • Scheduled Tasks and Environment Variables

    - by Andrew J. Brehm
    I have a scheduled task, a batch file, that uses an environment variable which is set system-wide. On server 1, the scheduled task runs under a domain account and the environment variable works. The environment variable also exists in my session and when I use runas as the service account. On server 2, the scheduled task runs under a different domain account and the environment variable DOES NOT work, even though it also exists in my session and when I use runas as the service account. On both servers the environment variable was originally set system-wide by the same script. The script runs again every now and then, and as far as I can see no one has tampered with the environment variable. The scheduled tasks are set up identically on the two servers (using the same XML file) and the two service accounts are configured identically (as far as I know). What am I doing wrong?
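
    Two things worth verifying: that the machine-wide value actually exists in the registry on server 2, and what the task's own environment contains at run time, since the Task Scheduler service captures the environment when it starts and may not see later changes until it (or the machine) is restarted. A sketch, with MYVAR and the log path as placeholders:

      rem confirm the system-wide value is present in the registry
      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v MYVAR

      rem inside the batch file, dump what the task actually sees at run time
      echo MYVAR=%MYVAR% >> C:\temp\taskenv.log
      set >> C:\temp\taskenv.log

    Comparing the dumped environment between the two servers usually shows immediately whether the variable is missing or merely different.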

  • Grant user from one domain permissions to shared folder in another domain

    - by w128
    I have two computers set up like this:

      - \\myPC: a local Windows 7 SP1 machine, in domain1
      - \\remotePC: Windows Server 2008 with SQL Server (a Hyper-V virtual machine), in domain2

    In domain2's Active Directory, I have a user account, RemoteAccount. I would like to give this account full permissions to a shared folder located on \\myPC, i.e. the folder \\myPC\SharedFolder. The problem is, when I right-click the folder and go to sharing permissions, I can't add permissions for the domain2\RemoteAccount user, because this user cannot be found; I can only see domain1 users. When I click 'Locations' in the "Select users, computers, service accounts, or groups" dialog, I only see domain1. Is there a way to do this?
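
    For domain2\RemoteAccount to be resolvable from a domain1 machine at all, there generally has to be a trust relationship between the two domains; without one, the Locations dialog will only ever show domain1. Assuming a trust is (or can be) put in place, the NTFS and share grants can also be applied from an elevated command prompt. A sketch, with the folder path and share name as illustrative values:

      rem grant NTFS permissions on the folder (inherited by files and subfolders)
      icacls C:\SharedFolder /grant domain2\RemoteAccount:(OI)(CI)F

      rem create the share with permissions for the domain2 account
      net share SharedFolder=C:\SharedFolder /grant:domain2\RemoteAccount,FULL

    If no trust is possible, the usual workaround is a local account on \\myPC with the same username and password as the domain2 account, and granting the share to that local account instead.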
