Search Results

Search found 20168 results on 807 pages for 'service'.

Page 598 of 807

  • Spammers sending out from an inactive domain

    - by YesIWillFixYourEmailSigh
    We have a shared hosting service running QMail and Plesk. One of our inactive clients was left active in the system by mistake, and spammers found their very weak passwords and sent out a massive barrage of messages before we caught the problem and shut off the services for that domain. My question is this: how did they get access to that domain in the first place? The client is long gone, the domain/DNS is not pointing at our server at all, and neither is the MX record. So how were they able to find that domain and exploit it when nothing on the "outside" was pointing to it?

    Read the article

  • SQL 2008/2005 Hosting :: Error - “Named Pipes Provider, error: 40 – Could not open a connection to SQL Server”

    - by mbridge
    When setting up a Microsoft Windows Server 2008 system, I went through the motions of setting up IIS, MS SQL Server 2008, and Visual Studio 2010 to use as a test bed. One of the immediate benefits of such a system is that most development can be done remotely: SQL Server Management Studio, Visual Studio's web development suite, file shares, remote desktop, and so on make for a great way to develop remotely in 'pristine' conditions. But there are drawbacks too, such as dealing with firewall issues, not being able to get past a router, or needing to set up a VPN. One of the problems I encountered when trying to connect remotely to the MS SQL Server 2008 instance I'd set up was the following error: "Named Pipes Provider, error: 40 – Could not open a connection to SQL Server". I followed the steps below and was able to connect to the server after just a few moments of tinkering:

    1. From the server in question, surf to this Microsoft article, then download and install the firewall-rules modification program. Never drop your firewall, even on a development machine, unless you have a really good reason to.
    2. Launch SQL Server Configuration Manager. Navigate to SQL Server Network Configuration, then Protocols for your server name. Enable TCP/IP and Named Pipes by right-clicking each protocol name and choosing Enable.
    3. Restart the SQL Server service from Services (or, from the command line, run "net stop mssqlserver" followed by "net start mssqlserver").
    4. Try your remote connection once more; you should now be able to connect.

    It's not a terribly difficult concept, but one of the more challenging tasks developers face is dealing with environment setup. And while there is a certain blurred-line overlap between software development and server administration, sometimes the latter is daunting, especially given that you might set up only a handful of servers during your career.
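    For reference, the service restart from step 3, plus an equivalent manually added firewall rule if you prefer not to run the downloadable tool from step 1, can be done from an elevated command prompt. A minimal sketch, assuming a default (unnamed) SQL Server instance listening on the standard TCP port 1433:

        rem Allow inbound TCP 1433 through Windows Firewall (default instance port; adjust if yours differs)
        netsh advfirewall firewall add rule name="SQL Server (TCP 1433)" dir=in action=allow protocol=TCP localport=1433

        rem Restart the SQL Server service so the newly enabled protocols take effect
        net stop MSSQLSERVER
        net start MSSQLSERVER

    Named instances listen on a dynamic port by default, so in that case the port number above would not apply as-is.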

    Read the article

  • Can't access link in network using fully qualified domain name

    - by user1033715
    I have installed Windows Server 2003 and configured a domain controller (domain name xyz.com) and the DNS service; the server's fully qualified domain name is server.xyz.com. I have also installed Apache Tomcat on port 8080 on that server, and I can reach it successfully via "http://localhost:8080", "http://ip address of server:8080", and "http://server.xyz.com:8080". That works from the local machine. When I tried from another machine on the same network, "http://ip address of server:8080" worked for me, but when I tried the fully qualified domain name, i.e. "http://server.xyz.com:8080", I got the error "Could not connect to server.xyz.com". Please guide me in getting this setup done. I need to be able to access "http://ip address of server:8080" as "http://server.xyz.com:8080" from outside my network. Any suggestions are highly appreciated.
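    A quick way to narrow this down is to check from the failing client whether the name resolves at all, both with the client's current DNS settings and directly against the domain controller's DNS service. A small sketch run on the client machine; the address 192.168.1.10 is a placeholder for the domain controller's IP:

        rem Does the client resolve the name with its currently configured DNS servers?
        nslookup server.xyz.com

        rem Query the domain controller's DNS service directly (placeholder IP)
        nslookup server.xyz.com 192.168.1.10

    If the second lookup works but the first does not, the client simply is not using the DC for DNS; access from outside the network would additionally need a public DNS record for the name.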

    Read the article

  • Run a script as root from apache

    - by Lord Loh.
    I would like to update my hosts file and restart dnsmasq from a web interface (PHP/Apache 2). I tried playing around with setuid bits (the demonstration). I have both Apache and dnsmasq running on an EC2 instance. I understand that Linux ignores the setuid bit on text scripts but honours it on binary files (have I got something wrong?). I added exec("whoami"); to the example C program from Wikipedia. Although the effective UID of the C program is 0, whoami does not return root :-( I would thoroughly like to avoid echo password | sudo service dnsmasq restart or adding apache to the sudoers without a password! Is there a way out? How does webmin do such things?
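    For what it's worth, when people do go the sudoers route mentioned above, it is usually narrowed to a single exact command rather than blanket passwordless access. A sketch of such a rule, added with visudo; the web-server user name (www-data) and the service path are assumptions that vary by distribution:

        # Allow the Apache user to run exactly one command as root, nothing else
        www-data ALL=(root) NOPASSWD: /usr/sbin/service dnsmasq restart

    The PHP side would then call something like exec('sudo /usr/sbin/service dnsmasq restart'); this is only an illustration of the narrowed variant, not a recommendation over the setuid-wrapper approach.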

    Read the article

  • What is an Automated way to Access a Virtual Machine's Webserver from its Host Machine? [closed]

    - by Jonnybojangles
    As a web developer, what is the most efficient (automated) way to connect to a virtual machine (VM) running a development webserver from its host machine (the machine running the VM) when you do not have control over the networks (home, Starbucks, work, etc.) you are connected to? Currently I start my VM (a VirtualBox VM running CentOS) and run ifconfig to determine the VM's current IP. I then take that IP and map it in my host machine's hosts file so that I can access the VM's webserver from the host. I feel this is not an efficient way to connect to my VM's webserver, because each time I connect to a new network (a few times a day) I need to repeat the IP lookup and hosts-file update, and sometimes restart the VM's network service.
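    One way to script the lookup half of that routine is to ask VirtualBox for the guest's address instead of running ifconfig inside the VM. A rough sketch, assuming a Unix-like host, Guest Additions installed in the guest, and a VM named "centos-dev" mapped to the hypothetical hostname "dev.local":

        #!/bin/sh
        # Query the guest's first IPv4 address via Guest Additions, then refresh the hosts entry
        VM=centos-dev
        HOSTNAME=dev.local
        IP=$(VBoxManage guestproperty get "$VM" "/VirtualBox/GuestInfo/Net/0/V4/IP" | awk '{print $2}')
        # Drop any stale line for this hostname, then append the current mapping
        sudo sed -i "/ $HOSTNAME\$/d" /etc/hosts
        echo "$IP $HOSTNAME" | sudo tee -a /etc/hosts

    This only automates the existing workflow; a host-only network with a static guest IP would remove the need to touch the hosts file at all.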

    Read the article

  • Google maps later traffic

    - by bobobobo
    Google Maps has a feature that shows how long it will take to get somewhere based on current traffic. But what about history, or asking for a prediction of how long it will take to get somewhere during rush hour, for example? So, how long will it take to get there on this route if I leave at 5, versus how long it will take if I leave at 3? (A difference of 30 minutes on the road sometimes!) If Google Maps doesn't provide this service, is there anywhere that does?

    Read the article

  • How do you host images using Windows Server so that they are accessible over the internet? [closed]

    - by nairware
    I was trying to figure out a way to host images (picture images, not disk images) so that they are accessible over the internet via URLs, similar to a web service like Photobucket or ImageShack. I have a whole bunch of Windows Servers (Windows Server 2008 R2) available in the cloud. Instead of hosting images using Photobucket or ImageShack, I wanted to host these images directly on my own Windows cloud. This could be really complicated or really simple; I have no idea, as I know very little about IIS 6 (which is what I am using) or web servers. If this is too broad a question (as there are probably multiple ways of implementing this), is there at least some guide or documentation on how someone else has set up image hosting? Perhaps a step-by-step guide of at least one way to do it?
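    As one possible starting point (a sketch, not a full recipe): Windows Server 2008 R2 actually ships IIS 7.5, so the appcmd tool should be available, and a bare static site that serves files straight out of a folder is enough for image hosting. The site name, folder and host binding below are placeholders:

        rem Create a folder for the images and point a new IIS site at it
        mkdir C:\inetpub\images
        %windir%\system32\inetsrv\appcmd add site /name:"Images" /physicalPath:"C:\inetpub\images" /bindings:http/*:80:images.example.com

    With DNS for images.example.com pointing at the server, a file dropped in that folder as C:\inetpub\images\photo.jpg would then be reachable as http://images.example.com/photo.jpg.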

    Read the article

  • Installing packetfence

    - by BrNathan
    I asked in the Ubuntu area, but thought I would get a better answer here. Has anyone been able to install PacketFence on Ubuntu 10? I tried a tutorial but didn't have any luck. Some of the services installed and are working: Apache with PHP, snort, pfdetect, and pfdhcplistener. I can even get info with pfcmd node view all, but for the life of me I can't get it to work with apache2. When I run pfcmd service pf start I also get an error: "uninitialized value $_[7] in join at /usr/local/pf/lib/pf/class.pm line 170".

    Read the article

  • Release Notes for 7/6/2012

    Happy belated 4th of July, everyone! Here are the notes for this week's release on CodePlex:
    - Implemented performance improvements to Git repositories.
    - Fixed an issue that caused the final "click here" download link to fail in projects that display ads.
    - Fixed an issue for certain projects that made it impossible to edit releases.
    - Fixed an issue where the URL for a diff of a file would not take users to the diff in question.
    - Fixed a rare issue that prevented a small subset of projects from modifying their project details.
    - Fixed an issue where scrollbars were missing in our side-by-side diff viewer.
    - Super- and sub-scripts now work properly in documentation.
    - Addressed several usability issues around the diff viewer.
    - Fixed an issue where the scrollbar could disappear in the advanced issue tracker if a user opens a modal dialog.
    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Hyper-V Manager - Host Access During a Catastrophe

    - by LonnieBest
    How can I ensure that I always have Hyper-V Manager access to a Hyper-V server, even when the Active Directory server is down (in a domain-login environment)? Background: my predecessor set up the company's servers as virtual machines on top of a host running Hyper-V Server 6.1 (7601) Service Pack 1. For managing Hyper-V, he installed Windows 7 onto a virtual machine (running on the same host) with Hyper-V Manager installed. When the (virtual) Active Directory server (running on this same host) is rebooted, I'm unable to RDP into the Windows 7 virtual machine during that reboot, and I'm therefore unable to access Hyper-V Manager while the Active Directory server is down. I suspect I can't log in because I can't authenticate against the Active Directory server. I'm going to install Hyper-V Manager onto some additional managers' workstations, but how can I ensure they'll have access in a catastrophe where Active Directory authentication isn't possible?

    Read the article

  • How to extend a file definition from an existing module in the node?

    - by c33s
    I use an older version of the example42 mysql module, which defines the mysql.conf file but not its content. My goal is to just include the mysql module and add a content definition in the node.

        class mysql {
          ...
          file { "mysql.conf":
            path    => "${mysql::params::configfile}",
            mode    => "${mysql::params::configfile_mode}",
            owner   => "${mysql::params::configfile_owner}",
            group   => "${mysql::params::configfile_group}",
            ensure  => present,
            require => Package["mysql"],
            notify  => Service["mysql"],
          }
          ...
        }

        node xyz {
          include mysql
          File["mysql.conf"] { content => template("mymodule/mysql.conf.erb") }
        }

    The above code produces an "Only subclasses can override parameters" error. What is the correct way to just add a content definition to an existing file definition?

    Read the article

  • Open-sourcing a proprietary library without certain features

    - by nha
    I hope I'm in the right place to ask this. I have a question regarding the practice of open-sourcing a proprietary library that we built and use at work. The licence will probably be MIT. I like the idea, but here comes the unusual part: I have been tasked with removing some of the most advanced features. Those will remain on our servers, available as a service. We will open-source the (JavaScript, in case it is of interest) library, along with minimal associated server code. I am not asking about the technical problems (I imagine we will have to maintain and synchronize somewhat different repositories, maybe with incompatible pull requests, but that is for Stack Overflow). What I would like to know is: how would that be perceived by the community at large? Does it risk killing any eventual interest in this library? I don't personally know of any library that works like that. I'm pretty sure it is possible, however; any evidence of such a library is welcome (successful if possible). That's also because I'd like to see how they present it. More importantly, what could be the rationale for or against it? I'm not sure I understand the consequences of doing so.

    Read the article

  • Services of virtual machines

    - by RredCat
    I am looking for a way to set up a dev machine in the cloud. I want to have access for development from different places, and I don't want to play with syncing branches and so on. It would be Lubuntu with 1 GB or 512 MB of memory, and I want a way to set up my own image. What I have found so far: the Amazon EC2 service and Azure Virtual Machines. I am pretty sure there are more services like this, and I hope to find one geared specifically to this purpose. I have experience working this way and I like it. Unfortunately, that was a certain company's server, used for that company's projects, and I can't use it for my private aims. Could anybody suggest anything?

    Read the article

  • SSH running slow on cygwin

    - by Robb
    I have a Windows XP box running Cygwin and the SSH service. I'd like to use PuTTY to connect to it from other computers on the local network. PuTTY works fine and I actually get a relatively speedy login prompt. But any time I do an 'ls' on the root directory ('/') it typically doesn't complete, as if the command has hung. Other PuTTY sessions suffer as well, no matter what I'm doing (even just an 'ls' on my home directory might take a while or not finish). It is as if a deadlock occurred somewhere in the ssh/cygwin system. The root directory does contain the 'cygdrive' folder, which exposes the drives of the host computer. Could this be causing the slowdown?

    Read the article

  • Is it possible for a router to "go bad" with time?

    - by JQAn
    I've been having problems with my internet connection over the past weeks (intermittent disconnections, slow transfers, etc.), and my provider keeps telling me that the problem is not on their end. I have a cable modem with a wifi router (the router was not provided by them). The router is quite old (a DIR-300), so I'm starting to wonder whether it could be the issue and whether I should replace it. Is it possible that it is the cause? Can routers become so outdated that they cause intermittent interruptions of service? If I reset the modem and the router, they work fine for a few hours, but the problems start again after a while.

    Read the article

  • TCP and fair bandwidth sharing

    - by lxgr
    The congestion control algorithms of TCP seem to distribute the available bandwidth fairly between individual TCP flows. Is there some way to enable (or, more precisely, enforce) fair bandwidth sharing on a per-host instead of a per-flow basis on a router? There should not be an (easy) way for a user to gain a disproportionate bandwidth share by using multiple concurrent TCP flows (the way some download managers and most P2P clients do). I'm currently running a DD-WRT router to share a residential DSL line, and at the moment it's possible to (inadvertently or maliciously) hog most of the bandwidth by using multiple concurrent connections, which affects VoIP conversations badly. I've played with the QoS settings a bit, but I'm not sure how to enable fair bandwidth sharing on a per-IP basis (per-service is not an option, as most of the flows are HTTP).
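    For illustration, this kind of per-host (rather than per-flow) sharing is often expressed as one traffic class per LAN IP, so that all flows from one machine compete inside their own class. A bare-bones static sketch using Linux tc/HTB for the upstream direction only; the interface name, rates and addresses are placeholders, and DD-WRT's QoS pages generate something similar but not identical:

        # Root HTB qdisc on the WAN-facing interface, total uplink capped at 1 Mbit
        tc qdisc add dev vlan1 root handle 1: htb default 20
        tc class add dev vlan1 parent 1: classid 1:1 htb rate 1mbit ceil 1mbit
        # One child class per host: each gets a guaranteed share but may borrow up to the full rate
        tc class add dev vlan1 parent 1:1 classid 1:10 htb rate 500kbit ceil 1mbit
        tc class add dev vlan1 parent 1:1 classid 1:20 htb rate 500kbit ceil 1mbit
        # Classify by source IP so every flow from one machine lands in that machine's class
        tc filter add dev vlan1 parent 1: protocol ip prio 1 u32 match ip src 192.168.1.10/32 flowid 1:10
        tc filter add dev vlan1 parent 1: protocol ip prio 1 u32 match ip src 192.168.1.11/32 flowid 1:20

    Shaping the download direction as well would additionally need ingress policing or an IFB device, which is beyond this sketch.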

    Read the article

  • Strange memory usage pattern on Windows Server 2008 on login through Remote Desktop

    - by headsling
    I'm running Windows Server 2008 Datacenter Service Pack 2 on a VMware instance with 10 GB of RAM allocated. I'm not running IIS or SQL Server. Under 'normal' conditions the machine uses ~5.5 GB of memory. However, when I log in to the server through Remote Desktop, memory usage slowly climbs to 9.8 GB in use. After several minutes the memory slowly creeps back down to the 5.5 GB mark. On login I've tried killing all the processes associated with my session (apart from Task Manager), without success, and I can't see any single process whose memory usage grows while the total is increasing. I'm assuming this is some system-level cache that is growing and shrinking... but why is it doing this?

    Read the article

  • Disable ssh for bnroot as root account

    - by user2916639
    I am a beginner with CentOS Linux and I have a dedicated server. My root username is bnroot, and I currently take SSH using this user. I want to disable SSH for bnroot. I have created a user named welcome; I want to take the SSH login as welcome and then use su - bnroot to get root privileges. I have set PermitRootLogin no and AllowUsers welcome in /etc/ssh/sshd_config and restarted the sshd service. Logging in over SSH as welcome now works fine, but when I then run su bnroot it prompts for a password, and even when I enter the right password it shows "su: incorrect password". I don't know where I am wrong; please help me here. The changes I made:

        /etc/ssh/sshd_config
          PermitRootLogin no
          AllowUsers welcome

        /etc/sudoers
          welcome ALL=(ALL) ALL

    Errors in /var/log/secure:

        unix_chkpwd[666]: password check failed for user (bnroot)
        su: pam_unix(su:auth): authentication failure; logname=ewalletssh uid=503 euid=500 tty=pts/1 ruser=ewalletssh rhost= user=bnroot

    Please let me know where I am wrong.

    Read the article

  • WAMP: Apache refusing connections outside the network

    - by JoeWolf
    I have WAMP installed. The server is running and everything works fine from localhost and from my local IP address. I forwarded port 80 on my router. Whenever I try to access the server from outside, using my real (public) IP, it doesn't work and the request times out. I thought port forwarding was not working, so I forwarded another port for a different service and that went through, so the problem is with Apache. I checked the error log and didn't find any errors. Skype is off. Any tips on what could be causing this? Thanks!
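    Two quick checks that narrow this kind of problem down (a sketch; the WAMP paths below are placeholders for whatever your install uses): confirm Apache is listening on all interfaces rather than only 127.0.0.1, and keep in mind that many home routers cannot reach their own public IP from inside the LAN (no hairpin NAT), so the test really has to come from an outside connection:

        rem Is Apache bound to 0.0.0.0:80, or only to 127.0.0.1:80?
        netstat -ano | findstr :80

        rem The Listen directive should read "Listen 80" (all interfaces), not "Listen 127.0.0.1:80"
        findstr /i "^Listen" C:\wamp\bin\apache\apache2.2.x\conf\httpd.conf

    If Apache is bound correctly and an external test still times out, the block is more likely at the router or at an ISP that filters inbound port 80.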

    Read the article

  • How do I check if a program can potentially be a virus?

    - by acidzombie24
    I am running Windows XP in a VM. I want to download a few applications, install them one by one, and check whether they could potentially be a virus. I assume a virus would need to add something to the Startup folder, add itself to the startup (Run) section of the registry, or add a service. What else might it do to become active? Anyway, how can I check whether a program may be a virus? I use HijackThis to get a list of processes, and I simply compare the list from before I installed to the one from after and see if there's anything different. Is this good enough? My main OS is Windows 7, but I do not have that in a VM and don't see a reason to test with that.
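    The autostart locations listed above can also be dumped from the command line inside the VM, which makes the before/after comparison easier to script (redirect each command to a file and diff the two snapshots). A small sketch using standard Windows XP tools:

        rem Per-machine and per-user Run keys in the registry
        reg query "HKLM\Software\Microsoft\Windows\CurrentVersion\Run"
        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run"

        rem Contents of the Startup folders
        dir "%ALLUSERSPROFILE%\Start Menu\Programs\Startup"
        dir "%USERPROFILE%\Start Menu\Programs\Startup"

        rem Installed services
        sc query type= service state= all

    This only covers the common autostart points; it says nothing about what an installed program does once it is running.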

    Read the article

  • Cannot log on to guest account in Windows 7

    - by Javy
    I'm using Windows 7 Home edition. When I try to create any guest account, it fails to load at login with the error: "The User Profile Service failed the logon. User profile cannot be loaded." I can log in as admin and as my home user with no problems, but every guest account that I create fails. I found this in a Microsoft article: this error may occur if the "Do not logon users with temporary profiles" Group Policy setting is configured. I've tried to find the Group Policy settings and cannot locate them anywhere; some sites indicate I need to upgrade Windows to access the Group Policy editor. Is there a way to use guest accounts without upgrading?

    Read the article

  • All MySQL Databases lost overnight

    - by Iain
    After a call from a customer to say that his website is down, I found that MySQL on our Rackspace Cloud Windows 2008 server was not running. I restarted MySQL but got the 'Access denied for user' error in the browser for all websites with a MySQL database. When I look in MySQL Server 5.5/data there are no folders other than mysql and performance_schema; it appears all the databases and data have been wiped. Does anyone know what might have happened and where the data has gone? To top it off, I just found that this server is missing from our backup service. PS: this appears to have happened after a Windows update at 4:01 this morning.

    Read the article

  • Is there a way to edit an existing nautilus (file manager) bookmark?

    - by C.W.Holeman II
    Is there a way to edit an existing nautilus (file manager) bookmark? Invoke from the Linux command line:

        $ nautilus

    Activate the connection editor via File > Connect To Server... and complete the entries in the pop-up:

        Service Type: [WebDAV (HTTP)]
        Server: [localhost]
        Port: [8001]
        Folder: [webdav]
        Username: [test]
        [x] Add bookmark
        Bookmark name: [/dav]
        <Connect>

    Then in the left column of the main window the new connection and bookmark appear:

        Places
        -------------------
        ausername
        Desktop
        File System
        Network
        WebDAV on localhost
        Trash
        --------------------
        /dav

    Right-clicking on "/dav" pops up a menu with: Open, Open in New Tab, Open in New Window, Remove, Rename... There is no option for editing.
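    Even without a GUI edit option, on GNOME 2-era nautilus these bookmarks usually end up as plain lines in ~/.gtk-bookmarks, one URI plus an optional label per line, so they can be changed with any text editor. A sketch; the file location and the exact URI form shown are assumptions and differ on newer GNOME versions:

        $ cat ~/.gtk-bookmarks
        dav://test@localhost:8001/webdav /dav
        $ nano ~/.gtk-bookmarks

    Editing the URI or the trailing label and saving the file should be reflected the next time nautilus reads its bookmarks.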

    Read the article

  • Does (should?) changing the URI scheme name change the semantics?

    - by Doug
    If we take: http://example.com/foo is it fair to say that: ftp://example.com/foo points to the same resource, just using a different mechanism for resolving it (and of course possibly a different representation, but perhaps not)? This came to light in a discussion we were having around some internal tooling with Git. We have to process some Git repositories, and they come to us as "git@{authority}/{path}", but the library we're using to interface with them doesn't support the git protocol. I suggested that we should make the service robust in that it tries HTTP or SSH, in essence discovering which protocols/schemes are supported for resolving the repository at {path} under each {authority}. This was met with some criticism: "We don't know if that's the same repository." My response was: "It had better be!" Looking at RFC 3986, I see this excerpt: URI "resolution" is the process of determining an access mechanism and the appropriate parameters necessary to dereference a URI; this resolution may require several iterations. To use that access mechanism to perform an action on the URI's resource is to "dereference" the URI. Which makes me think that the resolution process is permitted to try different protocols, because: Although many URI schemes are named after protocols, this does not imply that use of these URIs will result in access to the resource via the named protocol. The only concern I have, I guess, is that I only see reference to the notion of changing protocols when it comes to traversing relationships: it is possible for a single set of hypertext documents to be simultaneously accessible and traversable via each of the "file", "http", and "ftp" schemes if the documents refer to each other with relative references. I'm inclined to think I'm wrong in my initial belief, because the Normalization and Comparison section of said RFC doesn't mention any way of treating two URIs as equivalent if they use different schemes. It seems like schemes named after or based on IP protocols ought to have this notion, at least?

    Read the article

  • Nginx works on my linux machine but is not accessible from other computers in my local network

    - by crooveck
    In my LAN I have a server running Scientific Linux (a RedHat/Fedora-based distro). I've done yum install nginx, but the welcome page is not accessible from other computers on my network. When I do telnet open localhost 80 and then GET / HTTP/1.0, I get some HTML back from nginx, so it's definitely running. But when I try to connect remotely with telnet open 192.168.3.130 80, I get:

        Trying 192.168.3.130...
        telnet: Unable to connect to remote host: No route to host

    So I assume there is something wrong with my network settings, maybe iptables or something else. Next step, I turned iptables off with service iptables stop, and that helped: now I can connect remotely using telnet. So I think I need to fix my iptables rules. I did some googling and found this rule:

        -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

    but it still didn't allow me to connect remotely while iptables was up. Can someone please help me set up a proper iptables configuration?
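    One frequent reason a rule like the one quoted above appears to have no effect is where it lands: -A appends it after the distribution's final catch-all REJECT rule in the INPUT chain, so packets never reach it. A sketch that inserts the rule near the top instead and persists it, RHEL/Scientific Linux style:

        # Insert the HTTP rule at the top of INPUT instead of appending it after the catch-all REJECT
        iptables -I INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
        # Verify the rule order, then save so it survives a restart of the iptables service
        iptables -L INPUT -n --line-numbers
        service iptables save

    Whether this is actually the cause depends on the existing ruleset, which the listing above will show.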

    Read the article
