Search Results

Search found 10023 results on 401 pages for 'manage processes'.

  • Limit maximum incoming connections to a port using iptables

    - by Harley
    I have a server that has Apache listening on a number of ports. Some ports are used for configuring the server, and another is used to download large files. My problem is that when I have a large number of clients downloading files, the web interface becomes unreachable. I would like to limit the number of clients connecting on the "large file" port so that Apache always has connections available for configuring the server. A REJECT is fine; the client trying to download the file will back off and retry later. Each client only has one connection open to the server at a time, so limiting by IP won't work. I know I could put something in front of Apache to manage this, but I'd really like to do it in iptables, without adding more software.
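
    One way to do this entirely in iptables is the connlimit match, which can reject new connections once a port already has a given number of established ones. A minimal sketch, assuming the "large file" port is 8080 and a cap of 20 concurrent downloads (both numbers are placeholders):

        # reject new connections to the download port beyond 20 concurrent ones,
        # counted across all source addresses (--connlimit-mask 0)
        iptables -A INPUT -p tcp --syn --dport 8080 \
                 -m connlimit --connlimit-above 20 --connlimit-mask 0 \
                 -j REJECT --reject-with tcp-reset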

    Read the article

  • My server appears to have been hacked: scanssh run by zabbix, is it normal?

    - by Niro
    I'm running a few EC2/Scalr instances with zabbix monitoring. I received complaints about one of my servers port-scanning other servers. The logs show it is accessing port 22 on consecutive IP addresses. I looked at the process list and saw scanssh running under the user zabbix. My questions are: is scanssh part of Zabbix? Is it supposed to run? I have active autodiscovery on Zabbix, but it is looking at other IP addresses and definitely not port 20. Is it possible that something in the config of the zabbix agent is controlling it, rather than the settings on the zabbix server? What can I do to find out whether Zabbix is somehow misbehaving or it is a hacker? Any advice is highly appreciated.
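
    Before concluding it is a compromise, it is worth checking what the scanssh process actually is and whether anything in the zabbix agent's configuration or cron launches it. A minimal sketch of the checks (the PID and the config paths are illustrative and vary by distribution):

        # identify the running binary and which package (if any) owns it
        ps -ef | grep [s]canssh                      # note the PID and parent PID
        ls -l /proc/<PID>/exe
        dpkg -S "$(readlink /proc/<PID>/exe)" 2>/dev/null || rpm -qf "$(readlink /proc/<PID>/exe)"
        # does the zabbix agent config or any cron job reference it?
        grep -ri scanssh /etc/zabbix /etc/cron* /var/spool/cron 2>/dev/null
        # what is the agent allowed to execute?
        grep -i 'EnableRemoteCommands\|UserParameter' /etc/zabbix/zabbix_agentd.conf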

    Read the article

  • Cached Network Share Credentials?

    - by Brian Wolfe
    Hi, I have an issue in Windows 7 where I get the following error message when attempting to access an admin network share on a machine in another domain: "Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again." Troubleshooting I've done: Start > Run > cmd, then net use * /DELETE; Start > Manage Windows Credentials, then deleted all credentials. I still receive the same error until I reboot my machine. After I reboot, it works fine. However, I am able to log into the admin share if I hit it by its IP address. Question: is there somewhere else I should be looking for cached user credentials? Thanks, Brian
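
    Two other places worth checking are the credential store as seen from the command line and the connection list itself, since Windows tracks the hostname and the IP address as two different servers (which is why the IP form works). A minimal sketch, where SERVER is a placeholder for the machine you are connecting to:

        rem list everything Credential Manager has cached, including TERMSRV/ entries
        cmdkey /list
        rem remove a specific cached entry
        cmdkey /delete:SERVER
        rem show and drop existing sessions (name and IP forms are tracked separately)
        net use
        net use \\SERVER /delete
        rem purge cached Kerberos tickets for the current logon session
        klist purge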

    Read the article

  • How to get iTunes to recognize that certain ID3 tags have been removed from files?

    - by Nick Meyer
    I use MusicBrainz Picard to manage the ID3 tags on my MP3 collection and iTunes for playing them and managing my iPod. I have Picard set to remove the Composer tag from any files that don't also contain the word "Classical" in the Genre tag. I've run Picard and it has removed the Composer tag from many of my files. However, iTunes keeps displaying the old value of the Composer field. I have tried right-clicking on the file to update, choosing "Get Info" and clicking OK. This has no effect on the Composer field when it has been removed entirely; it only refreshes the displayed value when it has been changed from one non-empty value to another. How can I get iTunes to stop showing the old values for tags that have been removed? Please note I don't want to lose other iTunes metadata such as ratings and play counts, so removing the files from the iTunes library and re-adding them is not an option.

    Read the article

  • Page File - Why set one for each drive?

    - by Magic
    Hello, I have Windows Vista Business Edition running on my laptop (the brand is HCL). I have 4 drives, which are as follows: C - 29.2 GB (of which only 3.68 GB is free), D - 39 GB (of which 37.8 GB is free), E - 39 GB (of which 37.3 GB is free), F - 41.6 GB (of which 41.4 GB is free). However, my page file setting is "Automatically manage paging file size for all drives". Question: why would I set one for each drive? Should I set my page file on the OS root drive? I happened to talk to a system administrator at an IT company and he advised that the page file should never go on the OS drive but on an alternate drive wherever possible. It would be really helpful if you could guide me here, or at least point me to the right resources so that I can read about paging and paging best practices. Cheers,

    Read the article

  • Auto switching between wired and wireless connections

    - by Joe
    How about this situation: our business deals a lot with medical information, and some of our clients have requirements based on HIPAA, etc. There is one now where they do not want an employee to have both wired and wireless on at the same time. If the wireless is on, the wired needs to be turned off automatically, and vice versa. However, this cannot be left up to the end user to manage! I have looked for third-party applications and have only found http://www.wirelessautoswitch.com Does anyone know of anything else that is out there? Or possibly something that can be done via group policy, etc.?
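
    Besides third-party tools, one approach that can be pushed out centrally (for example as a scheduled task triggered on network connect/disconnect events, deployed via Group Policy Preferences) is a small script that disables the wireless adapter whenever the wired one has link and re-enables it otherwise. A minimal sketch, assuming the adapters are named "Local Area Connection" and "Wireless Network Connection" (adjust to your naming; it must run with admin rights):

        @echo off
        rem disable Wi-Fi while the wired NIC is connected; re-enable it when unplugged
        netsh interface show interface name="Local Area Connection" | find "Connected" >nul
        if %errorlevel%==0 (
            netsh interface set interface name="Wireless Network Connection" admin=DISABLED
        ) else (
            netsh interface set interface name="Wireless Network Connection" admin=ENABLED
        )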

    Read the article

  • Hyper-V management remotely

    - by Péter
    I'll tell you in advance that I'm a newbie on this topic. I have a Win8 (Home) machine with Hyper-V installed, behind a router. The router has a public IP and a domain attached. I have another Win8 (Work) machine, also with Hyper-V installed. I want to access my home Hyper-V via Hyper-V Manager so I can manage my virtual machines from work. I found this article but I don't know whether it's applicable to me. I thought that simple port forwarding should work, and that all I need to do is give the Hyper-V Manager at work my domain and the port I chose, and if a login form pops up, fill in the user credentials of my home computer? How can I solve this? My thoughts revolve around: port forwarding (set domain+port and use my home user), or setting up a VPN and using the local IP address of my home computer (that looks a little cumbersome and my router only supports PPTP). I'm open to any other solution too. Thanks, Péter

    Read the article

  • How to read ebooks in continuous scrolling mode and save highlights?

    - by Peter Salazar
    I'm looking for a way to manage my academic workflow for reading e-books in .epub or .mobi format on OS X. My requirements: continuous scrolling mode; the ability to highlight text (e.g. in yellow, using a single keyboard shortcut); and ideally the ability to make annotations as well. The Amazon Kindle reader for OS X offers annotating but not a continuous scrolling mode. Calibre offers continuous scrolling but does not allow highlighting or annotating. Is there a solution that will allow me to do this? I'm also open to workarounds, e.g. using Calibre to convert to HTML and then reading the book in a browser, but I would still need the ability to highlight with a single keystroke.

    Read the article

  • How to disable System service from listening on port 80 in Windows Server 2003

    - by Miky D
    I'm trying to install a service on a Windows Server 2003 machine which is supposed to listen on port 80 but it fails to start because some other service is already listening on that port. So far I've disabled the IIS Admin service and the HTTP SSL service but no luck. When I run netstat -a -n -o | findstr 0.0:80 it gives me the process id 4 as the culprit, but when I look at the running processes that process id points to the "System" process. What can I do to get the System process to stop listening on port 80 and get my service to listen instead?
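
    PID 4 on port 80 is almost always the kernel-mode HTTP listener (http.sys), which IIS and several other Windows components register with; stopping the IIS Admin service alone does not release it. A minimal sketch of confirming and freeing the port from an administrator command prompt (stopping HTTP will also stop anything that depends on it):

        rem see the state of the kernel HTTP service
        sc query http
        rem stop it now (answer Y to also stop the services that depend on it)
        net stop http
        rem keep it from grabbing port 80 again at the next boot
        sc config http start= disabled
        rem confirm the port is free
        netstat -a -n -o | findstr 0.0:80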

    Read the article

  • Monitor sleep in Windows XP does not work. Is there an add on to force it?

    - by bert
    I have an Eee Box with XP Home and a DVI-connected TFT, and it does not put the monitor to sleep. It only starts working after I go to the power options control panel and turn the setting off and on again; then the timed sleep works for the current session of computer use. After a shutdown and boot the next day, sleep does not work again (it is still set in the preferences, however). What can keep the sleep function from working? Are there issues with Skype or MSN? Is there a utility that is less sensitive to processes interrupting sleep and offers a more reliable monitor sleep function for XP?

    Read the article

  • Cloud services, Public IPs and SIP

    - by Guido N
    I'm trying to run custom SIP software (which uses JAIN SIP 1.2) on a cloud box. What I'd really like is a real public IP, i.e. one that is listed by the "ifconfig -a" command. This is because, at the moment, I don't want to write additional SIP code or add a SIP proxy in order to manage private IP addresses and address translation. I gave Amazon EC2 a go, but as reported here http://stackoverflow.com/questions/10013549/sip-and-ec2-elastic-ips it is not fit for purpose (they do a 1:1 NAT translation between the private IP of the box and its Elastic IP). Does anyone know of a cloud service that provides real static public IP addresses?

    Read the article

  • How do you apply development practices like version control, testing and continuous integration/deployment to system administration?

    - by arex1337
    Imagine you're going to manage a number of servers with a number of different services that's used by a number of people. Now say you want to reconfigure or replace some software on one of those servers. Obviously you don't want to work on servers that are in production. If this was a code change, as a developer, I would make the change on my local development machine, test it locally and commit the change to a version control system. The changes could then be deployed in a staging environment, tested further and finally deployed in a production environment. It would also be easy for me to roll back, if necessary. Generally, or specifically, how do you achieve this in system administration? (The first thing that comes to mind is to use virtual machines and put virtual machine images in version control, but I'm sure there is a lot of literature and clever solutions I'm not presently aware of.)
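
    The usual answer is to treat configuration the way code is treated: keep it in version control, apply it with a configuration-management tool (Puppet, Chef, cfengine) or at least a repeatable script, push every change to a staging machine first, and promote it to production only after it passes there. A minimal sketch of such a workflow under those assumptions (host names, paths and the health-check URL are placeholders, with nginx standing in for "some service"):

        # work on the change in a branch of the config repository
        git checkout -b raise-proxy-timeout
        $EDITOR roles/web/nginx.conf
        git commit -am "Raise proxy timeout to 60s"

        # apply to staging, validate the config and smoke-test it
        rsync -av roles/web/ staging:/etc/nginx/
        ssh staging 'nginx -t && service nginx reload && curl -fsS http://localhost/healthz'

        # only then roll forward to production; rolling back is checking out
        # the previous revision and pushing it the same way
        git checkout master && git merge raise-proxy-timeout
        rsync -av roles/web/ prod01:/etc/nginx/
        ssh prod01 'nginx -t && service nginx reload'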

    Read the article

  • Get tortoisesvn to give me filenames with build number in the filename

    - by EricJLN
    I am on a Windows 7 box, and I have tortoisesvn on my machine. After getting a little familiar with svn and tortoisesvn on a code repository, I set up a local repository to manage revisions of some word and powerpoint documents. I want to figure out some scripted way to output a set of files with the build/revision number embedded in the filename. I will then email the files to some business people to review. For example, say I have a group of files in my working directory: PresentA.pptx PresentA-notes.docx PresentB.pptx and TortoiseSVN repo browser tells me that I am currently at revision 21 for PresentA.pptx and PresentA-notes.docx but at revision 25 for PresentB.pptx, I would like some way to get 3 files with the following names: PresentA-r21.pptx PresentA-notes-r21.docx PresentB-r25.pptx Alternatively, if revision 25 is the current value for the repository, having all the names appended with -r25 would work, too.
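
    If the command-line Subversion client is installed alongside TortoiseSVN (it is an optional component of the installer, or available separately), a small batch script can read each file's last-changed revision and copy it under a new name. A minimal sketch, using the file names from the question:

        @echo off
        rem copy each file to Name-rNN.ext, where NN is its last-changed revision
        for %%F in (PresentA.pptx PresentA-notes.docx PresentB.pptx) do (
          for /f "tokens=4" %%R in ('svn info "%%F" ^| findstr /c:"Last Changed Rev"') do (
            copy "%%F" "%%~nF-r%%R%%~xF"
          )
        )

    TortoiseSVN's own SubWCRev.exe can do something similar by substituting $WCREV$ into a template file, but it reports a single revision for the whole working copy rather than one per file.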

    Read the article

  • SQL Server backup and restore process

    - by Nai
    Just wondering what backup processes you guys have. I am currently running a weekly full database backup with daily differential backups. My understanding is that with such a setup, the difference between the Full recovery model and the Simple recovery model is that with Full recovery I will be able to use the transaction logs to roll the DB back to a specific point in time after applying the latest differential backup. Assuming that in my scenario the last differential backup serves as my last and ultimate 'save point', I don't see a need to roll the DB back any further using the logs. This brings me to my question: are there any additional benefits to using the Full recovery model with my current backup process?

    Read the article

  • Hiera + Puppet classes

    - by Amadan
    I'm trying to figure out Puppet (3.0) and how it relates to the built-in Hiera. So this is what I tried, an extremely simple example (I'll make a more complex hierarchy when I manage to get the simple one working):

        # /etc/puppet/hiera.yaml
        :backends:
          - yaml
        :hierarchy:
          - common
        :yaml:
          :datadir: /etc/puppet/hieradata

        # /etc/puppet/hieradata/common.yaml
        test::param: value

        # /etc/puppet/modules/test/manifests/init.pp
        class test ($param) { notice($param) }

        # /etc/puppet/manifests/site.pp
        include test

    If I directly apply it, it's fine:

        $ puppet apply /etc/puppet/manifests/site.pp
        Scope(Class[Test]): value

    If I go through the puppet master, it's not:

        $ puppet agent --test
        Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass param to Class[Test] at /etc/puppet/manifests/site.pp:1 on node <nodename>

    What am I missing? EDIT: I just left the office, but a thought struck me: I should probably restart the puppet master so it can see the new hiera.yaml. I'll try that on Monday; in the meantime, if anyone spots some other problem, I'd appreciate it :)
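
    For what it's worth, the usual culprit behind exactly this symptom is that the master only reads /etc/puppet/hiera.yaml at startup, while puppet apply reads it on every run, so the restart idea in the EDIT is the right first step. A minimal sketch of the restart and of verifying the lookup on the master (the service name varies by distribution and by whether the master runs under Passenger):

        # make the master re-read hiera.yaml
        sudo service puppetmaster restart      # or restart apache/passenger if it runs there

        # check what Hiera resolves when pointed at the master's config
        hiera -c /etc/puppet/hiera.yaml test::param

        # then re-test from the agent
        puppet agent --test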

    Read the article

  • "Slave" user accounts in GNU/Linux

    - by Vi
    How do I make one user account be like root for some other user account, e.g. able to read, write and chmod all of its files, chown from that account to the master and back, kill/ptrace all of its processes, and do everything else root can, but limited only to that particular slave account? For now I'm simulating this by allowing the "master" user to "sudo -u slaveuser" and setting setfacl -dRm u:masteruser:rwx ~slaveuser. It is useful because I run most desktop programs in separate user accounts but need to move files between them sometimes. If it requires some simple kernel patch, that is OK.
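
    For reference, a minimal sketch of the sudo-plus-ACL workaround already described in the question, spelled out (masteruser/slaveuser are the names from the question, and /etc/sudoers.d is assumed to be included from sudoers); it does not grant true root-only powers such as chown, but it covers files and processes:

        # let masteruser run anything as slaveuser without a password
        echo 'masteruser ALL=(slaveuser) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/master-slaveuser
        sudo chmod 0440 /etc/sudoers.d/master-slaveuser

        # give masteruser full access to slaveuser's existing files and,
        # via the default ACL, to files created later
        sudo setfacl -Rm  u:masteruser:rwX /home/slaveuser
        sudo setfacl -dRm u:masteruser:rwX /home/slaveuser

        # work as the slave account when needed (kill its processes, move files, etc.)
        sudo -u slaveuser -i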

    Read the article

  • Laptop turns off after 20 minutes of use

    - by Christoph
    My laptop is a Sony VAIO VGN-NW11S http://www.trustedreviews.com/Sony-VAIO-VGN-NW11S-S---15-5in-Laptop_Laptop_review. Every time I turn it on, in safe mode or not, if I try to open an application, i.e. run a process such as Google Chrome, Event Viewer, defrag or a virus scan, it completely turns off without warning and without leaving any trace of events the next time I switch it on. Apart from that, I had worries it might be my battery or power supply, but I don't think it is that. I took the laptop apart, cleaned the fans, etc., and have ordered some CPU paste after checking the condition of the processor; I will post whether re-applying the paste works. One more thing: when heavy processes kick in, the fan starts to make a lot of noise, maybe trying to cool down the CPU? Any ideas on what else it could be and what I could do to test what is wrong?

    Read the article

  • How to execute a shell script on startup?

    - by vijay.shad
    I have created a script to start a server (my first question). Now I want it to run at system boot and start the defined server. What should I do to get this done? My findings tell me to put this file in /etc/init.d and it will execute when the system boots. But I am not able to understand how the first argument at startup will be "start". Is this predefined somewhere, so that "start" is used as $1? If I want to have a "startall" case that starts all the servers in the script, what are my options? My script is like this:

        #!/bin/bash
        case "$1" in
          start)
            start
            ;;
          stop)
            stop
            ;;
          restart)
            $0 stop
            $0 start
            ;;
          *)
            echo "usage: $0 (start|stop|restart)"
            ;;
        esac
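
    To answer the $1 part: init (via the rc scripts) invokes the file as /etc/init.d/yourscript start when entering a runlevel and with stop when leaving it, which is where the first argument comes from; a startall case would simply be another branch you call by hand. A minimal sketch of installing and registering the script on a Debian/Ubuntu-style system ("myserver" is a placeholder name):

        sudo cp myserver /etc/init.d/myserver
        sudo chmod +x /etc/init.d/myserver
        # create the S/K symlinks in the rc?.d directories so it runs at boot/shutdown
        sudo update-rc.d myserver defaults
        # test it by hand exactly the way init will call it
        sudo /etc/init.d/myserver start
        sudo /etc/init.d/myserver stop

    On Red Hat-style systems the equivalent registration tool is chkconfig, which additionally expects a chkconfig comment header in the script.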

    Read the article

  • Security when ssh private keys are lost

    - by Shree Mandadi
    I can't explain my problem well enough in words, so let me take an example (and please multiply the complexity by 100 for the solution). User-A has two ssh private keys, and over time has used their public keys on a number of servers. He lost one of them and has created a new pair. How does User-A inform me (the sysadmin) that he has lost his key, and how do I manage all the servers to which he had access (I do not have a list of all the servers User-A has access to)? In other words, how do I recall the public key associated with this private key? REF: with LDAP-based authentication, all servers would communicate with a single repository for authentication, and if I removed access or changed the password on that server, all systems that use this LDAP for authentication would be secured when User-A loses his password.
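
    There is no central recall mechanism for plain authorized_keys files: you need an inventory of hosts and then have to remove the lost key's public half everywhere it appears (which is the strongest argument for centralising keys in LDAP or a configuration-management tool going forward). A minimal sketch, assuming a host list in all-servers.txt and that the lost key can be identified by its comment field; all names and paths are placeholders:

        KEYID='user-a@old-laptop'   # comment (or any other unique part) of the lost public key
        while read -r host; do
          # -n stops ssh from swallowing the rest of the host list on stdin
          ssh -n root@"$host" \
            "grep -l '$KEYID' /home/*/.ssh/authorized_keys 2>/dev/null; \
             sed -i.bak '/$KEYID/d' /home/*/.ssh/authorized_keys 2>/dev/null"
        done < all-servers.txt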

    Read the article

  • dig and dig -x are answering differently

    - by erdemkeren
    I don't want the name provider to manage my records; I want to handle them myself. So I installed bind9 and did some configuration, reading some articles and following some examples. bind didn't show any errors after I created/edited the required files, but when I run dig www.foo.com, I see the IP of my name provider's advertisement page. When I run dig -x server_ip_address, however, I see the name I purchased. What am I doing wrong? Can't a server be its own nameserver? Is it a must to configure the records through the company I bought the name from? By the way, I realised that my previous question was not clear, so I deleted it and am asking the same question in a different way.
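
    A server can host its own zone, but the rest of the world only asks it if the delegation at the registrar points there: dig -x follows the reverse (in-addr.arpa) delegation, which the provider controls and has apparently pointed at your name, while forward lookups follow the NS records registered for the domain, which still point at the provider's parking/advertisement servers. So the one thing you must change at the company you bought the name from is its NS records (plus glue, if the nameserver lives inside the domain). A minimal sketch of checking, with foo.com standing in for your domain and server_ip_address as in the question:

        # who does the registry say is authoritative for the zone?
        dig NS foo.com +short
        # ask your own bind directly -- this should already return your record
        dig @server_ip_address www.foo.com A +short
        # follow the delegation from the root to see where answers really come from
        dig www.foo.com A +trace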

    Read the article

  • Should I split my website into different servers

    - by Nyxynyx
    I have a website where a user uploads photos; the photos get resized and thumbnailed and are stored on the server. At the same time, there are some INSERTs into a MySQL table regarding the uploaded photo (description, user id, etc.). The site currently runs off a managed VPS, and I love the support it provides. However, it is expensive to store the many small photos, and the resizing and thumbnailing processes do cause spikes in app performance. (Amazon S3 is pretty expensive, especially considering the costs of uploading many small files.) Question: would it be a good idea to move the image processing and image storage to another server, an unmanaged dedicated server with a much lower cost/GB, and keep the current VPS for its 24/7 support and for hosting the web app? Or should I move the entire site to the dedicated server?

        VPS specs: 16 cores @ 2.4 GHz (E5620), 1 GB memory, 60 GB storage, 3.5 TB transfer, $43/mth, managed (24/7 support)
        Dedicated specs: i3-2130, 2 cores @ 3.4+ GHz, 16 GB DDR3, 2 x 1 TB SATA2 storage, 15 TB transfer, $79/mth, unmanaged (weekday support)
        Software used: Apache, PHP, MySQL, Solr, PostgreSQL, ImageMagick

    Read the article

  • How to choose a web server for a Python application?

    - by Phil
    Information and prerequisites: I have a project which is, at its core, a basic CRUD application. It doesn't have long-running background processes which it forks at the beginning and talks to later on, nor does it have long-running queries or keep-alive connection requirements. It receives a request, makes some queries to the database and then responds. In order to serve static and cacheable files fast, I am going to use Varnish in all cases. Here is my question: after reading about various Python web application servers, I have seen that they all have their "fans" for certain, usually "personal", reasons, which got me confused since each use case differs from the next. How can I learn about the core differentiating factors of Python web servers (in order) to decide how suitable each is for my project and whether one would be better than another? What are your (technically provable) thoughts on the matter? How should I choose a Python web server? Thank you.

    Read the article

  • What are the benefits of running a app server in user space, like Unicorn, as opposed to as sudo?

    - by dan
    I've been using Phusion Passenger + Rails/Sinatra for a lot of projects. Passenger runs under the main Nginx or Apache process. But I'm interested in Unicorn, partly because it runs in user space. You just set up Nginx to proxy_pass requests to a unix socket that is connected to Unicorn processes that you fire up under a normal user account. Is there anything to be said as far as advantages and disadvantages of these two alternative approaches to running an web app? I mean in terms of ease of administration, stability, simplicity, etc.
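
    The case usually made for the Unicorn-style setup is isolation and deployability: the app server runs entirely under an unprivileged account, that account can start, stop and upgrade it without touching the web server, and a runaway or crashed worker cannot take Nginx down with it; the price is one more process tree to supervise. A minimal sketch of the user-space side (the user, paths and app name are placeholders):

        # as the unprivileged "deploy" user, start unicorn bound to a unix socket
        sudo -u deploy -H bash -c \
          'cd /srv/myapp && bundle exec unicorn -c config/unicorn.rb -E production -D'

        # config/unicorn.rb contains the socket the workers listen on, e.g.:
        #   listen "/srv/myapp/tmp/unicorn.sock"

        # nginx (the only piece that needs root, to bind port 80) just proxies to it:
        #   upstream myapp { server unix:/srv/myapp/tmp/unicorn.sock fail_timeout=0; }
        #   location / { proxy_pass http://myapp; }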

    Read the article

  • Using old RAID configured disk after new disk has been used in the controller

    - by Narendra
    I have a Dell PowerEdge T100 server with a Dell SAS 6 controller and two hard disks in RAID 1. Last week the server died, including one of the RAID 1 hard disks. We sent the server for repair and the problem with the PSU was fixed, but the repair guys also checked the RAID controller by configuring a new RAID with their own test hard disk. Now, if I install the one working RAID 1 disk and one new disk, will the RAID controller let me continue my old RAID 1, resync the new disk and carry on? What I fear is that the RAID controller will want the test hard disk from the repair guys, forcing me to reconfigure RAID 1 and wipe the working disk. If so, I'll have to back up the working disk, reconfigure RAID 1 and reinstall. Or is there a better way? Note: I'm using the Dell SAS configuration utility to manage RAID (press CTRL+C after BIOS).

    Read the article
