Search Results

Search found 6052 results on 243 pages for 'manage'.

  • Dell fumbles OpenManage installation process, forgets to write documentation?

    - by bwerks
    Hi all, I'm setting up a Dell PowerEdge 2950 for a small business, and I've just spent a while with Dell OpenManage Server Administrator 6.2, trying to clear the installation process of errors before I execute it. Right now I'm getting the following warning from the installer: "The installer has detected that the HTTPS listener is not configured for Windows Remote Management. You can either configure the HTTPS listener before installing Remote Enablement, or install Remote Enablement now by selecting the "Custom" installation screen and configure the HTTPS listener later. See the "Remote Enablement Requirements" section in the "Dell OpenManage Installation and Security User's Guide" for information on configuring the HTTPS listener. Note: Remote Enablement is required to manage this system from a remote Server Administrator Web Server and is applicable only for those systems that support Server Instrumentation. Click here to configure HTTPs Listener for Windows Remote Management." The italicized last line is a link, which executes... something... via cmd, and doesn't seem to help the problem. Not knowing exactly what to do here, I consulted the documentation. I read through the Setup and Administration section of the User's Guide, but all it contained was a weird primer on role-based security and some SNMP stuff. The next section skips installation entirely and moves on to features of the suite. Thinking myself crazy, I consulted the readme, which told me that for installation I should consult the "Dell OpenManage Installation and Security Version 6.2 User's Guide", which not only doesn't exist in the documentation bundle, but apparently doesn't exist anywhere on Google either. So yeah, if anyone is familiar with this problem, drop me some knowledge!
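
    In case it helps anyone answering: from what I can piece together, configuring the HTTPS listener by hand is done from an elevated command prompt with something like the following (the hostname and certificate thumbprint below are placeholders, and a suitable server certificate already has to be in the machine store):

        REM have winrm set up an HTTPS listener automatically
        winrm quickconfig -transport:https

        REM or create the listener explicitly, binding it to an existing certificate
        winrm create winrm/config/Listener?Address=*+Transport=HTTPS @{Hostname="myserver";CertificateThumbprint="<thumbprint>"}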

  • Pros and cons of server-management GUI tools for Linux web servers

    - by ajsie
    I've stumbled upon these GUI tools that help you manage your Linux server through a web interface: eBox, Webmin, ISPConfig, Zivios, ispCP, Plesk, cPanel, etc. I wonder what the pros and cons of these solutions are. A lot of people say they are not as good as using the pure command line (SSH) to manage your server, but I think that's just more of the usual "Linux is for advanced users" talk. I agree that some things may only be done from the command line by editing the configuration files directly, but I don't really want to do that every time and for everything. It's like not having phpMyAdmin for managing MySQL; that would be a pain in the ass, right? So if you want to throw up a web server serving a PHP site you developed yourself, with all the usual stuff up and running (MySQL, phpMyAdmin, SVN, WebDAV, etc.), are these tools the right way to go?

  • Automatically create an admin user when running Django's ./manage.py syncdb

    - by a paid nerd
    My project is in early development. I frequently delete the database and run manage.py syncdb to set up my app from scratch. Unfortunately, this always pops up: "You just installed Django's auth system, which means you don't have any superusers defined. Would you like to create one now? (yes/no):" Then you have to supply a username, a valid email address and a password. This is tedious. I'm getting tired of typing test\[email protected]\ntest\ntest\n. How can I automatically skip this step and create a user programmatically when running manage.py syncdb?
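
    For reference, the closest I've come to an answer is this sketch, assuming Django 1.x (syncdb accepts --noinput to skip interactive prompts, and create_superuser takes username, email and password; the credentials below are just my test values):

        # skip the interactive superuser prompt entirely
        python manage.py syncdb --noinput

        # then create the superuser non-interactively
        echo "from django.contrib.auth.models import User; User.objects.create_superuser('test', '[email protected]', 'test')" | python manage.py shell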

  • How to manage a big Linq DataContext?

    - by Rev
    Hi. The major problem in .NET programs is how to manage memory for best performance. That's why Microsoft put a garbage collector in .NET, and with it we don't need to do anything to manage memory (or, better said, we can use the GC easily). But when you develop a big project (a business app), you end up with many tables and databases in your project. So if you use Linq-to-SQL, you must build a DataContext that includes a hundred or more tables. That creates a problem: when you create an object from that DataContext, the object takes a big amount of memory. Also, we can't split the DataContext into several smaller ones, because of the relationships between tables. So how do we manage the DataContext and its memory?

  • Recommended way to manage persistent PHP script processes?

    - by BigglesZX
    First off - hello, this is my first Stack Overflow question so I'll try my best to communicate properly. The title of my question may be a bit ambiguous so let me expand upon it immediately: I'm planning a project which involves taking data inputs from several "streaming" APIs, Twitter being one example. I've got a basic script coded up in PHP which runs indefinitely from the command line, taking input from the Twitter streaming API and doing very basic things with it. My eventual objective is to have several such processes running (perhaps daemonized using the System Daemon PEAR class), and I would like to be able to manage them from some governing process (also a PHP script). By manage I mean basic operations such as stop/start and (most crucially) automatically restarting a process that crashes. I would appreciate any pointers on how best to approach this process management angle. Again, apologies if this question is too expansive - tips on more tightly focused lines of enquiry would be appreciated if necessary. Thanks for reading and I look forward to your answers.
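
    In case it helps frame answers: the crudest version of "restart a process that crashes" I've seen is a supervising shell loop around each worker (the script path here is made up), though I suspect there are better options:

        #!/bin/sh
        # keep the worker running: relaunch it whenever it exits
        while true; do
            php /opt/collectors/twitter_worker.php
            echo "worker died with status $?, restarting in 5s" >&2
            sleep 5
        done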

  • Manage a network database with SQL Server Management Studio 2005 Express

    - by johntotetwoo
    Hi there, mates :) I'm new to databases and database handling, and I was wondering whether it's possible to manage a network database with SQL Server Management Studio Express 2005. I have two PCs at home, connected via a router. PC 1 has SQL Server 2005 Express and SQL Server Management Studio 2005 Express, while PC 2 has only SQL Server 2005 Express. Is it possible for PC 1's Management Studio to manage PC 2's server? I'd be glad if you could help me with this. Happy New Year!
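
    One thing I've already run into while reading up: SQL Server 2005 Express ships with remote connections disabled, so PC 2 apparently needs some preparation first (TCP/IP also has to be enabled in SQL Server Configuration Manager; SQLEXPRESS is the default instance name):

        REM on PC 2: make sure the SQL Browser service runs, so the instance can be found by name
        sc config SQLBrowser start= auto
        sc start SQLBrowser

    After that, PC 1's Management Studio should be able to connect to PC2\SQLEXPRESS like any other server.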

  • How to manage the Media Center Extender schedule from Media Center in Windows 7

    - by Pangea
    I want to have several machines set up as digital signage displays that pull media (.pptx, video, images, etc.) from a centralized place. This is easy enough, but what I really need help with is managing which extenders play which media, and managing the schedule, from a Windows 7 x64 client - hopefully within Media Center, but I'm open to another management app. Is it possible to dictate the time and location each piece of media is pushed to? I can use either a full machine running a full Win7 OS or a simple MC extender.

  • Delegation of Solaris Zone Administration

    - by darrenm
    In Solaris 11, 'Zone Delegation' is a built-in feature. The Zones system now uses fine-grained RBAC authorisations to allow delegation of management of distinct zones, rather than all zones, which is what the 'Zone Management' RBAC profile did in Solaris 10. The data for this can be stored with the zone, or you can create RBAC profiles (which can even be stored in NIS or LDAP) that grant access to specific lists of zones to administrators. For example, let's say we have zones named zoneA through zoneF and three admins: alice, bob and carl. We want to grant a subset of zone management to each of them. We can do that either by adding the admin resource to the appropriate zones via zonecfg(1M), or by working with the RBAC data directly.

    First, let's look at an example of storing the data with the zone:

        # zonecfg -z zoneA
        zonecfg:zoneA> add admin
        zonecfg:zoneA> set user=alice
        zonecfg:zoneA> set auths=manage
        zonecfg:zoneA> end
        zonecfg:zoneA> commit
        zonecfg:zoneA> exit

    Now the alternate method of storing this directly in the RBAC database, this time for all our admins and zones:

        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneA alice
        # usermod -A +solaris.zone.login/zoneB alice
        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneB bob
        # usermod -A +solaris.zone.manage/zoneC bob
        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneC carl
        # usermod -A +solaris.zone.manage/zoneD carl
        # usermod -A +solaris.zone.manage/zoneE carl
        # usermod -A +solaris.zone.manage/zoneF carl

    In the above, alice can only manage zoneA, bob can manage zoneB and zoneC, and carl can manage zoneC through zoneF. The user alice can also log in on the console of zoneB, but she can't perform the operations that require the solaris.zone.manage authorisation on it.

    Alternatively, if you have a large number of zones and/or admins, or you just want a layer of abstraction, you can collect the authorisation lists into an RBAC profile and grant that to the admins. For example, let's create RBAC profiles for the things that alice and carl can do:

        # profiles -p 'Zone Group 1'
        profiles:Zone Group 1> set desc="Zone Group 1"
        profiles:Zone Group 1> add profile="Zone Management"
        profiles:Zone Group 1> add auths=solaris.zone.manage/zoneA
        profiles:Zone Group 1> add auths=solaris.zone.login/zoneB
        profiles:Zone Group 1> commit
        profiles:Zone Group 1> exit
        # profiles -p 'Zone Group 3'
        profiles:Zone Group 3> set desc="Zone Group 3"
        profiles:Zone Group 3> add profile="Zone Management"
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneD
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneE
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneF
        profiles:Zone Group 3> commit
        profiles:Zone Group 3> exit

    Now, instead of granting carl and alice the 'Zone Management' profile and the authorisations directly, we can just give them the appropriate profile:

        # usermod -P +'Zone Group 3' carl
        # usermod -P +'Zone Group 1' alice

    If you want to store the profile data and the profiles granted to the users in LDAP, just add '-S ldap' to the profiles and usermod commands. For a documentation overview, see the description of the "admin" resource in zonecfg(1M), profiles(1) and usermod(1M).

  • Manage Upload Permissions, SFTP & Linux

    - by John R
    I'm new to Linux. I am working with a Red Hat 5.5 server and am using a Java-based SFTP script that will allow multiple users to upload text files to the server. I am undecided whether each user will have a separate directory or whether I will use a naming convention that includes their customer ID. The files include some personal information about their LAN settings, so I prefer to use SFTP as opposed to FTP; it is my understanding that SFTP is encrypted (also, I have a Java class configured to upload via SFTP, so I'd rather not switch protocols unless there is a very good reason). The prototype is for a system that will support large numbers of customers, and the thought of continually adding and removing clients through the command line seems highly impractical. (Again, I am new to and still learning Linux and Red Hat.) What are the normal conventions for giving multiple users permission to upload files over SFTP, with a unique username and password for each?
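
    To make the question concrete, here is the sort of per-customer setup I have in mind (the group name, paths and customer ID are invented; note that ChrootDirectory needs OpenSSH 4.9+, and stock RHEL 5.5 ships an older OpenSSH, so this may require a backport):

        # one group for all upload-only customers
        groupadd sftpusers

        # one shell-less, password-authenticated account per customer ID
        useradd -g sftpusers -d /srv/uploads/cust1001 -s /sbin/nologin cust1001
        passwd cust1001

    and in /etc/ssh/sshd_config (the chroot directory itself must be owned by root):

        Subsystem sftp internal-sftp
        Match Group sftpusers
            ChrootDirectory /srv/uploads/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no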

  • How to manage groups and users in Jenkins

    - by Michael
    I'm trying to use the role-based security plugin in Jenkins, but I'm not sure I am using it right. I've decided to go with Jenkins' own user database as the security realm instead of LDAP, and I'm adding the users one by one. Now, in the Assign Roles screen I have global roles like administrator, read-only, etc., and I have project-specific roles like prod_a_developer, prod_b_developer... For each user, do I have to both assign one of the global roles and also assign a specific project role? Also, how do I assign a user to a group? Instead of assigning each user a global role, I want to assign a group a global role. Not so trivial. Can someone please help me? Thanks.

  • How to manage configuration & automatic rollout of 20 virtual machines

    - by Lucas Meijer
    I have a TeamCity build server, with about 20 "build agents", both Windows and Mac OS machines. Often I need to install a newer version of Xcode or Visual Studio or some other tool. Having to do this on all machines manually is boring and error-prone. I'm trying to find out the best way to achieve the following:

    - make it easy to change a system configuration, without having to do it on all machines manually
    - make it easy to add a new machine to the group
    - ensure the machines are as identical as possible

    The jobs these machines execute are relatively heavy, fully consuming 8 cores and being very heavy on IO. It's fine if the solution involves spending money.

  • Manage MS SQL Server over SSH (PuTTY)

    - by adopilot
    There is a great implementation of PuTTY for Symbian S60 devices, and the latest Nokia phones with full QWERTY keyboards make using PuTTY and its SSH goodies quite comfortable. I started wondering: is there a way to access MSSQL over SSH and send T-SQL commands? I am able to connect to my router, which runs FreeBSD, and other CentOS servers with MySQL are also reachable for me over SSH. I want to try to access MS SQL 2005 from my mobile phone. This is probably not going to see daily use; it is just for geek points and to justify to myself why I had to buy so expensive a toy.
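
    To make the goal concrete: assuming some SSH daemon were running on the Windows box (e.g. Cygwin's OpenSSH), the session I have in mind would just use sqlcmd, which ships with SQL Server 2005:

        # after ssh'ing into the SQL Server box, run ad-hoc T-SQL with sqlcmd
        sqlcmd -S localhost -E -Q "SELECT name FROM sys.databases"

    Failing that, an SSH port forward through one of the Linux boxes (ssh -L 1433:mssqlhost:1433 ...) plus a TDS-speaking command-line client like FreeTDS' tsql might do the job.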

  • How to manage service failover?

    - by Jader Dias
    I am using Windows Network Load Balancing to keep my apps available even when one of the servers is down. But when all servers are up and one instance of a service on one of them is down, I would like requests not to be sent to it, because those requests will be lost. Is there any solution that addresses this problem?

  • How to manage website and user permissions with Apache on Mac OS X

    - by Sander Versluys
    I have lots of different websites in a Development directory in my home dir. While developing, files get saved under my username, but websites configured under Apache need their permissions set to the _www user and group. What's the best way to handle this? Do I run Apache under a different user/group? Do I run my development tools under a different user? Do I add myself to the _www group? (Seems like that doesn't work, btw.) I've just switched to a Mac and I'm trying to find a smooth development workflow, so it would be best if I could just run the necessary tools, save some files and be able to test the website without much hassle. Thanks!
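
    For reference, this is the group-membership variant I tried (the project path is just an example); maybe someone can spot what I'm missing:

        # add the current user to Apache's _www group
        sudo dseditgroup -o edit -a "$USER" -t user _www

        # hand the site over to user:_www and let the group write
        sudo chown -R "$USER":_www ~/Development/mysite
        chmod -R g+w ~/Development/mysite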

  • How an OS manages running programs

    - by Vit
    Hi, since Windows is a multitasking OS, it must somehow switch between processes. But how is that done? Does Windows insert some kind of breakpoints into running programs that switch execution to another process? Because when a program is executed, it takes control of the CPU, so to enable multitasking the OS would have to insert some break instruction into the program. Am I right?

  • Apache2: manage the default server

    - by Jaime E. Valdez
    I'm trying to set up two domains correctly, and I have some issues I hope you can help me with. Site one's conf:

        <VirtualHost myipaddress:80>
            ServerName www.domain1.com
            ServerAdmin [email protected]
            DocumentRoot /home/domain1/public_html
        </VirtualHost>

    My other domain's conf is:

        <VirtualHost myipaddress:80>
            ServerName www.domain2.com
            ServerAlias *.domain2.com domain2.com
            ServerAdmin [email protected]
            DocumentRoot /home/domain2/public_html
        </VirtualHost>

    The default site is disabled. The problem is that when I access "domain2.com" from my browser, it always redirects to "www.domain1.com". It only works when I explicitly access "www.domain2.com". I also have other domains like "domain1.net" and "domain1.info" pointing to my server that are not configured or set up in Apache yet, and accessing them from a browser also always lands on "www.domain1.com". By the way, is there any possible configuration in Apache to handle IP-only access? I mean, if I type "http://myipaddress/" I get "www.domain1.com"... Arrgh.
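
    A guess at what's going on, in config form (assuming Apache 2.2-style name-based virtual hosting): if no NameVirtualHost directive covers the address, Apache serves every request on that IP from the first vhost it finds, which would explain both symptoms. A dedicated catch-all loaded before the real sites would also pin down what unmatched names and bare-IP requests get:

        # enable name-based vhosts on this address (httpd.conf / ports.conf)
        NameVirtualHost myipaddress:80

        # first vhost loaded = default for unmatched Host headers and bare-IP requests
        <VirtualHost myipaddress:80>
            ServerName catchall.invalid
            DocumentRoot /var/www/default
        </VirtualHost>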

  • Manage song metadata on CentOS from command line

    - by Puddingfox
    I am making a simple Pandora.com alternative for myself and a few friends, where a user can upload his/her songs and listen to them anywhere. My intent is to make a lightweight, simple player in HTML5, so all the user needs is a current Firefox or Chrome to use it. I have set it up so that all uploaded songs get converted to .ogg and added to a database, but I also want some metadata (not sure if that is the correct term) for the songs to be stored in the database so the player can tell the user what song he/she is listening to. I know there are several GUI tools for managing the title/artist/album info of songs, but I'm having trouble finding any good ones I can use from the command line. If the song already has the information in the file, I think I can use mplayer to retrieve it, but it would be really great if there was one that would look up the song information online. I don't mind interfacing with an API (it would be pretty interesting, actually). Do you guys have any suggestions?
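
    Since everything ends up as .ogg anyway, the tagging half of this can probably be covered by vorbiscomment from the vorbis-tools package (the file name and tag values here are examples):

        # read the tags embedded in an Ogg Vorbis file
        vorbiscomment -l song.ogg

        # replace the tags wholesale
        vorbiscomment -w -t "TITLE=Some Song" -t "ARTIST=Some Artist" -t "ALBUM=Some Album" song.ogg

    The online-lookup part would still need an external service; MusicBrainz has an API for this sort of thing.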

  • How should I manage VPS email?

    - by Xeoncross
    I have been slowly learning how to run a Linux VPS for a while now. Since I build websites, I'm confident running and securing a web server like nginx... or at least there haven't been any casualties yet. However, email scares me. Almost all websites require email to communicate with users. Most of the time email is only needed on my sites during registration, as a means of verification, and I hardly ever need to accept incoming mail. Nevertheless, my lack of understanding of how email servers can be abused worries me. Not only do you need to secure email servers, you also have to prove to the world that your emails are legit and constantly fight against being blacklisted. Ensuring my emails' good name is not something I want to devote my life to. What should someone like me do to send emails from my VPS? Should I look for a company to send email through that can worry about this for me? Should I just use Google Apps until my sites are large enough to worry about it? Or is all this just ignorant fear, and running your own email server (that actually works) really is easy?

  • SCCM: Manage Network Config Baseline

    - by Malnizzle
    I'm new to SCCM and trying to figure out the best way to do things. I want to create a configuration baseline that defines network settings (e.g. the default gateway) for a collection of servers. Is finding the applicable registry key and deploying the config that way the best method? Is there a configuration pack that includes pre-determined values? Or maybe a third option I am unaware of? Thank you!

  • How to manage iowait over cifs?

    - by Silvia
    For backup purposes we have a CIFS file server running that contains encrypted containers for backing up the more sensitive data. The container is mounted with cryptsetup and loop as a local filesystem, and rsync is used for the backups. Because the CIFS server is not the fastest machine ever built, running the rsync process results in iowait on the servers running the backup, which in turn drives Nagios into an email frenzy. The question is: how do I reduce the iowait on the server? Configuring Nagios not to report it seems more like a workaround than a solution. Stretching the backups over different time intervals has already been done, with little effect, and spending money is also not an option because apparently we are talking about a "non-critical system".
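
    One knob I'm aware of, sketched here (the source and destination paths are made up): rsync can throttle its own transfer rate, and ionice can drop the backup into the idle I/O class so it only touches the disk when nothing else wants it (the idle class requires the CFQ scheduler):

        # cap rsync at ~5 MB/s and give it lowest I/O priority
        ionice -c3 rsync -a --bwlimit=5120 /mnt/backup_container/ /srv/backups/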

  • How to manage process-to-CPU core affinity?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure the GlusterFS processes will always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that the GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I consider two main points:

    - the GlusterFS processes
    - I/O for data access (on local disks, or remote disks)

    I thought about binding the Linux kernel and the GlusterFS instances to specific "processors". I would like to be sure that:

    - no grid job will impact the kernel and the GlusterFS instances
    - researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs)

    But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
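
    The mechanics I've found so far, for concreteness (the CPU numbers and the IRQ number are placeholders; isolcpus= on the kernel command line can additionally keep the scheduler off a reserved set of cores entirely):

        # pin every GlusterFS server daemon to logical CPUs 0-3
        for pid in $(pidof glusterfsd); do taskset -pc 0-3 "$pid"; done

        # steer interrupt 30 onto CPU 0 only (the value is a hex CPU bitmask)
        echo 1 > /proc/irq/30/smp_affinity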

  • Linux - How to manage the password of root?

    - by Jonathan Rioux
    We have just deployed a couple of Linux servers. Each sysadmin will have his own account on the server (i.e. jsmith), and will connect using SSH with a certificate placed in the "authorized_keys" file in their home directory. Once connected to the server, if they want to issue an elevated command, they will do something like: sudo ifconfig They will then enter the root password. What I would like to know are the best practices for managing that root password. Should I change it periodically? And how do I share the new password with the sysadmins? (Of course, I will disable root logon in SSH.)
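
    Worth noting while I wait for answers: by default, sudo asks for the invoking user's own password, not root's; it only asks for root's when "Defaults rootpw" is set. So one sketch of an alternative that avoids sharing a root password at all (the group name is an example; edit with visudo):

        # /etc/sudoers
        # members of the sysadmins group may run anything,
        # authenticating with their own password
        %sysadmins ALL=(ALL) ALL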

  • TFS 2010 and SCVMM to manage snapshots

    - by ELSheepO
    I have a VM with TFS 2010 and SCVMM 2008 installed on it (I know you're told not to do that). I have SCVMM linked to the host and that's working well, and I have configured SCVMM on the Lab Management tab in TFS. My question is: how can you take and roll back snapshots automatically? I.e.:

    1. install base
    2. take snapshot
    3. install patch
    4. run tests
    5. take snapshot
    6. go back to the snapshot taken at step 2

    Thanks in advance.
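
    In case it helps, the scripted route I've been eyeing uses the VMM 2008 PowerShell snap-in (from memory, so treat the cmdlet and parameter names as approximate; the VM name is a placeholder):

        # connect to the VMM server, take a checkpoint, later roll back to it
        Get-VMMServer -ComputerName localhost
        $vm = Get-VM -Name "BuildAgent01"
        $base = New-VMCheckpoint -VM $vm -Name "base install"
        # ... install patch, run tests ...
        Restore-VMCheckpoint -VMCheckpoint $base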

  • How to set up a staging apt repository to securely manage upgrades

    - by andreash
    Hello, I would like to be able to run automatic apt-get upgrade (once per hour) on our servers (Ubuntu 10.04), so that I don't have to do it manually on all of them (about 15). However, for production machines, that's not a good idea ... So here's my idea: Set up a local repository for all 'approved' updates for critical packages. I would then push updated packages from upstream to our local repo after I tested them, and all servers could automatically (apt-cron?) upgrade from this repository. So my question is this: How do I configure apt on the clients so that they use the local repository only for all packages which exist on the local repository, and the upstream one for all other packages? Does this actually make sense? Or am I missing something? Anyways, thanks for your insight! Andreas.
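
    To make the intent concrete, my understanding is that apt pinning should do exactly this (the repo hostname and release name are made up). In /etc/apt/sources.list.d/internal.list:

        deb http://apt.internal.example/ubuntu lucid main

    And in /etc/apt/preferences:

        Package: *
        Pin: origin apt.internal.example
        Pin-Priority: 1001

    A priority above 1000 means the internal version wins even if upstream carries a higher version number, while packages the internal repo doesn't have fall through to the normal upstream sources.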

  • How to manage linux workstations with policies

    - by redknight
    I am going to be administering a small network of Linux-based workstations for a charity institution (not all have the same distro; some are Ubuntu and some are Fedora). Is there something in Linux that is similar to Group Policy in Windows? For example, I would like to standardize the wallpapers, have only Firefox as a browser, make VLC the default media player, etc. Thank you, any suggestions are very appreciated.
