Search Results

Search found 10023 results on 401 pages for 'manage processes'.


  • Looking for an application to record audio and video on a Linux "embedded" device

    - by Luke404
    I am working with a Linux x86 device with limited CPU resources (as a prototype we just use a Pentium-M netbook). We'd like to record video from one V4L2 device (we'll probably end up using just USB Video Class devices, like all modern webcams) and one audio stream from an ALSA source. The device will have no screen or keyboard, and obviously no X11 environment. The goals are:
    - do as little work as possible to cope with the limited CPU resources - for example, I'd like to record video in the native MJPEG I get out of the UVC devices
    - encoding audio to MPEG Layer-2 (aka mp2) is OK, since it lets us save a lot of space (compared to raw PCM samples) and uses little CPU power
    - I don't mind losing some video frames here and there (UVC devices do that) as long as I can keep the audio and video streams synchronized
    - not require user input to start the thing (a Python script takes care of initialization, startup, shutdown, etc...)
    - be able to open the resulting files for postprocessing without too much effort (i.e., if mplayer or vlc can play it, it's fine)
    So far the only app I found that can be started from the command line and record V4L2 video + ALSA audio is mencoder, but I'm having some difficulties with it. It should be able to do that, but I cannot record audio and video together - just one of the two. And if I use two different processes to record to two different files, I have no way to get them in sync (audio is more or less always correct, but the video framerate varies over time and it seems to lack the timestamps needed to play it back at the correct speed). Long story short: how do you record an unconverted MJPEG stream (from a UVC device) and an audio stream (from an ALSA device, possibly encoded to any standard format) using a command-line tool, to a single file (MPEG or any other container), keeping audio and video in sync?
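
    A rough sketch of the kind of single-command capture being asked for, assuming ffmpeg is available on the device and that /dev/video0 and hw:0 are the V4L2 and ALSA devices (both assumptions):

    # pass the camera's MJPEG through untouched, encode audio to mp2, mux both into one file
    ffmpeg -f v4l2 -input_format mjpeg -i /dev/video0 \
           -f alsa -i hw:0 \
           -c:v copy -c:a mp2 -b:a 192k capture.mkv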

    Read the article

  • Red Hat server minimal install

    - by chmeee
    In a farm of virtualized Red Hat servers, there's a need to install a minimal system for security reasons. Minimal installs have several advantages (even non-security-related ones):
    - less exposure to vulnerabilities (if you don't need it, don't install it)
    - a better update process (fewer packages to update, less chance of breaking the system)
    - better performance (no unneeded daemons or processes)
    - the less software you have, the easier it is to harden the system
    Unfortunately, this is not easy because the "Minimal Installation" on Red Hat contains lots of unnecessary packages. There is an added challenge as the farm is running Oracle iAS. I've been told that iAS has dependencies on a local graphical environment, so at the moment every server in the farm has GNOME, X, etc. I've been searching the web, and one solution seems to be a kickstart script that installs only the necessary packages, but I find this difficult and have several doubts about how to maintain the system's dependencies afterwards. How do you install minimal Red Hat servers? Is it OK to use kickstart, or will I have dependency problems during installation or updates? Is there any way to avoid installing the graphical environment for iAS?
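
    A rough sketch of the kickstart angle - a stripped-down %packages section might look like the lines below (exact syntax varies between RHEL releases, and the extra packages listed are assumptions; anything iAS actually depends on would have to be added back):

    %packages --nobase
    @core
    openssh-server
    openssh-clients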

    Read the article

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now is:
    - keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones I will focus on
    - in iCal I write down events for each day, or for a concrete time (13 to 14 hours); I set up each day the tasks I want to accomplish, and allocate hours to them
    - I use Things (Cultured Code) to keep info about tasks and projects that are not very important and not time-allocated yet (GTD name = someday)
    - I use Mail on Mac and create folders, named after the different projects, for the mails I want to process
    - I save the main info for each project in FreeMind maps
    My system works well at the moment, but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements:
    - must be 100% offline accessible
    - should use as few programs/resources as possible, ideally just one program able to manage all my info
    - I can use the GTD methodology mixed with priorities, and I can allocate each task, converted to an event, on my calendar
    - I can have different daily/weekly, etc. views on a calendar to see the "big picture"
    - must run on Mac OS X Leopard
    - price does not matter, I will pay for this
    So, according to your experience, can you recommend something like this? Thanks

    Read the article

  • High disk I/O - jbd2/sda2-8 process

    - by Evan Hamlet
    I run a file server on CentOS 5.8 (final). My only concern at the moment is what appears to be intermittent but recurring high disk I/O activity causing a general slowdown, because of the jbd2/sda2-8 process. jbd2/sda2-8 is making use of /dev/sda2, which is the 2nd partition of the first hard drive (i.e. the root partition). More info: using "iotop", the culprit appears to be "jbd2/sda1-8" making writes every second, which appears to be a kernel process associated with journaling on the ext4 filesystem, if my googling around is correct. I see "jbd2/sda2-8" appearing here every now and then, but certainly not every 3 seconds; when idle, it appears about 1 or 2 times per minute. When I'm using the system, it appears more frequently. ATOP results: http://grabilla.com/02b14-8022db2e-4eb9-4f10-8e10-d65c49ad7530.png IOTOP results: http://grabilla.com/02b14-cf74b25d-4063-4447-9210-7d1b9b70e25b.png HTOP results: http://grabilla.com/02b14-ad8cad0e-89b0-46d3-849d-4fd515c1e690.png jbd2/sda2-8 is the process I see with iotop making writes to disk even though the machine is not in use at all. Does anyone have any idea how I could solve the high disk usage caused by the jbd2/sda2-8 process?
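
    One way to see which processes and files are generating the writes that jbd2 then journals is the kernel's block_dump switch - a sketch; it is very noisy, so turn it back off afterwards, and note the output lands in the kernel log:

    echo 1 > /proc/sys/vm/block_dump      # start logging block-level I/O to dmesg
    sleep 30
    dmesg | grep -E 'dirtied inode|WRITE block' | sort | uniq -c | sort -rn | head
    echo 0 > /proc/sys/vm/block_dump      # stop logging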

    Read the article

  • Ubuntu Wubi "drive" failure; mount drive in XP?

    - by 618034
    Hi there, I installed the Wubi distribution of Ubuntu on a separate partition (which is silly, since why do I care if Windows can still manage the partition?) a few months back; it was pretty awesome, until the Linux install got hosed. At this point, I can get Ubuntu to boot if I try really hard through GRUB, but once it does start, the screen is hosed, so no dice. At this point, I'd like to wipe it all and start over, but I need to get some stuff off the "disk" first. The Wubi install makes this difficult, since the "disk" is a flat file on an NTFS partition. I've done just about everything I can think of - I renamed the virtual disk to .iso, mounted it with VirtualCloneDrive, then used whatever magic EXT3 (EXT4?) readers I could dig up on the Internet to parse the mount - but nothing's working. Can you offer any suggestions? The "disk" is currently at D:\ubuntu\disks\root.iso. Many thanks! (I may be high-latency at the moment, apologies if I don't address follow-ups quickly)
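
    The usual way around this is to skip the Windows-side ext readers and loop-mount the Wubi disk image from an Ubuntu live CD/USB instead - a sketch, where device names and paths are assumptions and the image is normally named root.disk before any renaming:

    sudo mkdir -p /mnt/win /mnt/wubi
    sudo mount -t ntfs-3g /dev/sda1 /mnt/win                       # the NTFS partition holding D:\ubuntu
    sudo mount -o loop /mnt/win/ubuntu/disks/root.iso /mnt/wubi    # loop-mount the ext3/ext4 image
    # then copy whatever you need out of /mnt/wubi to an external drive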

    Read the article

  • Cable management techniques

    - by cornjuliox
    How do you manage the giant jungle of cables behind your PC? When you have 2 or more PCs next to each other, you wind up with this giant mess of cables that's a pain in the neck to clean, especially when both computers are running 24/7 and any fidgeting with the cables is likely to cause data loss and/or angry users. So far I've tried masking tape, cable ties and plain old string, but none have been very effective. The masking tape kept the cables in place, but over time it left this awful sticky residue on the sides of the cables that just won't come off, gets all over your fingers, and is horrible, horrible, horrible. I have nightmares about that stuff. We used cable ties and 'folded' up some of the longer cables so that they weren't any longer than they needed to be, but this meant that the positions of some of our devices, like the keyboard and the mouse, were essentially 'fixed' until we removed the ties. The string didn't work much differently and required that we tie it properly or risk it coming loose. I would switch to a wireless keyboard and mouse, but I don't want to have to deal with the added expense of batteries, even the rechargeable ones. Plus I don't want them to die on me at a crucial moment (happened to me once while playing Firearms >_<). I know that there are people out there with home/office networks a thousand times more convoluted than mine, so how do you keep yours under control?

    Read the article

  • Internet-based sync software that will keep running after Windows Live Sync stops doing PC-to-PC syncs?

    - by Warren P
    According to the Wikipedia page, Microsoft Live Sync will shortly stop offering the PC-to-PC sync service. There are lots of apps to sync two PCs on the same LAN, but I want to sync two PCs that are in different cities, across the internet, traversing two different NATs, and that requires some kind of service running on the internet that both connect into. There are already a few questions about syncing folders and files, but this is not a duplicate, because none of them answer this basic question: Microsoft Live Sync works better than RSYNC, or any of the linked sync solutions in any of the "not really duplicates", because it works even when the two PCs have NAT and firewalls between them that forbid direct connectivity - Windows Live Sync has a free always-on internet server that all the client PCs connect into. I'm looking for a FREE (no-fees) Microsoft Live Sync work-alike PC-to-PC sync solution that works between PCs and Macs, at least, as well as between PCs, and works behind NAT and firewalls at least as well as Microsoft's solution. (Note that Microsoft's solution makes only outbound socket calls to a Microsoft server, so this solution must necessarily include a server-hub component that is hosted publicly on a free site and which does not require that I set up, manage and pay for my own public internet hosting site.) Hint: none of the answers in the linked duplicate are equivalent (PureSync, FreeFileSync, BestSync 2010, SyncButler, Comodo BackUp, QuickShadow, Gbridge), in that none of them work for the PC-to-Mac situation, where firewalls and NATs prevent direct connection, or else they require money to be paid. When Microsoft Live Sync / Live Mesh finally kills direct PC-to-PC mode, the limitation will be that you will have to pay for more than 25 GB of cloud service, and you can then only sync PC #1 to PC #2 if you first sync to the cloud, then down to the other clients. I can currently sync 100 GB of data from one computer to another, only temporarily "moving the data" through Microsoft's data servers without using up my SkyDrive storage quota.

    Read the article

  • Apache crashes every 5min

    - by Simon
    I'm relatively new to server issues, having had a site I started early in the year grow beyond my ability to manage it. I need help. I recently moved out of my shared hosting environment onto a dedicated virtual server from Media Temple. Each week, I run a script that fetches data from my DB, fetches data from last.fm's API and then tweets information to Twitter. My server uses Virtuozzo, and when the script runs, Apache crashes every 5 minutes. I checked and saw that the 'kmemsize' parameter reaches its cap (it's 13 MB). I realise my problem: the MySQL process stays open for a long time while Apache has to handle lots of incoming requests (about 200,000 pageviews for that day according to my previous host's AWSTATS). Yes, I'm quite inexperienced in this, and I'm clearly killing the server with too much incoming traffic while it has to manage the updating of the DB. So that is the background; I have a few questions. 1) Why did my shared hosting environment not crash Apache every 5 minutes? It ran fine, the site only slowed a lot. Clearly, it must be the virtual container and the kmemsize limit? 2) Where do I go from here? Would a physical server (not a virtual container) encounter the same problems? I sent a support request to Media Temple as well. I need all the help I can get.
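
    On a Virtuozzo/OpenVZ container the kmemsize barrier and its failure counter can be watched directly, which confirms whether that limit (rather than Apache itself) is what kills the processes - a sketch:

    # inside the container, as root
    cat /proc/user_beancounters | egrep 'uid|kmemsize'
    # the last column (failcnt) counts how many times the kmemsize limit has been hit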

    Read the article

  • Typical outbound port list for guest access?

    - by Steve
    I manage a weekly rental house that includes wireless Internet access. I've allowed all outbound ports on my router, but my ISP has disabled my Internet access twice now because guests have downloaded (or served up) copyrighted content. So I'd like to institute some port filtering to discourage p2p sharing (see disclaimer below), but I don't want to inconvenience the 99.9% of folks who keep things above-board. My question is: what outbound ports are typically open for rental/hotel wireless Internet access, or where can I find such a list? TCP 80, 443, 25, 110 at a minimum. My own email service uses 995 and 465 for SSL, some guests may use IMAP, and I personally use SSH and FTP, so I'll open those. Roughly, I figure I need to open access to the privileged ports and close 1024 and above. Is there a whitelist I should institute for commonly used high ports? And does it make sense to block UDP 1024 and above? Disclaimer: I realize anyone replying to this message could circumvent the port filtering and share content to their heart's content. I do not need comprehensive p2p blocking, which requires more than a port whitelist. Anyone staying at the house shoulders the responsibility for their Internet use, per the rental contract. Also, anyone savvy enough to circumvent the port filters would hopefully be savvy enough to use some sort of peer blocking, thereby preventing the ISP from taking down the service.
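
    An iptables sketch of the kind of outbound whitelist described above (the exact port list is an assumption, and a consumer router's UI would express the same policy differently):

    # let reply traffic through
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # whitelist common outbound services for guests
    iptables -A FORWARD -p tcp -m multiport --dports 21,22,25,53,80,110,143,443,465,587,993,995 -j ACCEPT
    iptables -A FORWARD -p udp --dport 53 -j ACCEPT
    # drop everything else leaving the guest network
    iptables -A FORWARD -j DROP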

    Read the article

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on Server Fault. I was copying about 5 GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would assume the server to be bored by the dataflow, but as usual with this server, it really slowed down. At least I could work with the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague and he tried to log in to the same server via remote desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the control panel, a minute to open the performance monitor, ... Icons were loading maybe one per second. We saw the following (from memory):
    - CPU: 2%
    - Avg. Queue Length: 50
    - Pages/sec: 115 (?)
    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:
    - Windows 2003
    - Seagate ST3500631NS (7200 rpm, 500 GB)
    - LSI MegaRAID based RAID 5, 4 disks, 1 hot spare
    - Write Through, no read-ahead, Direct Cache Mode, hard disk cache off
    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!

    Read the article

  • flowchart for debugging a slow/unresponsive server

    - by davidosomething
    So the server is slow:
    - Roll back to the previous known working build
      - Success? Code problem.
      - Fail? Go on.
    - Ping the IP address
      - Success? Maybe a DNS problem; go on.
      - Fail? Server or connection problem; go on.
    - Ping and tracert your domain.com from inside your network
      - previous success and:
        - Fail: DNS problem.
        - Success? Go on.
      - previous fail and:
        - Fail? Go on; could be you or the network.
        - Success? Go on.
    - Try it from outside your network (http://centralops.net/co/)
      - Fail? The server's network connection sucks.
      - Success? If it failed inside your network, your network sucks.
    - Check the server load: CPU/RAM usage. Is it overloaded?
      - Yes? Who's the culprit? Kill some processes/reboot.
      - No? Go on.
    What other steps should I add?
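
    On a Linux box, the "check the server load" step often boils down to a handful of commands - a sketch:

    uptime                    # load averages
    vmstat 1 5                # CPU, memory, swap and I/O at a glance
    top -b -n 1 | head -20    # which processes are the culprits
    df -h; free -m            # full disks and exhausted RAM are common causes too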

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed it and restarted Domino my process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and in the Domino log it keeps telling me "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • Installing Joomla on Windows Server 2008 with IIS 7.0

    - by Greg Zwaagstra
    Hi, I have spent the past while trying to install Joomla on a server running Windows Server 2008. I have successfully installed PHP (using Microsoft's web tool for installing PHP with IIS) and MySQL, and am now trying to run the browser-based installation. Everything comes up green; I fill in the appropriate information regarding the site name, MySQL information, etc., and no errors are thrown. However, when I get to the step that asks me to remove the installation directory, I am unable to do so, as Windows states it is in use by another program (I cannot fathom how this is true). Also, no configuration.php file is created, so even if I managed to delete this folder I have a feeling there would be problems. I was thinking there was some kind of permissions issue, and have set the permissions for IIS_IUSRS to have read, write, and execute permissions for the entire folder that Joomla resides in, but this has not helped. Any help in this matter is greatly appreciated. ;) Greg EDIT: I decided to try installing Joomla manually by editing the configuration.php file by hand. This has worked great, and now I am certain there is some kind of permissions issue going on, because I am able to do everything that involves the MySQL database (create an article, edit menu items, etc.), but anything that involves making changes to the Joomla installation's directory does not work (install plugins, edit configuration settings using the Global Configuration menu within Joomla, etc.). I have granted IIS_IUSRS every permission except Full Control (reading on the Joomla! forums shows that this should be enough for everything to work). This is confusing to me and I am quite stuck on this problem. EDIT 2: The bizarre thing is that in the System Info under Directory Permissions, everything turns up as Writable, but then whenever I try to actually use Joomla to, for example, edit the configuration.php file using the interface, it says it is unable to edit the file.
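
    One quick way to check and grant the missing filesystem rights from the command line is icacls - a sketch, where the path is an assumption and, depending on the pool identity, the grant may need to target "IIS AppPool\<poolname>" instead of IIS_IUSRS:

    rem grant Modify, inherited by subfolders and files, to the IIS_IUSRS group
    icacls "C:\inetpub\wwwroot\joomla" /grant "IIS_IUSRS:(OI)(CI)M" /T
    rem list the resulting ACL to verify
    icacls "C:\inetpub\wwwroot\joomla"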

    Read the article

  • Where to get glib-config for Kubuntu?

    - by Carl Smotricz
    I'm trying to compile Midnight Commander on a Kubuntu 9.10 (Karmic) box with no root access. I've set up a directory under $HOME, downloaded the mc source package and various stuff required for building, such as autotools. I've unpacked the CONTENTS of all those packages into this working directory such that I have the usual ./usr, ./lib, ./etc hierarchy. I manage to get configure through a lot of tests, but I can't seem to fool it into finding glib:
    checking for glib-2.0...
    checking for glib-config... no
    checking for glib12-config... no
    checking for glib-config... no
    checking for GLIB - version >= 1.2.6... no
    *** The glib-config script installed by GLIB could not be found
    *** If GLIB was installed in PREFIX, make sure PREFIX/bin is in
    *** your path, or set the GLIB_CONFIG environment variable to the
    *** full path to glib-config.
    configure: error: Test for glib failed. GNU Midnight Commander requires glib 1.2.6 or above.
    My system has glib installed:
    /lib/libglib-2.0.so.0
    /lib/libglib-2.0.so.0.2200.3
    ... and I've also downloaded and unpacked the glib packages into my working directory:
    libglib2.0-0_2.22.2-0ubuntu1_i386.deb
    libglib2.0-dev_2.22.2-0ubuntu1_i386.deb
    ... but still the elusive glib-config is nowhere to be found. It's not in any Debian package for Karmic, either. So I'd appreciate any help getting over this hurdle. Please note, again, that I don't have root, so I can't just merrily apt-get stuff.
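
    One reason glib-config is in no Karmic package is that it only ever shipped with glib 1.x; glib 2.x is discovered via pkg-config instead. A sketch of pointing a glib-2.0-aware configure at the unpacked packages (paths are assumptions; an mc release old enough to insist on glib-config/glib 1.2 would need a newer source tree):

    # point pkg-config at the .pc files from the unpacked libglib2.0-dev package
    export PKG_CONFIG_PATH=$HOME/mcbuild/usr/lib/pkgconfig:$PKG_CONFIG_PATH
    export CPPFLAGS="-I$HOME/mcbuild/usr/include"
    export LDFLAGS="-L$HOME/mcbuild/usr/lib"
    pkg-config --modversion glib-2.0      # sanity check: should print 2.22.x
    ./configure --prefix=$HOME/mcbuild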

    Read the article

  • Server Manager from Windows 2008 to Hyper-V 2008 R2?

    - by Roger Lipscombe
    My workstation is running Windows Server 2008. I do not have local admin privileges. I have a Hyper-V Server 2008 R2 (i.e. Core + Hyper-V) box. On that box, I do have local admin privileges. I can Remote Desktop to the box, and Hyper-V Manager works fine (outside of Server Manager). It's just that there are some things that are easier to do in Server Manager (partition disks, etc.) than at the command line. I'd like to use Server Manager on my workstation to manage the Hyper-V box. However:
    - When I run Server Manager on my workstation, it prompts for elevation, and won't then let me connect to another server.
    - If I attempt to run MMC and then add "Server Manager" as a snap-in, it doesn't prompt me for the server name. Then it complains that I'm not an Administrator. It doesn't provide for connecting to another server.
    - The Remote Server Administration Tools (RSAT) are for Windows Vista and Windows 7 RC. These don't install on Windows 2008.

    Read the article

  • How can I recover a Fedora 12 installation that is showing signs of disk errors?

    - by Bob Cross
    I am currently overseas (i.e., very far from my normal library of tools) and my primary machine, which would normally act as the data server in the performance test we're trying to run, is failing to boot into Fedora 12 properly. This is a machine that, as of yesterday, was booting fine. However, this morning, very strange portions of the boot process were complaining with messages such as "unexpected 0x0 in rpcbind" and "bad file descriptor" (I don't have the error in front of me - I scavenged a Windows installation to get onto Server Fault). Eventually, the boot hung for a long time at the NFS service and then brought up what looked like the KDE login screen, but neither the mouse nor keyboard functioned. In olden days, I would try to get to a point where I could run fsck and pray that the bad sectors would come back into alignment just long enough for me to scrape the critical data off of the machine. However, now that we live in the future, it seems like our options in situations like this should be a little more varied. Is there a way to recover a Fedora 12 installation with bad disk sectors that won't boot properly? For completeness, I am comfortable working with bootable recovery distros-on-CD and such, but I don't know which one is likely to work best with modern Fedora. In the absence of guidance, I'm frantically torrenting the Fedora 12 Live CD and DVD, hoping to try rescue mode before tomorrow morning.
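
    If the disk itself is suspect, the usual order of operations from a live/rescue CD is to image the drive first and only then attempt repairs, so nothing more is lost if it deteriorates further - a sketch, where device names and mount points are assumptions and ddrescue may be packaged as gddrescue:

    ddrescue -n /dev/sda /mnt/usb/sda.img /mnt/usb/sda.log     # fast pass, skipping bad areas
    ddrescue -r3 /dev/sda /mnt/usb/sda.img /mnt/usb/sda.log    # go back and retry the bad sectors
    fsck.ext4 -fy /dev/sda2                                    # then attempt an in-place repair of the root fs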

    Read the article

  • How to send mail with PHP [migrated]

    - by roth66
    My little problem is with the mail() function in PHP: it doesn't want to send emails to my local server, or anywhere else. I don't think that function was supposed to send mail to addresses like [email protected]. So I've installed a mail server (hMailServer), installed a client (Dream-Mail), and installed sendmail.exe (actually unzipped it in a folder, then in php.ini set sendmail_path to point to it). After countless trials and errors, it still doesn't work. My system comprises an Apache 2.2 server and PHP (the last version, I think 5.3 or something), running on Windows. And now, to head off the usual questions (did you make rules in your firewall, etc.), I guess I should mention that there aren't any connectivity issues: everything is set to "local" (localhost), and ports 25, 110 and 143 are all open. And, after a few days of fiddling with my brand new mail server, I managed to make it work. The Dream-Mail client has a test through which it checks its connections, and according to it, the SMTP and POP3 connections are both successful; it even sends an email for testing. So yes, that part works. The problem remains: the PHP mail() function. And I really need it, since my website has a contact form which is useless right now. I've also checked the form itself, and it seems to be alright.
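
    On Windows, PHP's mail() normally talks SMTP directly rather than invoking a sendmail binary, so the relevant php.ini lines for a local hMailServer might look like this sketch (the from address is a placeholder):

    [mail function]
    SMTP = 127.0.0.1
    smtp_port = 25
    sendmail_from = webmaster@example.com
    ; on Windows, leave sendmail_path unset so the SMTP settings above are used
    ;sendmail_path =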

    Read the article

  • How to set only a specific nginx server block into maintenance mode programmatically

    - by Ville Mattila
    I am looking for a solution to automate one of our application's deployment processes. At the beginning of a deployment, I would like to programmatically put the specified server into maintenance mode, and once the deployment has been completed, remove the maintenance mode flag from the nginx server. By maintenance mode, I mean that nginx should respond with HTTP response code 503 to all requests (with a possible custom page). I know how to make the server block respond with a 503 code (see http://www.cyberciti.biz/faq/custom-nginx-maintenance-page-with-http503/), but the question is how to do this programmatically and most efficiently. Two options have come to mind.
    Option 1: At the beginning of the deployment process, write a maintenance file into the document root and conditionally check for the existence of that file in the nginx server config:
    server {
        if (-f $document_root/in_maintenance_mode) {
            return 503;
        }
    }
    This method adds some overhead, as the file's existence is checked on every request. Is it possible to check the file's existence only when the nginx config is loaded?
    Option 2: The deployment script replaces the whole nginx server configuration file with a maintenance version and swaps it back at the end of the deployment. If this method is used, I am concerned that other automation processes, like Puppet, may overwrite the maintenance configuration file.
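
    A sketch of what the two options look like from the deployment script's side (paths and site names are assumptions); option 1 needs no reload at all, while option 2 is shown as a symlink swap rather than overwriting the file in place:

    # option 1: flag file toggles maintenance mode instantly
    touch /var/www/example.com/in_maintenance_mode
    # ... deploy ...
    rm -f /var/www/example.com/in_maintenance_mode

    # option 2: swap the enabled config and reload
    ln -sfn /etc/nginx/sites-available/example.com.maintenance /etc/nginx/sites-enabled/example.com
    nginx -t && nginx -s reload
    # ... deploy ...
    ln -sfn /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
    nginx -t && nginx -s reload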

    Read the article

  • Libraries merged folder views

    - by Stigma
    So I pretty much love the Windows 7 Libraries feature, and saw one use for it that I thought would be perfect, but I can't seem to manage it: basically, a merged view of different folder structures. Suppose I make a new generic library and add three locations to it: C:\Test\, D:\Test\ and D:\temp\Test\. Now, these may look somewhat okay as long as there are no duplicates in these folders. (It wants to group them based on the included directory, which one can work around by looking on Google - I don't have the precise trick on hand, I am afraid.) But when you get collisions and, say, two of those directories have a Sub directory in them, stuff becomes unusable (assuming the Arrange by: Folder view). You'll have multiple folders listed named Sub, which is pretty useless when looking for data. I want folders to get 'merged', which ought to be possible somehow, since it can create these merged views based on artist, album, etc. in other views. So all subdirectories that are duplicated (and, recursively, any duplicates inside those, etc.) ought to be merged as far as the view is concerned. If files collide, I don't really care what happens - hide one, show both, filter out duplicates, whatever. (Although an option would be nice...) Anyhow, does anyone know how to get such a 'merged folder structure' functionality for Libraries? It would be really useful for me.

    Read the article

  • How to diagnose website performance/app pool recycling with Windows 2008/IIS7

    - by ilasno
    Ok, so there are various symptoms here (clients and our own employees complaining of intermittent slowdowns, getting 'kicked out' to the login page, or just having a save request fail to properly save the submitted data). The environment:
    - Windows Server 2008 (Datacenter), Service Pack 2, 64-bit, 2 x 2.8 GHz processors, 7.5 GB RAM
    - MS SQL Server 2008 (running on the same machine)
    - IIS 7
    There are ~10 websites running on the server, each in its own application pool - most of these pools are running in Integrated mode, 2 are in Classic, all are on .NET 2.0 and all run as ApplicationPoolIdentity. I'm trying to analyze, diagnose, and troubleshoot, and am struggling with where to get more info about what could be happening. Here are some steps I have already taken:
    - Set each application pool to recycle once per day, and removed any other automatic recycling
    - Set a Virtual Memory Limit for each to 1024000 KB (1 GB)
    - Enabled ALL 'Generate Recycle Event Log Entry' options (Config Changes, Isapi Reported Unhealthy, Manual Recycle, Private Memory Limit Exceeded, Regular Time Interval, Request Limit Exceeded, Specific Time, Virtual Memory Limit Exceeded)
    I have seen the app pool processes recycle (in Task Manager) - a new one will start up, and then the first one dies off - and this has happened without the memory or time going over the settings. This is a fairly new server, and most of these sites came from Windows Server 2003/IIS 6. Any 'next steps' for setting up information gathering, logging, diagnosing, etc. would be much appreciated! j
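
    Two quick checks that often help narrow this down - a sketch: appcmd shows which worker process belongs to which application pool, and once the 'Generate Recycle Event Log Entry' options are on, the recycle reasons land in the System event log.

    %windir%\system32\inetsrv\appcmd list wps
    rem query the System log for recent WAS entries (provider name is an assumption; check Event Viewer if it differs)
    wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-WAS']]]" /c:20 /rd:true /f:text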

    Read the article

  • VirtualBox communication from Linux to/from Windows 7

    - by J. Otto Tennant
    VirtualBox is running on Windows 7 as the host. VirtualBox has the two add-ons installed (one is called Guest Additions; I don't remember the other). The virtual machine has "bridged" networking selected. I have Samba set up (now, the problem may be here; it has been three or four years since I last did this) on the Linux guest machine. Neither guest nor host sees the other. From the Windows 7 command prompt, the IP address of the Linux guest pings. The IP address of another computer (a separate Windows 7 machine on the wireless network) pings from the Linux guest. (I have no idea what IP address the Windows 7 host itself has. The output of "netstat" does not seem to be useful.) So, it seems to me that something should be working. The only workgroup on the LAN is inventively named WORKGROUP. SMB4K should be seeing something. There must be a simple setup step that I am missing. (FWIW, there are two processes running smbd, and no process is running nmbd. YaST says that nmbd is set to run. I am not sure what this means.)
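
    Given that smbd is running and nmbd is not, a few checks on the Linux guest would narrow things down - a sketch, where the init script name is an assumption that varies by distro:

    sudo /etc/init.d/nmb start        # nmbd handles NetBIOS name resolution/browsing for WORKGROUP
    testparm -s                       # sanity-check /etc/samba/smb.conf (workgroup name, shares)
    smbclient -L localhost -N         # list the shares the guest is actually exporting
    # and on the Windows 7 host, "ipconfig" will show the host's own IP address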

    Read the article

  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple site-to-site IPSec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick client VPN. What I would like to do is standardize on an authentication method for all VPN then switch to the Cisco's IPSec thick VPN server. I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps and we already use dotnetopenauth to authenticate users for a couple internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server but we have LDAP accounts created for only about 10% of the user base (mostly developers for Linux shell access). I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much much better than standalone or even LDAP auth. What's really practical here?

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few (5 or so, and growing) boxes. The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now, and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent?
    1: Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if it's actually the best solution.
    2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot?
    I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with it. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.

    Read the article

  • AD domain on web servers behind NAT - DNS issues?

    - by Ant
    I'm trying to set up an AD domain to manage security between two Windows Server 2008 web servers that will sooner or later use NLB to balance website requests. I've hit a problem which I think has a simple solution and comes down to DNS. My website domain is mydomain.com. The two servers are running behind a NAT firewall on the 10.0.0.0 IP range. I've set up the AD domain to be called ad.mydomain.com (as recommended by MS and a few other answers to questions on here). The second web server, however, doesn't want to join the domain, and gives an error pinning the problem on DNS - "ensure that the domain name is typed correctly" - even though it queries the SRV record successfully and gets the correct DC back: dc.ad.mydomain.com. Running dcdiag /test:dns on the DC gives the delegation error 'DNS Server dc.mydomain.com Missing glue A record'. I have a feeling I need to add something to the public DNS so that it in some way knows about ad.mydomain.com. Can anyone suggest whether I'm on the right track in adding something to the public DNS? Or whether it's something else? Many thanks
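
    A quick way to confirm what the joining server actually resolves is to query the AD SRV record directly against the DC, and to check which DNS server its NIC points at - a sketch, with names following the ad.mydomain.com convention above:

    nslookup -type=SRV _ldap._tcp.dc._msdcs.ad.mydomain.com dc.ad.mydomain.com
    rem the joining server should use the DC (not the public resolvers) as its DNS server
    ipconfig /all | findstr /i "DNS"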

    Read the article

  • Facing error: "Could not open a connection to your authentication agent."; trying to add ssh-key.

    - by Kaustubh P
    I use Ubuntu Server 10.04. Running ssh-add /foo/cert.pem gave the following output:
    Could not open a connection to your authentication agent.
    These are my running processes:
    $ ps -aux | grep ssh
    Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
    root      1523  0.0  0.0  49260   632 ?     Ss   Dec25   0:00 /usr/sbin/sshd
    root     10023  0.0  0.3 141304  6012 ?     Ss   12:58   0:00 sshd: padmin [priv]
    padmin   10117  0.0  0.1 141304  2400 ?     S    12:58   0:00 sshd: padmin@pts/1
    padmin   11867  0.0  0.0   7628   964 pts/1 S+   13:06   0:00 grep --color=auto ssh
    root     31041  0.0  0.3 141264  5884 ?     Ss   11:24   0:00 sshd: padmin [priv]
    padmin   31138  0.0  0.1 141264  2312 ?     S    11:25   0:00 sshd: padmin@pts/0
    root     31382  0.0  0.3 139240  5844 ?     Ss   11:26   0:00 sshd: padmin [priv]
    padmin   31475  0.0  0.1 139372  2488 ?     S    11:27   0:00 sshd: padmin@notty
    padmin   31476  0.0  0.0  12468   964 ?     Ss   11:27   0:00 /usr/lib/openssh/sftp-server
    These are my environment variables:
    $ env | grep SSH
    SSH_CLIENT=192.168.1.13 42626 22
    SSH_TTY=/dev/pts/1
    SSH_CONNECTION=192.168.1.13 42626 192.168.1.2 22
    What is wrong? Why can't I add any identities? Thanks.
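
    That error usually just means no ssh-agent is running in (or exported to) the current shell - note there is no SSH_AUTH_SOCK in the env output above. A sketch of starting an agent and adding the key:

    eval "$(ssh-agent -s)"     # starts an agent and exports SSH_AUTH_SOCK / SSH_AGENT_PID
    ssh-add /foo/cert.pem
    ssh-add -l                 # list the identities the agent now holds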

    Read the article
