Search Results

Search found 87589 results on 3504 pages for 'veritas cluster server'.


  • NFS server Windows 2008 - mounting via linux - input/output error help.

    - by pablo
    I want to try mounting a folder with NFS. I set up the NFS sharing on a Windows 2008 R2 server, specified hosts in the NFS permissions (by IP address) and mounted via /etc/fstab. It mounts, but when I try to list the folder I get 'input/output error'. The owner/group on the local mount point look weird too: drwx------ 2 4294967294 4294967294 4096 2011-02-10 19:15 data/ I mounted in /etc/fstab as: 10.0.6.55:/share$ /media/data nfs soft,intr,rsize=8192,wsize=8192 What am I doing wrong?
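    The 4294967294 owner/group is the unsigned form of UID -2, the anonymous "nobody" identity, which points at the Windows NFS server not mapping the client's UNIX identity rather than at the mount options. A minimal sketch of what to check from the Linux side, assuming the export name matches exactly (trailing $ included):

        # Confirm the export is actually visible to this client
        showmount -e 10.0.6.55

        # Full six-field fstab entry with explicit AUTH_SYS security;
        # everything else matches the original mount options
        10.0.6.55:/share$  /media/data  nfs  soft,intr,rsize=8192,wsize=8192,sec=sys  0  0

    If the listing still fails, the identity mapping ("unmapped UNIX user access") settings on the Windows side are the next place to look.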

    Read the article

  • Error connecting to the application.

    - by ahmed
    Hi guys, let me explain the current scenario. We have an ASP.NET application on Framework 2 running on the intranet (Windows 2003 Server and SQL Server 2000). Now we have an XP machine where we installed and configured IIS with a virtual directory pointing to the local XP machine, and this machine is connected to our intranet. We copied the same application files from the server to this XP machine, but the connection string/database of the application still points to the intranet server. The problem is that when we try to run the application on the XP machine we get this error: An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Is this query related or concerned with this site or stackoverflow?
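    That error usually means the client machine cannot reach the SQL Server over the protocol the connection string implies. A hedged sketch of a web.config connection string that forces TCP/IP to the intranet server instead of Named Pipes; the server, database, and credential values are placeholders, not the asker's actual settings:

        <connectionStrings>
          <!-- "tcp:" forces the TCP/IP provider; 1433 is the default SQL port -->
          <add name="AppDb"
               connectionString="Data Source=tcp:IntranetServer,1433;Initial Catalog=AppDb;User ID=appUser;Password=secret"
               providerName="System.Data.SqlClient" />
        </connectionStrings>

    TCP/IP and remote connections must also be enabled on the SQL Server itself (Server Network Utility on SQL 2000, Surface Area Configuration on 2005).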

    Read the article

  • Strange network traffic on my dedicated server. How do I find out what is causing it?

    - by valter
    I noticed strange outgoing traffic on my dedicated server and I'd like to know how I can find out what is causing it. [One-day and one-week traffic graphs.] As you can see, there are bursts of spikes to over 20 megs of outgoing traffic, and there is always a lower level of outgoing data before the burst. Is there any command I could use to find out which application is causing this? Sorry I couldn't express this in the best form.
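    Assuming a Linux server and that eth0 is the public interface, a few standard tools can tie the spikes to a process or destination; a minimal sketch:

        # Which processes own network sockets right now (run as root)
        netstat -tunp

        # Live bandwidth per process, if the package is installed
        nethogs eth0

        # Live bandwidth per connection/destination
        iftop -i eth0

    Running nethogs or iftop while a spike is in progress is usually enough to name the offending application.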

    Read the article

  • Set up a "relay" service

    - by trikks
    Hi! I'm trying to create a 'left client' <-> middle server <-> 'right client' setup but need some advice and tips. Let's say that I have a server daemon on the left side, like a VNC server, that connects to the middle server. On the right side I've got a client that wants to connect to the left server daemon, but it has to be done through the middle server. I assume this should be done with some tunneling service. The server environment is a fully featured Debian Linux or Mac OS X Server host. Any ideas? Thanks / Trikks
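    One common pattern for this, assuming plain OpenSSH on the middle server, is a pair of tunnels that meet there; a sketch with placeholder names:

        # On the left host: publish the local VNC daemon (port 5900)
        # on the middle server's loopback port 5900
        ssh -N -R 5900:localhost:5900 user@middle-server

        # On the right client: pull that port back to a local one
        ssh -N -L 5900:localhost:5900 user@middle-server

        # The right client's VNC viewer then connects to localhost:5900

    Because the right client also tunnels in, the middle server's GatewayPorts setting can stay at its default.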

    Read the article

  • Windows Server 2003: Is there a limit on the number of TCP connections per process?

    - by aceinthehole
    We are running into issues with a BizTalk host instance intermittently going down. One of the things we are worried about is the number of FTP connections a single host instance is making, which could easily reach into the hundreds, perhaps sometimes thousands, depending on traffic. My question is: does Windows Server 2003 have a limit on the number of TCP connections per process? If so, would putting each application in its own host instance potentially solve the problem?
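    For what it's worth, the usual ceiling on 2003 is machine-wide rather than per-process: outbound connections consume ephemeral ports, capped by the MaxUserPort registry value (HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, default upper bound 5000). A rough way to count what one process is holding, with 1234 standing in for the host instance's PID:

        :: Rough count of established connections owned by PID 1234
        :: (cmd.exe; the PID is the last column of netstat -ano output)
        netstat -ano | find "ESTABLISHED" | find /c "1234"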

    Read the article

  • How can I retrieve statistics from my GhostCast server?

    - by Foxtrot
    I have a GhostCast server running for deploying images. I would like each GhostCast session to write statistics to a file (it can be multiple text files, or appended to one file already there). I know this is possible based on the options the GhostCast software provides for writing to a log file, but I would like this automated for every image being backed up and restored. I don't want my employees to have to click "write to a new file" every time. Is this possible?

    Read the article

  • Scanning to network share

    - by tking
    In our environment we have several printers with the scan-to-folder functionality set up, and it worked pretty well. Long story short: I needed to reboot our file server one evening (the server hosting the shares where all the scans go). Upon boot-up I received a message stating that the name of the file server (i.e. AcmeFS1; Windows Server 2003) could not be used because there was a duplicate name on the network. I found that somehow another printer on the network was using the name of the file server. Weird, I know. People could no longer scan to their folders. I ran nbtstat on the file server and found that the local NetBIOS name table was empty. I renamed the offending printer and rebooted the file server once more. This time there was no error, and once I ran nbtstat again I found the correct name and domain in the local NetBIOS name table. Problem is, scan to folder was still not working, and I know it was working before I rebooted the file server the first time. Anyone have any idea what is going on and how to fix it? Thanks. FIXED: Not sure why the reboot didn't fix it, but I restarted the "Server" service on the file server and the problem went away.
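    For reference, the check and the fix described above, as run from cmd.exe on the file server:

        :: List the local NetBIOS name table (it was empty during the problem)
        nbtstat -n

        :: Restart the Server (file/print sharing) service; /y stops
        :: dependent services without prompting
        net stop server /y
        net start server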

    Read the article

  • Should I install GCC 64 bit on a new server?

    - by diederikvanliere
    I am going to rebuild my server and I am wondering whether I should install GCC 32 or 64 bit. I develop in Python and I use some libraries that would benefit from a 64 bit GCC installation but I am not sure if I am going to run into problems with other programs / libraries. What are your thoughts?

    Read the article

  • Using cssh with an ssh clone

    - by decimus phostle
    Just had a quick question about using ClusterSSH (cssh) with a homegrown ssh-like/ssh-clone application that we use internally to connect to remote machines. I tried using the ssh = /path/to/ssh_clone override in $HOME/.csshrc or /etc/csshrc (or even passing it via -C). Unfortunately, running cssh with debug shows that it is still trying to connect to the remote machines as ssh -l user machine_ip instead of ssh_clone -l user machine_ip. Any thoughts/suggestions on how to work around this would be greatly appreciated. TIA. Edit: version info: $ cssh -v Version: 4.01_02
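    If the config key keeps being ignored, a blunt workaround that needs no cssh support at all is a wrapper script that shadows ssh on the PATH cssh sees; a sketch, reusing the /path/to/ssh_clone placeholder from above:

        #!/bin/sh
        # Save as "ssh" in e.g. /opt/sshwrap and mark executable;
        # cssh will exec this instead of the real ssh.
        exec /path/to/ssh_clone "$@"

    Then launch with that directory first on the PATH: PATH=/opt/sshwrap:$PATH cssh host1 host2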

    Read the article

  • Two-way replication

    - by Nidzaaaa
    I have a little problem... I have this case: 2 server instances, 2 databases, and 1 table (5 columns). From server 1 I created a publication to replicate all columns of the table in the first DB. From server 2 I created a subscription to pull all columns from that table in the server 1 DB. But now I need to publish one column of the same table from server 2 back to server 1, and it also has to be in the same DB. I tried using the same logic, creating a publication on server 2 and a subscription on server 1, but this error appears: "You have selected the Publisher as a Subscriber and entered a subscription database that is the same as the publishing database. Select another subscription database." I hope someone understands my problem and has an answer for me, thanks in advance... p.s. Ask for more info if you need it...
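    The wizard refuses this because it sees the loop; bidirectional transactional replication is normally scripted instead, with loopback detection so each side skips changes that originated from the other. A hedged T-SQL sketch (publication, server, and database names are placeholders, not the asker's):

        -- Run at the publisher when creating each direction's subscription
        EXEC sp_addsubscription
            @publication = N'ColumnPub',
            @subscriber = N'SERVER1',
            @destination_db = N'SameDb',
            @subscription_type = N'push',
            @loopback_detection = 'true';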

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview
    - With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    - The subscription for Windows Azure, like any other Azure service, is charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    - Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute-node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    - Blob storage is used to hold the input and output files of each job. I can see how parametric-sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure queues.
    - This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a third party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process
    - Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
    - After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    - A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released; all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    - Use the hpcpack command to upload file packages and install files to run on the worker nodes (a sketch follows at the end of this entry). See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements
    - Assumes you have an Azure subscription account and the HPC head node installed and configured.
    - Install HPC Pack 2008 R2 SP1. See Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    - Configure the head node to connect to the Internet; connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    - Configure the firewall to allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.
    - Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    - Obtain a Windows Azure subscription certificate: the Windows Azure subscription must be configured with a public subscription (API) certificate, a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload the .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    - Import the certificate, with its associated private key, on the HPC cluster head node, into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server
    - Required for each worker node template.
    - Copy from the Azure Portal: navigation pane > Hosted Services > Storage Accounts & CDN.
    - Subscription ID: a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, in the Properties pane.
    - Subscription certificate thumbprint: a 40-char hex string (you need to remove spaces), in Management Certificates > Properties pane.
    - Service name: the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net), in Hosted Services > Properties pane.
    - Blob storage account name: the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net), in Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node
    - This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    - Use the Certificates MMC snap-in to import the certificate into the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    - To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    - To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    - After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status.

    Configure a Proxy Client on the HPC Head Node
    - The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template
    - Edit HPC node templates in the HPC Node Template Editor.
    - Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) a worker node availability policy, i.e. rules for deploying/removing worker role instances in Windows Azure.
      - HPC Cluster Manager > Configuration > navigation pane > Node Templates > Actions pane > New (Create Node Template Wizard) or Edit (Node Template Editor).
      - Choose Node Template Type page: Windows Azure worker node template.
      - Specify Template Name page: template name and description.
      - Provide Connection Information page: Azure subscription ID (text) and subscription certificate (browse).
      - Provide Service Information page: Azure service name plus blob storage account name (optionally click Retrieve Connection Information to get the list available from Azure - possible LRT).
      - Configure Azure Availability Policy page: how Windows Azure worker nodes start/stop (bring the worker role instance online/offline, add/remove) - manual or automatic.
      - For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start/stop.
    - To validate the Windows Azure connection information, on the template's Connection Information tab, click Validate connection information.
    - You can upload a file package to the storage account that is specified in the template, e.g. to upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster
    - Use the Add Node Wizard and specify: 1) the worker node template; 2) the number of worker nodes (within the quota of role instances in the Azure subscription); and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    - To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    - All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    - To add Windows Azure worker nodes:
      - In HPC Cluster Manager: Node Management > Actions pane > Add Node (Add Node Wizard).
      - Select Deployment Method page: Add Azure Worker nodes.
      - Specify New Nodes page: select a worker node template, then specify the number and size of the worker nodes.
    - After you add worker nodes to the cluster, they are in the Not-Deployed state and have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    - Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN; this is non-configurable.

    Deploying Windows Azure Worker Nodes
    - To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
    - The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    - Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    - To start worker nodes manually and bring them online:
      - In HPC Node Management > navigation pane > Nodes > List/Heat Map view, select one or more worker nodes.
      - Actions pane > Start; in the Start Azure Worker Nodes dialog, select a node template.
      - The state of the worker nodes changes from Not Deployed; track the provisioning progress in the worker node Details pane > Provisioning Log tab.
      - If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      - After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    - Troubleshooting:
      - Check the node template.
      - Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      - Check node status: deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
    - When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    - To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    - Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
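    As promised above, a hedged sketch of staging a file package with hpcpack; the package name, source path, and template name are placeholders, and the exact switches should be checked against the hpcpack documentation linked above:

        :: Package the application files, then push them to the Azure storage
        :: account associated with the worker node template
        hpcpack create C:\packages\myapp.zip C:\apps\myapp
        hpcpack upload C:\packages\myapp.zip /nodetemplate:"Azure Worker Template"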

    Read the article

  • SQL Authority News – Presenting at SQL Bangalore on May 3, 2014 – Performing an Effective Presentation

    - by Pinal Dave
    SQL Bangalore is a wonderful community and we always have a great response when we present on technology. It is a SQL user group and we discuss everything SQL there. This month we have a SQL Server 2014 theme and we are going to have a community launch on this subject. We have the best of the best speakers presenting on SQL Server 2014 technology. Looking at the whole line-up of celebrity speakers, I have decided not to present on SQL Server. I will be presenting on the performance tuning subject, but with a twist of soft skills: I will be presenting on "Performing an Effective Presentation". Trust me, you do not want to miss this session; I will be presenting on how to present effectively when presenting SQL Server topics. What this session will NOT have: I personally believe that we all are good presenters most of the time. We can all easily call out if someone is a bad presenter. There is no point talking about basics like bigger bullet points, talk loudly, talk with confidence, use better analogies, etc. In simple words, this is not going to be some philosophy session and boring notes. What this session will have: Well, this session will tell stories of my life. It will tell how we can present about technology and SQL Server with the help of stories and personal experience. I am going to tell stories about two legends who have inspired me. Right after that we will do two exercises together where we will learn quickly and effectively how to become a better speaker - instantly! There is no video recording of this session. If you want to get resources from this session, please sign up for my newsletter at http://bit.ly/sqllearn Here are a few of the slides from this presentation. Here are the details about the event and location. Venue: Microsoft Corporation, Signature Building, Embassy Golf Links Business Park, Intermediate Ring Road, Domlur, Bangalore - 560071. The agenda is amazing - we have top-line SQL speakers. Everyone is welcome, and don't forget to bring a friend along to this event. Loads to learn and tons to share!!!
    - Keynote (20 mins) by Anupam Tiwari - Business Program Manager, GTSC
    - Backup Enhancements with SQL Server 2014 by Amit Banerjee - PFE, Microsoft
    - Performance Enhancements with SQL Server 2014 by Sourabh Agarwal - PFE, Microsoft
    - LUNCH BREAK
    - Performing an Effective Presentation by Pinal Dave - Community Member (SQLAuthority.com)
    - InMemory Enhancements with SQL Server 2014 by Balmukund Lakhani - Support Escalation Engg., Microsoft
    - Some more lesser-known enhancements with SQL Server 2014 by Vinod Kumar - Technical Architect, Microsoft MTC
    - Power Packed - Power BI with SQL Server by Kane Conway - Support Escalation Engg., Microsoft
    I am a very big fan of Amit, Balmukund and Vinod - I have always watched their sessions, and this time I am going to once again attend without missing a single minute. They are SQL legends; I am going to be there and learn when they are sharing their knowledge. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL

    Read the article

  • Random DNS Client Issue with BIND9/Windows Server 2003 DNS

    - by upkels
    Within our office, we have a local server running DNS for internal "domains" (e.g. .internal, .office, .lan, .vpn, etc.). Randomly, hosts under those extensions will stop resolving on the Windows-based workstations. Sometimes it'll work for a couple of weeks without issue on one machine and then suddenly stop working; on another it'll happen 15 times per day. It's completely random across workstations. When troubleshooting, I have opened a command prompt and issued various nslookup commands for some of these hosts, and they resolve; however, I've been told that nslookup uses different "libraries" for name resolution than other applications such as web browsers and email clients. The only solution thus far is manually restarting the Windows DNS Client on each workstation when this happens. Issuing ipconfig /flushdns multiple times helps every now and then, but is not successful enough to even attempt before restarting the DNS Client. I have tried two different DNS servers, BIND9 and Windows Server 2003 R2 DNS, and the behavior is the same. We have a single Netgear JGS524 switch that all workstations and servers are connected to within the office, and a Linksys SR224G switch in another department with workstations attached.
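    The equivalent of that manual fix, scriptable per workstation from cmd.exe (dnscache is the service name behind "DNS Client"):

        :: Lighter first step: clear the resolver cache
        ipconfig /flushdns

        :: Full restart of the DNS Client service
        net stop dnscache
        net start dnscache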

    Read the article

  • Windows Server 2003 default administrator password

    - by Jason Baker
    Sorry if this is an overly simplistic question, but I'm a bit stuck here. :) I need a Windows machine to do some programming for class. Since I have my MacBook with me everywhere I go, I figured it would be easiest to install a VM, and since I can get a copy of Windows Server 2003 for free via DreamSpark, I thought I'd try that. Here's what happened, though: I installed Windows Server (disc one). When the system booted up, VMware automatically installed VMware Tools and prompted me to restart. There was also a prompt to start the installation of disc 2, but I figured it would be better to restart before doing that. When the machine came back up, I was prompted to log in as the administrator. The problem is that I wasn't prompted to make an administrator account or password. Is there a default password I can use? I've tried all the obvious ones (blank, password, etc.) and Googling, but I didn't come up with anything.

    Read the article

  • Deploying an MVC 2.0 .Net 4.0 application to Windows Server 2003

    - by Brett Rigby
    I've spent the past couple of nights on this, and have come to the conclusion that I'm doing something wrong! Basically, I have my .NET 4.0 MVC 2.0 application deployed to my 2003 server and it's constantly giving me the following error: The value for the 'compilerVersion' attribute in the provider options must be 'v4.0' or later if you are compiling for version 4.0 or later of the .NET Framework. To compile this Web application for version 3.5 or earlier of the .NET Framework, remove the 'targetFramework' attribute from the <compilation> element of the Web.config file. I also have a .NET 3.5 MVC 1.0 application deployed to the same server (from before the upgrade to .NET 4.0), and it works absolutely fine. I have re-run through Phil Haack's post on IIS6, making the necessary changes for .NET 4.0, but no luck. I have been through the steps in the British Developer's post too, but no joy. I have also tried the steps outlined in Johan's blog post, but the settings he mentions were already enabled. Obviously, I can't simply upgrade to a 2008 box, nor do I want to downgrade my software back to < 4.0 - any other ideas?
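    For comparison, a hedged sketch of the web.config pieces that error message is checking; this assumes .NET 4.0 is installed and registered on the 2003 box, and uses the standard v4 C# provider details:

        <configuration>
          <system.web>
            <!-- must agree with the codedom compilerVersion below -->
            <compilation targetFramework="4.0" />
          </system.web>
          <system.codedom>
            <compilers>
              <compiler language="c#;cs;csharp" extension=".cs"
                        type="Microsoft.CSharp.CSharpCodeProvider, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                <providerOption name="CompilerVersion" value="v4.0" />
              </compiler>
            </compilers>
          </system.codedom>
        </configuration>

    On IIS 6 the site's ASP.NET version mapping (and the v4.0.30319 web service extension) must also point at the 4.0 runtime, or the 2.0 runtime will read the v4.0 attributes and raise exactly this complaint.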

    Read the article

  • VB.Net plugin using Matlab COM Automation Server...Error: 'Could not load Interop.MLApp'

    - by Ben
    My problem: I am using the Matlab COM Automation Server to call and execute Matlab .m files from a VB.Net plugin for a CAD program called Rhino 3D. The code works flawlessly when set up as a simple Windows application in Visual Studio, but when I insert it (and make the requisite reference) into my .Net plugin and test it in the CAD program, I get the following error: "Could not load file or assembly 'Interop.MLApp, Version 1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified." What I've tried: I am baffled as to why this occurs, but I was able to contact the CAD program's technical support staff, and they suggested that it has something to do with their DotNet SDK having trouble with references located far outside the CAD program directory. They didn't have any solutions, so I tried playing around with CopyLocal, and this made no difference. I tried using other COM libraries; the Open Office automation server works fine, although it uses URLs instead of requiring a reference. I also tested Excel, which does require a reference, and it returned the error: "Retrieving the COM class factory for component with CLSID {...} failed due to the following error: 80040154." This may or may not be related to the issue with the Matlab COM reference, but I thought it was worthwhile to share. Perhaps there is another way to reference Interop.MLApp? I would appreciate any suggestions or thoughts on how I might make the Matlab Interop.MLApp reference work. Best regards, Ben
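    One way to sidestep the interop assembly entirely is late-bound COM, which never loads Interop.MLApp; a hedged sketch, not the asker's plugin code ("Matlab.Application" is MATLAB's registered automation ProgID, and Option Strict must be Off for the late-bound calls):

        Module MatlabLateBound
            Sub Main()
                ' Late binding: no reference, so no Interop.MLApp to load
                Dim matlab As Object = CreateObject("Matlab.Application")
                ' Execute returns MATLAB's textual output for the command
                Dim output As String = matlab.Execute("disp(1 + 1)")
                Console.WriteLine(output)
                matlab.Quit()
            End Sub
        End Module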

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question about transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the first transaction log backup in the same folder. On Tuesday, before taking the second transaction log backup, I move the first one from folder1 into folder2, then take the second backup and put it in folder1. Same thing on Wednesday, Thursday, and so on. Basically folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second, and third) so that the new backup only includes the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? I ask because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup. Can anyone please explain if my assumptions are right? Thanks...
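    A hedged illustration of why the file locations don't matter: each log backup continues from the last LSN recorded in the database's log chain, and msdb records the chain, so the sequence can be inspected regardless of where the .trn files live (database and path names here are placeholders):

        -- Take a log backup; SQL Server does not scan folder1/folder2,
        -- it continues from the last backed-up LSN in the log chain
        BACKUP LOG [MyDb] TO DISK = N'D:\folder1\MyDb_log.trn';

        -- Each row's first_lsn should equal the previous row's last_lsn
        SELECT backup_start_date, first_lsn, last_lsn
        FROM msdb.dbo.backupset
        WHERE database_name = 'MyDb' AND type = 'L'
        ORDER BY backup_start_date;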

    Read the article

  • Nature of locks on the child table during deletion (SQL Server)

    - by Mubashar Ahmad
    Dear devs, for a couple of days I have been thinking about the following scenario. Consider that I have 2 tables with a parent-child relationship of the one-to-many kind. On removal of a parent row I have to delete the child rows that are related to the parent. Simple, right? I have to use a transaction scope to do the above operation. I can do this as follows (it's pseudocode, but I am doing this in C# code using an ODBC connection, and the database is SQL Server):

        begin transaction (read committed)
            read all child where child.fk = p1
            foreach (child)
                delete child where child.pk = cx
            delete parent where parent.pk = p1
        commit trans

        OR

        begin transaction (read committed)
            delete all child where child.fk = p1
            delete parent where parent.pk = p1
        commit trans

    Now there are a couple of questions in my mind. 1) Which of the above is better to use, especially considering a real-time system where thousands of other operations (select/update/delete/insert) are being performed within a span of seconds? 2) Does it ensure that no new child with child.fk = p1 will be added until the transaction completes? If yes for the 2nd question, how does it ensure this? Does it take table-level locks or what? 3) Is there any kind of index locking supported by SQL Server? If yes, what does it do and how can it be used? Regards, Mubashar
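    A minimal T-SQL rendering of the second variant as a single round trip (names are placeholders; note that plain READ COMMITTED does not prevent a concurrent insert of a new child with the same foreign key before the commit - a range lock, e.g. SERIALIZABLE or a HOLDLOCK hint on the child range, would be needed for that):

        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

        DECLARE @p1 int;
        SET @p1 = 42;  -- placeholder parent key

        BEGIN TRANSACTION;
            DELETE FROM child  WHERE fk = @p1;
            DELETE FROM parent WHERE pk = @p1;
        COMMIT TRANSACTION;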

    Read the article

  • What is a good embeddable Java LDAP server?

    - by LeedsSideStreets
    I'm working on a Java web application that integrates with a few other external applications that are deployed along with it. Authentication information must be synchronized across everything, and the other applications want to authenticate against LDAP. The application will be deployed in environments where there will be no other LDAP server for everything to use; I have to provide it. My solution so far has been to use Penrose Server as a standalone app, which I set up to examine tables in the main application's database and publish LDAP based on that. It works well, but it would be nice to have something that can be embedded into the main application itself to simplify deployment. It looks like Penrose can be embedded, but the documentation can be a bit spotty or out-of-date (though it seems to be actively developed). It could be an acceptable solution, but if there is another out there that is known to work well in an embedded configuration, I might want to check it out. I'm also concerned about GPL issues with Penrose. I'm not at liberty to GPL the source code for the application. I don't believe it was an issue running it standalone, but embedding it may be a no-no... does anybody know for sure? A permissive license would be good in order to avoid these issues. Requirements: LDAP v3; must be able to have the directory contents updated while running, either programmatically or by another means like syncing with the database as Penrose does; easy to configure (no additional configuration for the app at deployment time would be ideal). So far I've briefly looked at ApacheDS and OpenDS, which seem to be embeddable. Does anyone have experience with this kind of thing?

    Read the article
