Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

  • Different graphic cards drivers while booting from external media

    - by goran
    I am booting a certain system of mine with Ubuntu 9.10 from an external HDD. I am satisfied with the setup and it works fine; however, I would like to modify it so that I can choose which graphics card driver to load at boot time. Specifically, I would like to choose between: the NVIDIA proprietary driver, the ATI proprietary driver, and the generic driver. Currently, if I am using a proprietary driver, I don't boot into X; instead I delete xorg.conf, start gdm and reconfigure the system using jockey (for hardware drivers). What would be the steps to make this (semi-)automatic and avoid restarting X?
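
    One possible direction, sketched below under stated assumptions: the per-driver xorg.conf copies and the boot hook are things you would have to create yourself (for example by running jockey once per machine type and saving the resulting xorg.conf); they are not an existing Ubuntu 9.10 feature. An early boot script can then pick the matching file from lspci output before gdm starts.

      #!/bin/sh
      # Sketch: install the xorg.conf variant matching the detected GPU,
      # run early in boot (e.g. from an init script) before gdm comes up.
      GPU=$(lspci | grep -i 'vga compatible controller')

      case "$GPU" in
        *NVIDIA*)
          cp /etc/X11/xorg.conf.nvidia /etc/X11/xorg.conf ;;
        *ATI*|*AMD*)
          cp /etc/X11/xorg.conf.fglrx /etc/X11/xorg.conf ;;
        *)
          # No proprietary card detected: remove xorg.conf so X autodetects
          # the generic driver.
          rm -f /etc/X11/xorg.conf ;;
      esac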

    Read the article

  • OpenLdap 2.4 on centos 6 doesn't listen on port 636

    - by Oliver Henriot
    I have an OpenLDAP 2.4 server on CentOS 6 whose config I copied from the OpenLDAP 2.3 servers I run on CentOS 5 machines. On OpenLDAP 2.3, specifying TLSCACertificateFile, TLSCertificateFile and TLSCertificateKeyFile with correct values makes the server listen on port 636. This is not the case on the OpenLDAP 2.4 setup. I have configured it with loglevel -1, but I have not seen any clue as to what might be wrong, and the OpenLDAP 2.4 manual doesn't indicate that any of the other TLS-related parameters are now mandatory. I don't think they are, because if I run the service manually, using "# /usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///"", the server does listen on port 636 and I can query it using "ldapsearch -H ldaps://myserver:636". Is there something I am missing to get the server to listen on port 636 without having to launch it manually every time? Is this linked to CentOS 6 or OpenLDAP 2.4? Thank you. Cheers,
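
    One thing worth checking, sketched below on the assumption that the stock CentOS 6 init script is in use: that script builds slapd's -h argument from /etc/sysconfig/ldap, so ldaps:/// has to be enabled there rather than in slapd.conf, which would explain why the manual invocation listens on 636 while the service does not.

      # (run these as root)
      # See which URLs the init script will pass to slapd (variable names as
      # in the stock /etc/sysconfig/ldap; adjust if your file differs).
      grep -E 'SLAPD_(LDAPS|URLS)' /etc/sysconfig/ldap

      # Persist the same URLs the manual invocation used, then restart.
      echo 'SLAPD_URLS="ldap:/// ldaps:/// ldapi:///"' >> /etc/sysconfig/ldap
      service slapd restart

      # Verify slapd is now listening on 636.
      netstat -lnt | grep :636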

    Read the article

  • Redundant APC UPS units, single server set up

    - by Sholom
    Hi all. We have a very simple setup and are looking for a very simple (reliable) solution. Setup: one Dell box with redundant power supplies running Windows 2003, plugged into two separate APC Smart-UPS 1500 units (USB, no smart cards) on two separate circuits. Solution required: IF (UPS1 = Low) AND (UPS2 = Low) THEN shut down gracefully, ELSE do nothing! APCUPSD only allows for one instance (and therefore one UPS) in a Windows environment. PowerChute can't do this without using APC smart cards, which means utilizing our switch; but the switch does not have redundant power supplies, so it will only live for as long as one of the two UPS units. And no, I don't have the budget to buy two smart cards plus a switch with redundancy ;) Thanks

    Read the article

  • Deactivating website in ISPConfig shows another site

    - by Mattias
    A long time ago, one of our clients set up a subdomain pointing to our IP address. We added a website (Sites > Website > Add new website) that points to one of our servers. The project is now closed and the client wants us to remove the content. When we deactivate this site (by unticking Active), it automatically defaults to another website we have in our list (!?). So, because the client is still pointing to our IP, entering project.client.com brings up another client's project by default. How is this possible? Any suggestions? I can of course give you more details once you tell me what details you need. Thanks
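
    The usual mechanism behind this, assuming ISPConfig is managing an Apache server here: when no VirtualHost matches the requested hostname, Apache serves the first vhost it loaded, so a name that no longer has a site of its own falls through to another customer's site. Below is a sketch of a manually added catch-all vhost that sorts first and serves an empty document root (file names and paths are placeholders; check that ISPConfig tolerates a hand-managed vhost alongside its own).

      sudo mkdir -p /var/www/empty
      printf '%s\n' \
        '<VirtualHost *:80>' \
        '    ServerName catchall.invalid' \
        '    DocumentRoot /var/www/empty' \
        '</VirtualHost>' \
        | sudo tee /etc/apache2/sites-available/000-catchall >/dev/null
      sudo a2ensite 000-catchall
      sudo /etc/init.d/apache2 reload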

    Read the article

  • Install wireless router with cable modem - need authentication server?

    - by Paul
    I've bought a wireless router which I'm installing with a Telstra BigPond cable modem for a friend. As part of the setup I've got to a screen requesting a username / password / authentication server for the cable modem. They have contacted Telstra, who supplied the username / password and say that is all they need; they don't know anything about an authentication server. There are a couple of answers up on the Whirlpool forum, found through Google, but those answers are 4 years old. http://forums.whirlpool.net.au/forum-replies-archive.cfm/475258.html http://forums.whirlpool.net.au/forum-replies-archive.cfm/479615.html I haven't tried them yet, as I hoped to get actual answers before trundling over to my friend's house again. Can anyone suggest: How to get information from Telstra support? (I realise this question may be impossible to answer.) What is the authentication server for Telstra BigPond for a user in Sydney, Australia? Are those Whirlpool forum answers still valid? I guess if I don't get anything more here I'll try what it says on Whirlpool and see what happens.

    Read the article

  • OS X 10.8.3 + attempt to change VPN settings = no more VPN access

    - by nicole
    I am running Mountain Lion and had gotten very tired of re-entering my password at random times when using my school's VPN network (I don't know much about these, but the type is Cisco IPSec according to the setup instructions I followed a while back). In an attempt to make life easier, I followed the instructions here, but, alas, any attempt to connect with VPN was met with the message "A configuration error has occurred. Verify your settings and try connecting again" (or something along those lines.) I then tried to do the steps in the blog post in reverse and change everything back. Upon (supposedly) doing that, though, a new error message came when attempting to connect to VPN: "The negotiation with the VPN server failed. Verify the server address and try reconnecting." Now I have no idea what to do. Is there a way to reset all VPN-related things in my system so that I can follow my school's instructions and just start over?

    Read the article

  • virtual box upgrade

    - by Husni
    I upgraded VirtualBox from 4.1 to 4.2. Whenever I want to load my Win XP vdi, it gives me the following error: "Kernel driver not installed (rc=-1908) The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a permission problem with /dev/vboxdrv. Please reinstall the kernel module by executing '/etc/init.d/vboxdrv setup' as root. If it is available in your distribution, you should install the DKMS package first. This package keeps track of Linux kernel changes and recompiles the vboxdrv kernel module if necessary." I ran the suggested step to reinstall the kernel module, and the log file is as follows (the same line is repeated three times): Makefile:181: * Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR= and run Make again. Stop. I am still unable to run my Win XP vdi file. Anyone have a clue?
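
    A minimal sketch of the usual fix, assuming a Debian/Ubuntu host (package names differ on other distributions): the Makefile error means the headers for the running kernel are not installed, so the vboxdrv module cannot be rebuilt.

      sudo apt-get update
      sudo apt-get install -y dkms linux-headers-$(uname -r)

      # Re-run the step the error message suggests, now that the kernel
      # headers are present.
      sudo /etc/init.d/vboxdrv setup

      # Verify the module is loaded before starting the VM again.
      lsmod | grep vboxdrv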

    Read the article

  • Qnap won't connect to Windows Share

    - by thetrashcan
    I have a QNAP NMP-1000 on my network and would like to stream my films from my Win7 laptop to it. I just managed to do so with UPnP sharing, but for security reasons I would like to share my files over a password-protected shared folder instead. My problem is that when I search for devices on my network from the QNAP, it won't find any, and when I try to mount a remote disk on the QNAP it just fails with a "connection failed" message. Yet when I connect to the QNAP device from my laptop, it does so successfully. Can someone guide me through getting my setup working?

    Read the article

  • Laptop connecting to Wifi but not to internet

    - by eddard stark
    My friend's laptop is able to connect to the wifi router; typing 192.168.1.1 in the browser shows the login page for the router, but he cannot connect to the internet. This is true on both Windows and Linux (dual-boot setup). There are 3 other laptops connecting to the internet via wifi just fine, and his was fine too until this happened all of a sudden. I tried doing a tracert from Windows to an external IP. The first hop to the modem is fine, but then the packets seem to be getting dropped. If his wifi adapter is damaged, how is it connecting to the modem via wifi? I haven't asked a question here before, but this is really weird. If anyone needs any more information I shall post it here.
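
    A quick diagnostic sketch (run from the Linux side of the dual boot; the Windows equivalents are ipconfig, ping and nslookup) to separate "router reachable", "internet routable" and "DNS broken", which narrows down whether the laptop or the router is at fault:

      ip route | grep default    # confirm the default gateway is the router
      ping -c 3 192.168.1.1      # router reachable (already known to work)
      ping -c 3 8.8.8.8          # raw connectivity past the router
      ping -c 3 google.com       # name resolution on top of connectivity
      # If 8.8.8.8 fails while the router still answers, the router/modem is
      # dropping this machine's traffic (e.g. a MAC filter or a stale DHCP
      # lease) rather than the wifi adapter being broken.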

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a “production-ready” state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and “still favored tools I’d have been embarrassed to use in the ’80s”. DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the “mother ship”. I sat blinking in astonishment at the speaker’s vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent.

    Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else?

    If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don’t conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL Standards where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema.

    One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who’ve attempted to apply a standard query that works on MySQL, or some other database, but doesn’t produce the expected results on SQL Server are advised to shun the Standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our heart should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And it should sink further still when we read notes in the Microsoft documentation informing us that we shouldn’t rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
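
    For illustration, a small sketch of the two approaches the piece contrasts (wrapped in sqlcmd; the server, database and table names are placeholders): a guard clause written against the Standard INFORMATION_SCHEMA views versus the vendor-specific catalog route the documentation steers developers towards.

      sqlcmd -S myserver -d mydb -Q "
      IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                     WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Orders')
          PRINT 'missing, per the SQL Standard metadata views';

      IF OBJECT_ID(N'dbo.Orders', N'U') IS NULL
          PRINT 'missing, per the SQL Server-specific catalog';
      "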

    Read the article

  • Backups of Exchange 2007 SP3 using VSS are abnormally large

    - by Stew
    I have recently implemented Veeam backup and recovery 6.0, and have noticed that when backing up my Exchange server via incremental updates, it transfers far more data than expected. The backup is incremental and set up to use VSS, and VSS is stable and healthy according to vssadmin. Exchange 2007 SP3 is running on Windows Server 2008 R2; just last weekend I installed the latest rollup for Exchange. I thought the nightly incrementals were large, but perhaps my users really are sending that much mail, so I tested by taking one incremental backup, waiting 10 minutes, and taking a second. The second incremental backup transferred 5.8GB of data. We as an organization are absolutely NOT putting 5.8GB of data on the mail server every 10 minutes. Are there any other Veeam users who have seen something similar? Is my test flawed? Are there other considerations for VSS?

    Read the article

  • New-ActiveSyncMailboxPolicy "not implemented" on Exchange 2007 SP3

    - by Flo
    If I run New-ActiveSyncMailboxPolicy Test directly in PowerShell, it asks me if I'm sure, and if so, it does what it should. But if I try the same from my example code in C#, I get an error saying that "the current host does not implement it". Other commands like Set-CASMailbox or Get-ActiveSyncMailboxPolicy work just fine, both in PowerShell and in my application. The Exchange Server / Windows Server 2008 R2 and domain are all set up completely new (test environment). Is there a way to make this possible?

    Read the article

  • What is the command to use to put your computer to sleep (not hibernate)?

    - by airrick
    I want to put my Windows PC (Win7) into a sleep state via the command line (so I can bind it to a macro button on my keyboard). The power button on the PC is set up to put the computer to sleep (but it's down on the floor and I'm too lazy to reach down), and it does exactly what I want (it sleeps using hybrid mode in case I lose power). The Sleep command on the shutdown menu also works. Most info I found says to use: %windir%\system32\rundll32.exe PowrProf.dll, SetSuspendState 0,1,0 But this puts the computer in hibernate mode. I do have hibernate disabled but am using hybrid sleep. So, what is the command to use to put your computer to sleep (not hibernate)?

    Read the article

  • Server 2008 R2 Remote Desktop Gateway Role and IIS7

    - by user137466
    I am attempting to set up an RD Gateway for a client. When I first set it up I noticed that IIS did not have the 'Default Web Site', so I created it, assigned it an ID of 1, and set the bindings to ports 80 and 443. I then reinstalled the RD Gateway role with the idea that it would then configure IIS correctly. It did not. How would I go about making sure a reinstall of the Remote Desktop Gateway role configures IIS correctly? I cannot reinstall IIS, as there is a site already on there that I cannot take down.

    Read the article

  • mailman not relaying email to external address

    - by gozzilli
    I have a setup of Mailman with Postfix on an Ubuntu Server 12.04. My problem is that mailing list emails are not forwarded to email addresses external to my institution. However, the initial welcome email is received by everyone, internally and externally; in fact, a simple email sent from the command line with mail is successfully delivered to anyone. After that, mailing list emails are only forwarded to internal addresses. The domain name I'm using for the server is not that of my institution, which is hosting the server. Here is my main.cf:

      myorigin = sub.myinstitution.tld
      mynetworks = 127.0.0.0/8 xxx.xxx.xxx.xxx/16  # this is my institution ip range
      relayhost = smtp.myinstitution.tld
      inet_interfaces = loopback-only
      local_transport = error:local delivery is disabled
      virtual_alias_maps = hash:/etc/postfix/virtual
      smtpd_recipient_restrictions = permit_mynetworks
      myhostname = mywebsite.tld
      mydestination = $myhostname, localhost.$mydomain, localhost

    I also found these two links on Server Fault and the Ubuntu forums, but neither of these solutions seems to do the trick for me. Any help would be much appreciated.
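
    A diagnostic sketch, assuming the stock Ubuntu Postfix/Mailman layout: since direct mail to external addresses works but list posts don't, the useful question is whether those posts ever leave the Postfix queue and what the institutional relay says about them when they do.

      mailq                               # anything stuck for external recipients?
      sudo tail -n 100 /var/log/mail.log  # rejections from smtp.myinstitution.tld show up here
      sudo postconf -n | grep -E 'relayhost|myorigin|mynetworks'

      # If the relay rejects the list's envelope sender (listname-bounces@...),
      # Mailman records the failure here as well.
      sudo tail -n 100 /var/log/mailman/smtp-failure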

    Read the article

  • can not connect via SSH to a remote Postgresql database

    - by tartox
    I am trying to connect via the pgAdmin3 GUI to a PostgreSQL database on a remote server myHost on port 5432. Server side: I have a Unix user myUser that matches a PostgreSQL role. pg_hba.conf is:

      local   all   all                 trust
      host    all   all   127.0.0.1/32  trust

    Client side: I open an ssh tunnel: ssh -L 3333:myHost:5432 myUser@myHost and connect to the server via pgAdmin3 (or via psql -h localhost -p 3333). I get the following error message: "server closed the connection unexpectedly. This probably means the server terminated abnormally before or while processing the request." I have tried to access a specific database with the superuser role using psql -h localhost -p 3333 --dbname=myDB --user=mySuperUser, with no more success. What did I forget in the setup? Thank you
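
    A first thing to try, sketched below (a likely cause rather than a certain diagnosis): the target of -L is resolved on the remote side, so 3333:myHost:5432 may reach PostgreSQL on an interface it is not listening on (listen_addresses defaults to localhost), while pg_hba.conf only trusts 127.0.0.1 anyway. Tunnelling to the server's own loopback keeps the final hop local on the server.

      # Forward the local port to the *server's* loopback instead of its
      # public name.
      ssh -L 3333:localhost:5432 myUser@myHost

      # Then, from the client machine:
      psql -h localhost -p 3333 --dbname=myDB --user=mySuperUser

      # On the server, confirm which addresses PostgreSQL actually listens on:
      netstat -lnt | grep 5432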

    Read the article

  • Apache Subversion and Sudo - Why can't I resolve this hostname?

    - by Hollowsteps
    Okay, I made a mistake and I'll be the first to admit I'm new at this setup. I built a bare-bones kit, installed Ubuntu on it, and attempted to set up a source control server for a project some friends and I were going to work on. Unfortunately, I screwed up. I followed a dodgy tutorial from 2005 and, when it didn't work, started mixing and matching trying to get to the source of my problem. So now I sit before you, a broken and miserable man. Desperate to escape this annoying echo of 'Unable to resolve host computer.repositoryname.com', I uninstalled Apache and Subversion. That did not fix it. Next I tried to edit my /etc/hosts, going so far as to remove the reference to '127.0.1.1 computername'. Still I'm plagued. I know I messed up; is there any way to track down this wayward bug?
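
    A minimal sketch of the usual cause, assuming Ubuntu defaults: the "Unable to resolve host" message (typically printed by sudo) appears whenever the name stored in /etc/hostname has no matching entry in /etc/hosts, so removing the 127.0.1.1 line makes things worse, not better. Putting a matching entry back usually silences it; the hostnames below are taken from the error message, so adjust them to whatever /etc/hostname actually contains.

      cat /etc/hostname      # e.g. computer.repositoryname.com

      # Restore a loopback alias that matches that name exactly.
      echo '127.0.1.1   computer.repositoryname.com computer' | sudo tee -a /etc/hosts

      sudo -v                # should no longer print the warning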

    Read the article

  • How to connect Android phone to a Wifi network using PPPoE?

    - by Slavo
    I have an ISP at home, which provides me with a PPPoE connection. My router supports that and I've configured it to autoconnect periodically, so I don't have to type my username and password each time. When I connect to the Wireless router from the PC, I have internet and everything works fine. However, when I do so using my Android phone, there's no internet connection on the phone. It connects to the router, but I cannot open any web page. How can I enable internet access from such an ISP on my phone? Is it something in the router setup? The router is Linksys WRT54GL.

    Read the article

  • When to use Nginx PHP Fast CGI with a TCP socket instead of a UNIX socket?

    - by user64204
    I've followed this guide to set up PHP in FastCGI mode with Nginx. The guide describes 2 ways of doing it: a TCP socket and a UNIX socket. I've run some ApacheBench tests on my local machine and here are the results (each test was run multiple times to get better average statistics):

      $ ab -c 200 -n 100000 http://....
      APACHE:              1800 req/sec
      NGINX (TCP socket):  2500 req/sec
      NGINX (UNIX socket): 15000 req/sec

    As far as I understand, there is overhead in using a TCP socket rather than a UNIX socket, hence the better performance with the latter. However, I was not expecting such a performance difference given that the TCP socket is on localhost, and I would therefore like to ask the following question: Q: Given the huge performance gain with using a UNIX socket, what are the configuration scenarios where it would make sense to use a TCP socket instead?
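
    For reference, a sketch of the two fastcgi_pass variants being compared (the socket path, host and port are placeholders, not taken from the guide); the TCP form is what you need as soon as PHP-FPM runs on a different host from Nginx, or behind several load-balanced backends, which a UNIX socket cannot do.

      # In the nginx PHP location block, one of:
      #   fastcgi_pass unix:/var/run/php-fpm.sock;   # UNIX socket, same host only
      #   fastcgi_pass 127.0.0.1:9000;               # TCP socket, also works across hosts
      # with a matching "listen" line in the PHP-FPM pool configuration.
      sudo nginx -t && sudo service nginx reload      # validate and apply after editing
      ab -c 200 -n 100000 http://localhost/index.php  # re-run the comparison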

    Read the article

  • Lingering database-connections from Feng Office

    - by Bobby
    I've installed Feng Office on our main server and it is working perfectly so far. Unfortunately, there seems to be a problem with the connection to the MySQL database. While the connection itself works fine, it's the reuse/pooling of connections which seems to be buggy: there are lingering/sleeping connections to the server from Feng Office which won't close and don't get reused, even after some time (120 seconds). Of course those lingering processes/connections pile up pretty fast. I've found a thread on the forums about this behaviour, but the suggested fix is already applied (by default). I'm sure this is just a configuration issue, but I'm a little clueless, because besides a MediaWiki, a DokuWiki and some homebrewed PHP applications, Feng is the only one with this issue. The setup is a Microsoft Windows 2003 Server with MySQL 5.0.26 and Apache 2.2. Where can I start looking for clues as to why this is happening, and how do I get rid of lingering MySQL connections?
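
    A place to start looking, sketched as shell commands (the same MySQL statements work from a Windows command prompt): see which user owns the sleeping connections and how long the server will keep them before dropping them itself.

      mysql -u root -p -e "SHOW FULL PROCESSLIST;"               # Sleep entries and their Time column
      mysql -u root -p -e "SHOW VARIABLES LIKE 'wait_timeout';"  # default is 28800 seconds
      # If the sleepers belong to Feng Office's database user, the usual
      # suspects are persistent connections on the PHP side (mysql_pconnect /
      # mysql.allow_persistent in php.ini) or a wait_timeout far larger than
      # the expected 120 seconds.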

    Read the article

  • Creating a custom NAS compatible with the Mac Time machine and for media streaming

    - by Bobby Alexander
    I am planning to assemble a custom NAS machine using an Intel Atom processor. I need the NAS for the following purposes: it should be accessible from my Windows PC so that I can dump data onto the NAS (installations, media etc.); it should be accessible from my MacBook for the same use; I should be able to use it with the Mac Time Machine software for backup; the media should be available to my PS3 for streaming; and I should be able to access it from my iPhone. All of the above should be available over wireless. The Time Machine feature is very important. Is this even possible? Can someone provide resources on how I can assemble such a machine and set up the required software on it? Much appreciated.
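
    It is possible; one plausible software stack, sketched below under the assumption of an Ubuntu/Debian base on the Atom board (the package names are the usual ones for those distributions, so verify them against your release): Samba covers the Windows and Mac file dumps, Netatalk provides the AFP share Time Machine expects, and a DLNA server handles PS3 streaming.

      sudo apt-get update
      # samba    - SMB shares for the Windows PC (the MacBook can mount these too)
      # netatalk - AFP shares; Time Machine additionally needs the share marked
      #            as a backup target in the Netatalk volume configuration
      # minidlna - lightweight DLNA/UPnP server the PS3 can browse for streaming
      sudo apt-get install -y samba netatalk minidlna
      # iPhone access is usually covered by an SMB/WebDAV client app on the
      # phone rather than by an extra server-side package.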

    Read the article

  • Copy-and-Pasted Test Code: How Bad is This?

    - by joshin4colours
    My current job is mostly writing GUI test code for various applications that we work on. However, I find that I tend to copy and paste a lot of code within tests. The reason for this is that the areas I'm testing tend to be similar enough to need repetition but not quite similar enough to encapsulate code into methods or objects. I find that when I try to use classes or methods more extensively, tests become more cumbersome to maintain and sometimes outright difficult to write in the first place. Instead, I usually copy a big chunk of test code from one section and paste it to another, and make any minor changes I need. I don't use more structured ways of coding, such as using more OO principles or functions. Do other coders feel this way when writing test code? Obviously I want to follow DRY and YAGNI principles, but I find that test code (automated test code for GUI testing anyway) can make these principles tough to follow. Or do I just need more coding practice and a better overall system of doing things? EDIT: The tool I'm using is SilkTest, which uses a proprietary language called 4Test. These tests are mostly for Windows desktop applications, but I have also tested web apps using this setup.

    Read the article

  • How can I create a pen drive that I can boot from, and then install Win98 from?

    - by rossmcm
    I have an HP Compaq t5515 thin client computer with a flash disk and USB port. I want to put Win98 onto it, replacing whatever is on there now (I think it is some Linux-based thing). I can find material about putting Win98 onto a pen drive and running from that, but I can't find any info about installing Windows 98 from a pen drive onto a separate system. Is it just a matter of: making the pen drive bootable to DOS, copying the contents of a Win98 installation CD onto the pen drive, booting the HP machine from the pen drive, and running SETUP.EXE from the pen drive? Any pointers appreciated. TIA

    Read the article

  • Disk quota problem in Windows Server SBS 2003

    - by deddebme
    I have got a new job and the existing SBS 2003 domain setup is insecure (i.e. everyone is a domain admin, etc.). There are lots of problems due to the inexperienced "network admin", and I am trying to fix them one by one. There is one issue I find quite weird: the "Quota" tab exists for the C: (NTFS) drive but not the D: (NTFS) drive. I played around with gpedit to enable disk quotas (it was "not configured" before), but I still can't see that tab. Have you seen this problem before? How did you solve it?

    Read the article

  • Releasing software/Using Continuous Integration - What do most companies seem to use?

    - by Sagar
    I've set up our continuous integration system, and it has been working for about a year now. We have finally reached a point where we want to do releases using it as well. Before our CI system, the processes that were used were:

    1. (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag
    2. (Develop) -> Ready for release -> (Build -> Fix bugs) Loop -> Tag

    Our CI setup: 1 server for development (DEV), 1 server for qa/release (QA).

    The second process has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first process is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build (jobs on the CI server use the branch for QA/RELEASE, and HEAD for development builds). However, if I use the tag to build, I have to either create a new CI job to build from the tag (and lose the changelog on the server) or change the existing job (and lose the original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: [if at all] what process does your company use to release software using continuous integration systems? Is it even done using the CI system, or manually?
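
    One common compromise, sketched below on the assumption of a Subversion-style repository (the question talks about HEAD, branches and tags) and a CI job that accepts a parameter: keep a single job per release line and make it reproducible by always exporting the tag it is given, rather than hard-coding a branch or tag URL into the job. The repository URL and build step below are placeholders.

      #!/bin/sh
      # build-release.sh <tag-name>
      TAG="${1:?usage: build-release.sh <tag-name>}"
      REPO="http://svn.example.com/project"

      rm -rf build-src
      svn export "$REPO/tags/$TAG" build-src   # immutable input -> reproducible build
      cd build-src && make release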

    Read the article
