Search Results

Search found 8589 results on 344 pages for 'pre production'.


  • Unicorn_init.sh cannot find app root on capistrano cold deploy

    - by oFca
    I am deploying a Rails app, and upon running cap deploy:cold I get an error:

        * 2012-11-02 23:53:26 executing `deploy:migrate'
        * executing "cd /home/mr_deployer/apps/prjct_mngr/releases/20121102225224 && bundle exec rake RAILS_ENV=production db:migrate"
        servers: ["xxxxxxxxxx"]
        [xxxxxxxxxx] executing command
        command finished in 7464ms
        * 2012-11-02 23:53:34 executing `deploy:start'
        * executing "/etc/init.d/unicorn_prjct_mngr start"
        servers: ["xxxxxxxxxx"]
        [xxxxxxxxxx] executing command
        ** [out :: xxxxxxxxxx] /etc/init.d/unicorn_prjct_mngr: 33: cd: can't cd to /home/mr_deployer/apps/prjct_mngr/current;
        command finished in 694ms
        failed: "rvm_path=$HOME/.rvm/ $HOME/.rvm/bin/rvm-shell '1.9.3-p125@prjct_mngr' -c '/etc/init.d/unicorn_prjct_mngr start'" on xxxxxxxxxx

    But my app root is there! Why can't it find it? Here's part of my unicorn_init.sh file:

        1  #!/bin/sh
        2  set -e
        3  # Example init script, this can be used with nginx, too,
        4  # since nginx and unicorn accept the same signals
        5
        6  # Feel free to change any of the following variables for your app:
        7  TIMEOUT=${TIMEOUT-60}
        8  APP_ROOT=/home/mr_deployer/apps/prjct_mngr/current
        9  PID=$APP_ROOT/tmp/pids/unicorn.pid
        10 CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
        11 # INIT_CONF=$APP_ROOT/config/init.conf
        12 AS_USER=mr_deployer
        13 action="$1"
        14 set -u
        15
        16 # test -f "$INIT_CONF" && . $INIT_CONF
        17
        18 old_pid="$PID.oldbin"
        19
        20 cd $APP_ROOT || exit 1
        21
        22 sig () {
        23   test -s "$PID" && kill -$1 `cat $PID`
        24 }
        25
        26 oldsig () {
        27   test -s $old_pid && kill -$1 `cat $old_pid`
        28 }
        29 case $action in
        30
        31 start)
        32   sig 0 && echo >&2 "Already running" && exit 0
        33   $CMD
        34   ;;
        35
        36 stop)
        37   sig QUIT && exit 0
        38   echo >&2 "Not running"
        39   ;;
        40
        41 force-stop)
        42   sig TERM && exit 0
        43   echo >&2 "Not running"
        44   ;;
        45
        46 restart|reload)
        47   sig HUP && echo reloaded OK && exit 0
        48   echo >&2 "Couldn't reload, starting '$CMD' instead"
        49   $CMD
        50   ;;
        51
        52 upgrade)
        53   if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
        54   then
        55     n=$TIMEOUT
        56     while test -s $old_pid && test $n -ge 0
        57     do
        58       printf '.' && sleep 1 && n=$(( $n - 1 ))
        59     done
        60     echo
        61
        62     if test $n -lt 0 && test -s $old_pid
        63     then
        64       echo >&2 "$old_pid still exists after $TIMEOUT seconds"
        65       exit 1
        66     fi
        67     exit 0
        68   fi
        69   echo >&2 "Couldn't upgrade, starting '$CMD' instead"
        70   $CMD
        71   ;;
        72
        73 reopen-logs)
        74   sig USR1
        75   ;;
        76
        77 *)
        78   echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>"
        79   exit 1
        80   ;;
        81 esac
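
    One detail in the log above may matter: the path in the failed cd ends with a literal semicolon (.../current;). That is what happens when the unquoted $CMD at line 33 is word-split but never re-parsed, so cd receives "current;" as its argument rather than seeing two commands. A minimal sketch of one fix, reusing the paths from the question (the extra existence check is an assumption, since deploy:cold can call start before the current symlink exists on some setups):

        #!/bin/sh
        # Sketch: let a subshell re-parse the command string so the
        # separator inside $CMD works as intended, and fail loudly if
        # the current symlink has not been created yet.
        APP_ROOT=/home/mr_deployer/apps/prjct_mngr/current
        CMD="cd $APP_ROOT && bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"

        if [ ! -d "$APP_ROOT" ]; then
            echo >&2 "$APP_ROOT does not exist -- has deploy created the symlink?"
            exit 1
        fi
        sh -c "$CMD"    # instead of the bare $CMD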


  • Provisioning SIP Phones over the internet

    - by Jorge Fernandez
    I have a few SIP phones that are located off-site and connect to my PBX over the internet to make calls. For some reason, one of these phones has become unprovisioned. In my office, phones get provisioned by the server via TFTP. The ones that are off-site I pre-provisioned manually before I sent them out (I'm in Florida; the phone is in New Jersey). What's the best way to provision these over the internet? TFTP is very insecure, and sending plain-text profiles with the SIP account and password over the internet is out of the question. The phones have been off-site for about six months without any issues. I'm using Trixbox and Cisco 7940 phones.
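
    One common pattern for exactly this constraint is to keep TFTP as-is but move it inside a site-to-site VPN, so profiles never cross the internet in the clear. A bare-bones OpenVPN server stub, sketched with placeholder ports, paths and subnets (none of these values come from the question):

        # /etc/openvpn/server.conf -- illustrative only
        port 1194
        proto udp
        dev tun
        ca   /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key  /etc/openvpn/server.key
        dh   /etc/openvpn/dh2048.pem
        server 10.8.0.0 255.255.255.0   # tunnel subnet the phones provision over
        keepalive 10 120
        persist-key
        persist-tun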


  • Filter sub-categories like in layered navigation

    - by russjman
    I created a new template file, catalog/category/list.phtml, to display all sub-categories of the current category. I have layered navigation, which displays sub-categories as one of the filters, and I want this new template to work with those filters as well. Right now, when I click a sub-category filter, it filters all products on the page but still displays all categories of the parent category. $_filters is how I am trying to access the filters, but I get nothing. Is there something I am not initializing correctly to have access to these filters from the layered navigation?

        <?php
        $_helper  = $this->helper('catalog/output');
        $_filters = $this->getActiveFilters();
        echo $_filters;
        if (!Mage::registry('current_category')) return
        ?>
        <?php $_categories = $this->getCurrentChildCategories() ?>
        <?php $_count = is_array($_categories) ? count($_categories) : $_categories->count(); ?>
        <?php if($_count): ?>
        <?php foreach ($_categories as $_category): ?>
        <?php if($_category->getIsActive()): ?>
        <?php
            $cur_category = Mage::getModel('catalog/category')->load($_category->getId());
            $layer = Mage::getSingleton('catalog/layer');
            $layer->setCurrentCategory($cur_category);
            $_imgHtml = '';
            if ($_imgUrl = $this->getCurrentCategory()->getImageUrl()) {
                $_imgHtml = '<img src="'.$_imgUrl.'" alt="'.$this->htmlEscape($_category->getName()).'" title="'.$this->htmlEscape($_category->getName()).'" class="category-image" />';
                $_imgHtml = $_helper->categoryAttribute($_category, $_imgHtml, 'image');
            }
            echo $_category->getImageUrl();
        ?>
        <div class="category-image-box">
            <div class="category-description clearfix" >
                <div class="category-description-textbox" >
                    <h2><span><?php echo $this->htmlEscape($_category->getName()) ?></span></h2>
                    <p><?php echo $this->getCurrentCategory()->getDescription() ?></p>
                </div>
                <a href="<?php echo $this->getCategoryUrl($_category) ?>" class="collection-link<?php if ($this->isCategoryActive($_category)): ?> active<?php endif ?>" >See Entire Collection</a>
                <a href="<?php echo $this->getCategoryUrl($_category) ?>"><?php if($_imgUrl): ?><?php echo $_imgHtml ?><?php else: ?><img src="/store/skin/frontend/default/patio_theme/images/category-photo.jpg" class="category-image" alt="collection" /><?php endif; ?></a>
            </div>
            <?php echo '<pre>'.print_r($_category->getData()).'</pre>'; ?>
        </div>
        <?php endif; ?>
        <?php endforeach ?>
        <?php endif; ?>


  • rsync - How to exclude one .htaccess but not all of them

    - by Cory Gagliardi
    I have an rsync command for copying my files from dev to production. I don't want to copy the .htaccess file that's in the root of the HTML directory, but I do want to copy the few .htaccess files that are in its subdirectories. I'm using the argument --exclude .htaccess, which is stopping all of the .htaccess files from getting copied. The other arguments I'm including are -a --recursive --times --perms. Is it possible to configure rsync to do this? Edit: Here is my full command:

        rsync -a --recursive --times --perms \
            --exclude prop_images --exclude tracking --exclude vtours \
            --exclude .htaccess --exclude .htaccess_backup --exclude "*~" \
            /home/user/dev_html/* /home/user/public_html/
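
    For what it's worth, rsync anchors an exclude pattern to the top of the transfer when it begins with a slash. A hedged sketch of how that could look here (note the source is written dev_html/ rather than dev_html/* so the transfer root is the directory itself; also, -a already implies --recursive, --times and --perms):

        # Everything copies as before, but only the top-level .htaccess
        # is skipped; .htaccess files in subdirectories still transfer.
        rsync -a \
            --exclude prop_images --exclude tracking --exclude vtours \
            --exclude /.htaccess --exclude .htaccess_backup --exclude "*~" \
            /home/user/dev_html/ /home/user/public_html/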


  • How can I see logs in a server after a kernel panic hang?

    - by Low Kian Seong
    I am running a production Gentoo Linux machine, and recently the server hung at my co-location premises; when I got there, I noticed it was stuck on what appeared to be a kernel panic. I rebooted the machine with a hard reboot and was disappointed to find that I could not find a shred of evidence anywhere of why the machine hung. Is it true that with a hard reboot the messages themselves get lost, or is there a setting somewhere, say in syslog-ng or maybe in sysctl, to preserve the error log so that I can prevent such mishaps in the future? I am running a 2.6.x kernel, by the way. Thanks in advance.
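
    A panic that takes down the I/O path rarely makes it into the on-disk logs, so the usual approach is to ship console output off the box while it is dying. A netconsole sketch (all addresses, the interface and the MAC are placeholders, not from the question):

        # Stream kernel console messages over UDP to another host.
        # Format: netconsole=src-port@src-ip/dev,dst-port@dst-ip/dst-mac
        modprobe netconsole netconsole=6666@192.168.1.10/eth0,6666@192.168.1.20/00:11:22:33:44:55

        # On the receiving host, just listen:
        #   nc -u -l 6666

        # Raise the console log level so panic traces are fully emitted.
        dmesg -n 8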


  • AXFR problem using gradwell secondary DNS

    - by Roaders
    Hi All

    I use Gradwell.com to provide secondary DNS, but I keep getting e-mails along the lines of the following, saying that it's not working:

        You have asked us to provide a secondary DNS service for the following domain(s)
        Unfortunately, the primary DNS server(s) you specified are not permitting the necessary zone transfers from our servers, or they are not answering "SOA" queries for your domain correctly.

    I have gone through the support procedure and they weren't that helpful. They suggested the following:

        Our secondline team have suggested setting the AXFR to use another machine. This will ensure that the transfer is not locked down to one machine and should allow any machine to make the request

    I don't really know what AXFR is, and I only have one production machine, so I can't set the AXFR to use another one! In previous support correspondence we confirmed that I am allowing transfers to the correct IP and that I have the correct ports open on the firewall. I am running Windows Server 2003. What can I do to try and get these zone transfers working? Thanks
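
    For background, AXFR is the DNS zone-transfer query type, and both failure modes named in Gradwell's e-mail can be reproduced from any machine with dig (the zone and the primary's address below are placeholders; substitute the real ones):

        # Does the primary answer SOA queries for the zone?
        dig @203.0.113.1 example.com SOA +norecurse

        # Will it hand the whole zone to this client? This is what the
        # secondary attempts; it should succeed only from allowed IPs.
        dig @203.0.113.1 example.com AXFR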


  • Can I change a MySQL table back and forth between InnoDB and MyISAM without any problems?

    - by Daniel Magliola
    I have a site with a decently big database: 3 GB in size, with a couple of tables holding a dozen million records each. It's currently 100% MyISAM, and I have the feeling that the server is going slower than it should because of too much locking, so I'd like to try moving to InnoDB and see if that makes things better. However, I need to do that directly in production, because obviously without load this doesn't make any difference. I'm a bit worried, though, because InnoDB actually has the potential to be slower, so the question is: if I convert all tables to InnoDB and it turns out I'm worse off than before, can I go back to MyISAM without losing anything? Can you think of any problems I might encounter? (For example, I know that InnoDB stores all data in ONE big file that only gets bigger; can this be a problem?) Thank you very much, Daniel
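
    For reference, the conversion is a plain table rebuild in both directions, so it is reversible in principle; the sketch below uses placeholder database and table names. One caveat worth handling before the first conversion: without innodb_file_per_table, all InnoDB data lands in a shared ibdata1 file that never shrinks, even after converting back to MyISAM.

        # Convert one table to InnoDB and, if needed, back again. Each
        # ALTER rebuilds (and locks) the table, which takes a while on
        # tables with a dozen million rows.
        mysql -e "ALTER TABLE mydb.mytable ENGINE=InnoDB;"
        mysql -e "ALTER TABLE mydb.mytable ENGINE=MyISAM;"

        # my.cnf, set before converting anything:
        #   [mysqld]
        #   innodb_file_per_table = 1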


  • Can I install Windows OS (Windows 7) on a removable USB hard drive?

    - by Hemant
    I wanted to take a sneak peek at Windows 7, so I thought of installing it. My laptop came pre-installed with Windows Vista and I didn't want to mess with that, so I created a 20 GB partition on my external USB hard disk and tried to install Windows 7 on it. But when I booted from the Windows 7 DVD and selected the target partition on the USB hard disk, it said Windows cannot be installed there. Is there any way to install Windows on an external USB hard disk?


  • HOWTO Catch/Redirect all outgoing e-mails from webapp on Windows Server 2003

    - by John
    As suggested by another member, I have split the original post into two. To see the original post, go to http://serverfault.com/questions/134595/howto-catch-redirect-all-outgoing-e-mails-on-win2k-and-redhat-enterprise. For this question, please keep your answers specific to Windows Server 2003 only. Thanks for the help in advance. Background: I am integrating two separate web applications, developed in ASP.NET and JSP/Struts. As such, they are hosted on two different server platforms, namely Win2K3 and Red Hat Enterprise Server 5.5. Problem: There is a copy of production data in my test environment with real e-mail addresses. I need to test the e-mail functionality of these applications, but I do not want them to send out actual e-mails. Is there a way to catch and redirect all outgoing e-mails? Ideally, I would like to send all outgoing e-mails to a single address (i.e., [email protected]) so my testers can look at them.
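
    A sketch of one way this is commonly done in test environments (not from the question, and assuming Python happens to be installed on, or reachable from, the test hosts): point both applications' SMTP settings at a dummy listener that accepts everything and delivers nothing.

        # Catch-all SMTP sink: accepts any message and dumps it to
        # stdout instead of delivering it. Point the Win2K3 and RHEL
        # apps' SMTP host settings at the machine running this.
        python -m smtpd -n -c DebuggingServer 0.0.0.0:25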


  • Benefits of log rotation

    - by Manfred Moser
    I have been using log rotation for years and never thought much of it until I came across a question on Stack Overflow (http://stackoverflow.com/questions/1508734/disable-java-log-rotation/) where someone wants to disable log rotation. Having had build servers, and even production servers, cleaned up manually because logs were not rotated, disks ran out of space, and machines suddenly came to a halt, disabling it seems crazy to me; but it occurred to me that maybe the benefits are not so obvious after all. So what are the benefits of log rotation? And what are the drawbacks (e.g. is debugging/analysis more difficult, maybe)? What tools do you find useful for working with rotated log files? Splunk, I assume, but what else?
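
    For concreteness, the knobs involved live in a logrotate stanza like the following (path and numbers invented for illustration); the trade-off discussed above is mostly the rotate count versus the disk budget:

        # /etc/logrotate.d/myapp -- illustrative only
        /var/log/myapp/*.log {
            daily
            rotate 14          # keep two weeks of history for post-mortems
            compress
            delaycompress      # keep yesterday's log uncompressed for tailing
            missingok
            notifempty
            copytruncate       # rotate without app support for reopening logs
        }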


  • How to check use of userva boot option on Win 2K3 server

    - by Tim Sylvester
    I have some 32-bit Win2K3 servers running an application that fails now and then, apparently due to heap fragmentation (process virtual bytes grow, private bytes do not). I do not have access to the source code or build process of this application. I have modified the boot.ini file on one of these servers to include /userva=2560, halfway between the normal mode of operation and the /3GB option. Normally it takes weeks to reach the point of failure, but I'd like to see right away whether this has actually had any effect. As I understand it, this option limits the kernel to the remaining address space (1536 MB instead of 2048), but does not necessarily give an application the extra address space; that depends on the flags in the application's PE header. How can I determine whether the OS is allowing a particular application, running in production, to access address space above 2 GB? Additionally, what's the best way to monitor the system to ensure that the kernel is not starved for address space, and more generally how should I go about finding the optimal value for this setting?
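
    On the first question, whether a given binary may use the enlarged range depends on the LARGEADDRESSAWARE flag in its PE header, which can be checked without source access; on the second, the Memory\Free System Page Table Entries performance counter is the usual canary for a kernel running short of address space. A sketch (paths are placeholders; dumpbin ships with Visual Studio):

        :: Does the executable opt in to >2GB of user address space?
        :: Look for "Application can handle large (>2GB) addresses".
        dumpbin /headers C:\path\to\app.exe | findstr /i "large"

        :: Sample kernel address-space headroom once a minute; watch
        :: the trend while the app runs toward its failure point.
        typeperf "\Memory\Free System Page Table Entries" -si 60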


  • NT4 server generates too many weird DNS queries. How can I see the source PID?

    - by Hanan N.
    I have an NT4 server that in the last two weeks has started to generate a lot of weird DNS queries to the DNS server it is set to use. I have received warnings from the IPS system that it has blocked the responses from the DNS server back to the NT4 server. The queries it generates don't relate to any computer in the network; they look like 120624100088.xxxxxxx.net, where xxx is the internal domain and the numbers are random in each query. I have done some research on how to find the PID that is generating the queries, and found that only Process Monitor could give me that information; but since this is an NT4 system, Process Monitor doesn't run on it. It is a production server, so I can't just stop services as I like. I would like your advice: how can I find the PID that is generating these queries? Thanks.


  • How do I programmatically send email w/attachment to a known recipient using MAPI in C++? MAPISendMail

    - by Tim
    This question is similar, but does not show how to add a recipient. How do I do both? We'd like the widest support possible, for as many Windows platforms as possible (from XP and greater). We're using Visual Studio 2008. Essentially we want to send an email with a pre-filled destination address, a file attachment, and a subject line from our program, and give the user the ability to add any information or cancel it.

    EDIT: I am trying to use MAPISendMail(). I copied much of the code from the questions linked near the top, but I get no email dialog box, and the error return I get from the call is 0x000f - "The system cannot find the drive specified". If I comment out the lines that set the recipient, it works fine (of course, then I have no recipient pre-filled in). Here is the code:

        #include <tchar.h>
        #include <windows.h>
        #include <mapi.h>
        #include <mapix.h>

        int _tmain( int argc, wchar_t *argv[] )
        {
            HMODULE hMapiModule = LoadLibrary( _T( "mapi32.dll" ) );
            if ( hMapiModule != NULL )
            {
                LPMAPIINITIALIZE lpfnMAPIInitialize = NULL;
                LPMAPIUNINITIALIZE lpfnMAPIUninitialize = NULL;
                LPMAPILOGONEX lpfnMAPILogonEx = NULL;
                LPMAPISENDDOCUMENTS lpfnMAPISendDocuments = NULL;
                LPMAPISESSION lplhSession = NULL;
                LPMAPISENDMAIL lpfnMAPISendMail = NULL;

                lpfnMAPIInitialize = (LPMAPIINITIALIZE)GetProcAddress( hMapiModule, "MAPIInitialize" );
                lpfnMAPIUninitialize = (LPMAPIUNINITIALIZE)GetProcAddress( hMapiModule, "MAPIUninitialize" );
                lpfnMAPILogonEx = (LPMAPILOGONEX)GetProcAddress( hMapiModule, "MAPILogonEx" );
                lpfnMAPISendDocuments = (LPMAPISENDDOCUMENTS)GetProcAddress( hMapiModule, "MAPISendDocuments" );
                lpfnMAPISendMail = (LPMAPISENDMAIL)GetProcAddress( hMapiModule, "MAPISendMail" );

                if ( lpfnMAPIInitialize && lpfnMAPIUninitialize && lpfnMAPILogonEx && lpfnMAPISendDocuments )
                {
                    HRESULT hr = (*lpfnMAPIInitialize)( NULL );
                    if ( SUCCEEDED( hr ) )
                    {
                        hr = (*lpfnMAPILogonEx)( 0, NULL, NULL, MAPI_EXTENDED | MAPI_USE_DEFAULT, &lplhSession );
                        if ( SUCCEEDED( hr ) )
                        {
                            // this opens the email client
                            // create the msg. We need to add recipients AND subject AND the dmp file

                            // file attachment
                            MapiFileDesc filedesc;
                            ::ZeroMemory(&filedesc, sizeof(filedesc));
                            filedesc.nPosition = (ULONG)-1;
                            filedesc.lpszPathName = "E:\\Development\\Open\\testmail\\testmail.cpp";

                            // recipient(s)
                            MapiRecipDesc recip;
                            ::ZeroMemory(&recip, sizeof(recip));
                            recip.lpszName = "QA email";
                            recip.lpszAddress = "[email protected]";

                            // the message
                            MapiMessage msg;
                            ::ZeroMemory(&msg, sizeof(msg));
                            msg.lpszSubject = "Test";
                            msg.nRecipCount = 1; // if I comment out this line it works fine...
                            msg.lpRecips = &recip;
                            msg.nFileCount = 1;
                            msg.lpFiles = &filedesc;

                            hr = (*lpfnMAPISendMail)(0, NULL, &msg, MAPI_LOGON_UI|MAPI_DIALOG, 0);
                            if ( SUCCEEDED( hr ) )
                            {
                                hr = lplhSession->Logoff( 0, 0, 0 );
                                hr = lplhSession->Release();
                                lplhSession = NULL;
                            }
                        }
                    }
                    (*lpfnMAPIUninitialize)();
                }
                FreeLibrary( hMapiModule );
            }
            return 0;
        }


  • Various problems with software raid1 array built with Samsung 840 Pro SSDs

    - by Andy B
    I am bringing to ServerFault a problem that has been tormenting me for 6+ months. I have a CentOS 6 (64-bit) server with an md software RAID-1 array of 2 x Samsung 840 Pro SSDs (512 GB). Problems:

    Serious write speed problems:

        root [~]# time dd if=arch.tar.gz of=test4 bs=2M oflag=sync
        146+1 records in
        146+1 records out
        307191761 bytes (307 MB) copied, 23.6788 s, 13.0 MB/s

        real    0m23.680s
        user    0m0.000s
        sys     0m0.932s

    When doing the above (or any other larger copy), the load spikes to unbelievable values (even over 100), going up from ~1. While doing the above I've also noticed very weird iostat results:

        Device:  rrqm/s  wrqm/s    r/s     w/s  rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
        sda        0.00 1589.50   0.00   54.00    0.00 13148.00   243.48     0.60   11.17   0.46   2.50
        sdb        0.00 1627.50   0.00   16.50    0.00  9524.00   577.21   144.25 1439.33  60.61 100.00
        md1        0.00    0.00   0.00    0.00    0.00     0.00     0.00     0.00    0.00   0.00   0.00
        md2        0.00    0.00   0.00 1602.00    0.00 12816.00     8.00     0.00    0.00   0.00   0.00
        md0        0.00    0.00   0.00    0.00    0.00     0.00     0.00     0.00    0.00   0.00   0.00

    And it keeps this up until it actually writes the file to the device (out of swap/cache/memory). The problem is that the second SSD in the array has svctm and await roughly 100 times larger than the first.

    For some reason, the wear differs between the two members of the array:

        root [~]# smartctl --attributes /dev/sda | grep -i wear
        177 Wear_Leveling_Count 0x0013 094 094 000 Pre-fail Always - 180
        root [~]# smartctl --attributes /dev/sdb | grep -i wear
        177 Wear_Leveling_Count 0x0013 070 070 000 Pre-fail Always - 1005

    The first SSD has a wear of 6% while the second has a wear of 30%!! It's as if the second SSD in the array works at least five times as hard as the first, as shown by the first iteration of iostat (the averages since reboot):

        Device:  rrqm/s  wrqm/s     r/s    w/s   rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
        sda       10.44   51.06  790.39 125.41  8803.98 1633.11    11.40     0.33   0.37   0.06   5.64
        sdb        9.53   58.35  322.37 118.11  4835.59 1633.11    14.69     0.33   0.76   0.29  12.97
        md1        0.00    0.00    1.88   1.33    15.07   10.68     8.00     0.00   0.00   0.00   0.00
        md2        0.00    0.00 1109.02 173.12 10881.59 1620.39     9.75     0.00   0.00   0.00   0.00
        md0        0.00    0.00    0.41   0.01     3.10    0.02     7.42     0.00   0.00   0.00   0.00

    What I've tried:

    - I've updated the firmware to DXM05B0Q (following reports of dramatic improvements for 840 Pros after this update).
    - I have looked for "hard resetting link" in dmesg to check for cable/backplane issues, but found nothing.
    - I have checked the alignment, and I believe the partitions are aligned correctly (1 MB boundary; listing below).
    - I have checked /proc/mdstat, and the array is optimal (second listing below).

        root [~]# fdisk -ul /dev/sda

        Disk /dev/sda: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00026d59

        Device Boot      Start       End    Blocks  Id System
        /dev/sda1         2048   4196351   2097152  fd Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sda2 *    4196352   4605951    204800  fd Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sda3      4605952 814106623 404750336  fd Linux raid autodetect

        root [~]# fdisk -ul /dev/sdb

        Disk /dev/sdb: 512.1 GB, 512110190592 bytes
        255 heads, 63 sectors/track, 62260 cylinders, total 1000215216 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003dede

        Device Boot      Start       End    Blocks  Id System
        /dev/sdb1         2048   4196351   2097152  fd Linux raid autodetect
        Partition 1 does not end on cylinder boundary.
        /dev/sdb2 *    4196352   4605951    204800  fd Linux raid autodetect
        Partition 2 does not end on cylinder boundary.
        /dev/sdb3      4605952 814106623 404750336  fd Linux raid autodetect

        root # cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdb2[1] sda2[0]
              204736 blocks super 1.0 [2/2] [UU]
        md2 : active raid1 sdb3[1] sda3[0]
              404750144 blocks super 1.0 [2/2] [UU]
        md1 : active raid1 sdb1[1] sda1[0]
              2096064 blocks super 1.1 [2/2] [UU]
        unused devices: <none>

    Running a read test with hdparm:

        root [~]# hdparm -t /dev/sda
        /dev/sda:  Timing buffered disk reads: 664 MB in 3.00 seconds = 221.33 MB/sec

        root [~]# hdparm -t /dev/sdb
        /dev/sdb:  Timing buffered disk reads: 288 MB in 3.01 seconds = 95.77 MB/sec

    But look what happens if I add --direct:

        root [~]# hdparm --direct -t /dev/sda
        /dev/sda:  Timing O_DIRECT disk reads: 788 MB in 3.01 seconds = 262.08 MB/sec

        root [~]# hdparm --direct -t /dev/sdb
        /dev/sdb:  Timing O_DIRECT disk reads: 534 MB in 3.02 seconds = 176.90 MB/sec

    Both numbers increase, but /dev/sdb doubles while /dev/sda increases by maybe 20%. I just don't know what to make of this. As suggested by Mr. Wagner, I've done another read test, with dd this time, and it confirms the hdparm test:

        root [/home2]# dd if=/dev/sda of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 38.0855 s, 282 MB/s

        root [/home2]# dd if=/dev/sdb of=/dev/null bs=1G count=10
        10+0 records in
        10+0 records out
        10737418240 bytes (11 GB) copied, 115.24 s, 93.2 MB/s

    So sda is three times faster than sdb. Or maybe sdb is also doing something else besides what sda does. Is there some way to find out if sdb is doing more than sda does?

    UPDATE: Again as suggested by Mr. Wagner, I have swapped the two SSDs. And as he thought would happen, the problem moved from sdb to sda. So I guess I'll RMA one of the SSDs. I wonder if the cage might be problematic. What is wrong with this array? Please help!
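
    On the closing question (is sdb doing more work than sda): the drives' own counters can answer that independently of the kernel's view. Samsung's 840 series exposes SMART attribute 241, Total_LBAs_Written, so comparing its growth across the two members, alongside a live per-device view, would look roughly like this (device names as in the question):

        # Lifetime host writes per member; in a healthy RAID-1 both
        # numbers should grow at about the same rate.
        smartctl -A /dev/sda | grep -i total_lbas
        smartctl -A /dev/sdb | grep -i total_lbas

        # Live per-device view of what the block layer sends each disk,
        # refreshed every second.
        iostat -xm sda sdb 1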


  • http server connectivity puzzle

    - by jpmartins
    I have been seeing a strange connection issue in the production environment. The setup has two IBM HTTP Servers (IHS) and a network IP load balancer in front of them (round-robin). One moment the system is working fine; the next, requests stop arriving at the IHS. Telnet directly to port 80 of the IHS connects successfully, but a connection to port 80 through the IP of the load balancer fails! The puzzle comes next: the network admins say the load balancer is working fine. When we finally reboot the IHS servers, requests start flowing... The situation has happened three times in the last month, and no obvious pattern was found. Any debugging ideas?
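
    Next time it happens, it may be worth capturing what actually arrives at the IHS boxes before rebooting them: if the balancer's SYNs show up but are never answered, the problem is server-side (for example, a full listen queue) rather than in the balancer. A sketch, assuming a Linux-ish toolchain on or beside the IHS hosts (interface name is a placeholder):

        # Are the balancer's forwarded SYNs reaching this host at all?
        tcpdump -ni eth0 'tcp port 80 and tcp[tcpflags] & tcp-syn != 0'

        # Is the listen backlog overflowing? (these counters grow if so)
        netstat -s | grep -i listen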


  • Windows 8 sleep, blank screen on resume

    - by Waqar Hameed
    I use a Sony VAIO laptop (VPCS116FA) that came pre-installed with Windows 7, which I have upgraded to Windows 8. The experience has been good so far, but I am facing a frustrating problem: whenever I put the computer to sleep, it won't wake up and I have to restart it. Note that this does not affect hibernation, which works perfectly fine; however, hibernation is not a perfect alternative to sleep. I have tried the method described here to resolve the problem, but in vain: http://www.thulasidas.com/2009-03/blank-screen-after-hibernatesleep.htm I really need a solution to this problem. Please suggest whatever you think might resolve it.
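
    Two stock commands that usually narrow this kind of wake failure down, run from an elevated prompt (no third-party tools needed):

        :: List which sleep states the firmware and drivers support.
        powercfg /a

        :: 60-second trace that flags drivers/devices interfering with
        :: sleep/resume; writes an HTML report (energy-report.html).
        powercfg /energy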


  • Windows authenticated users have lost access to master (default) database

    - by Rob Nicholson
    Something very strange has occurred on our production SQL database: users connecting via Windows authentication appear to have lost all access to the master database. By default, all logins have their default database set to master, so when they connect using SQL Server Management Studio, they get the error "Cannot open user default database. Login failed. Error 4064". What's also worrying is that we have a group called "COMPANY - SQL Administrator" which has sysadmin rights, and users in this group get the same error. Worse, they don't appear to be system administrators anymore... If they change their default database to something else, they can connect and then work on the database; it's just the master database that is problematic. I'm not even sure by what mechanism Windows-authenticated users get access to the master database. Is it something hard-coded, or some property that has been changed? Any ideas? Cheers, Rob.
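
    For reference, a hedged sketch of the usual recovery (server, login and database names below are placeholders): connect while overriding the broken default database, then repoint the login so future connections succeed.

        # -d overrides the default database for this one session;
        # the ALTER LOGIN then fixes it permanently for that login.
        sqlcmd -E -S myserver -d tempdb -Q "ALTER LOGIN [DOMAIN\someuser] WITH DEFAULT_DATABASE = tempdb;"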


  • Upgrade a legacy, expired RHEL 3.4.6 server to modern Centos/Scientific Linux

    - by Gabriel Tasiopoulos
    I have a machine running a legacy J2EE app. The code is not Maven-ized, and it works with pretty old Java and Postgres versions. I have converted the machine to a VM in ESXi, and I'd like to try upgrading it to a modern, binary-compatible version of RHEL (CentOS or Scientific Linux) and see if things still work. Where should I start? Am I being too optimistic with this one? It's more of an experiment, and I'm not doing it on a production machine, but given how old the OS is, I am looking for a way to do this eventually. Many thanks


  • Fewer reboots on Windows Server Core: is this true or just a myth?

    - by Peter Hahndorf
    Because fewer components are installed on a Windows Server Core OS, it needs fewer patches than the full OS, and I have read in several places that it therefore needs fewer reboots after patching. I have been running Server 2012 Core in production since September 2012, and I don't remember a single Patch Tuesday when I did not have to reboot the server after installing Windows updates. Are there any hard numbers out there that compare the required reboots for Core vs. the full OS? Fewer reboots may be the main reason people choose Server Core; if it actually requires just as many reboots as the full install, they may think again the next time they set up a server.


  • Why is MySQL making the CPU run at about 80%?

    - by Robert
    MySQL is eating up about 80% of my CPU for no reason, as far as I can see. Right now this server is rarely used; it's more of a test site I set up that will eventually be used for production once I fix small problems like this. I run three instances of MySQL, but it seems that my first instance is taking up all the CPU. When I turn off the first instance and leave the other two on, everything runs fine. Any suggestions? I tried SHOW PROCESSLIST, and no statements are being run besides "Sleep" and the SHOW PROCESSLIST query itself (obviously) at the time it's using up all this CPU. my.cnf is basic; I did not optimize or change any MySQL settings. Do you think that could cause such strange behavior? The machine is running Linux CentOS 5.7 64-bit and MySQL 5.0.95. Thanks
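
    When the process list is idle but one mysqld still burns CPU, per-thread CPU usage and a before/after diff of the status counters usually point at the culprit. A sketch, with the socket path standing in for whatever instance 1 actually uses:

        # Which mysqld threads are burning CPU? (-H shows threads;
        # adjust pgrep so it picks the first instance's mysqld)
        top -H -p "$(pgrep -f mysqld | head -1)"

        # Snapshot the counters twice, 60 s apart, and diff to see
        # what is churning while the CPU is pegged.
        mysql -S /path/to/instance1.sock -e "SHOW GLOBAL STATUS" > s1.txt
        sleep 60
        mysql -S /path/to/instance1.sock -e "SHOW GLOBAL STATUS" > s2.txt
        diff s1.txt s2.txt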


  • Grub os-prober showing deleted Windows installation

    - by Sanjay
    I recently purchased a Dell Vostro notebook with Ubuntu Netbook Edition 10.04 pre-installed. I tried adding a partition and installing Windows XP, but it didn't work out because there were already too many partitions in the system. I have now restored the laptop to its factory setting using the Dell Utility partition, and the Windows partition is completely deleted; however, grub2 still shows Windows on /dev/sda2:

        sudo os-prober
        Microsoft Windows XP Professional (on /dev/sda2)

    Any ideas how to remove it from grub? I know I can remove /etc/grub.d/30_os-prober, but I am more interested in why os-prober is showing the deleted partition.
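
    os-prober keys off boot-sector and filesystem signatures, so a stale entry keeps coming back until either the probe is disabled or the leftover signature on /dev/sda2 is gone. Two hedged options; the dd line is destructive, so only use it if that partition really is unused:

        # Option 1: stop grub-mkconfig from probing foreign OSes at all.
        sudo chmod -x /etc/grub.d/30_os-prober
        sudo update-grub

        # Option 2: zap the leftover boot signature on /dev/sda2
        # (ONLY if the partition is truly unused -- this destroys it).
        sudo dd if=/dev/zero of=/dev/sda2 bs=512 count=1
        sudo update-grub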


  • Writing data onto a Linux live DVD

    - by stanleyxu2005
    I have a server machine with a DVD writer. I want to burn a Linux live DVD (openSUSE preferred) with a pre-configured web server, so that after booting, the web server is ready to serve. The web server has a sqlite database (with very little data), but after rebooting the system, all data in the database is lost. Is it possible to store the necessary data on the live DVD as well? If it were a USB drive, I would create two partitions and mount the second partition read-write, but I have no idea how to create two partitions on a DVD. Any hint is appreciated
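
    A DVD burned as a live image cannot be remounted read-write the way a USB stick can, so one common sketch is to keep the OS read-only on the disc and point only the mutable bits at whatever writable medium the box has (a small USB key or a local disk partition). The label and paths below are invented for illustration:

        # At boot (e.g. from an init script baked into the live image),
        # mount a writable device by label and give the sqlite file a home.
        mkdir -p /var/lib/webapp
        mount LABEL=WEBDATA /var/lib/webapp
        # The web server is then configured to use
        # /var/lib/webapp/app.sqlite instead of a path on the read-only DVD.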


  • Server 2008 unresponsive after SP2 install.

    - by Dan
    I have a dev server that has an exact image of a production web server; the prod server has only SP1 installed on it. When I first fired up the dev box, the first thing I did was install SP2 and let it be. Almost every morning when I came in, the server was unusable: it would respond to ping, but RDP and the web site running on it were down. On the screen, the screen saver was bouncing around, so it wasn't hard-locked, but it was unresponsive to keyboard and mouse. So I had to shut it down hard, and when it came back up, the only thing in the event viewer was the unexpected shutdown, nothing else. I've since taken a fresh image of my prod box and put it on the dev server without installing SP2, and the dev box is humming along perfectly. I should also note that this is Server 2008 Web edition, 64-bit. Has anyone else seen anything like this?


  • Why can't I open my Access application in design mode?

    - by mmyers
    I have been given an Access 2007 application (mainly VB code) that I need to modify. It has been locked down for production, so the toolbars and so forth are not visible. However, it is a .mdb file, not .mde, so in theory it should be possible to get into design mode by holding Shift while opening it. But that method has only worked a total of three times out of the (probably) 60 or 70 times I've tried. I realize now that I should have enabled the toolbars while I had it open, but unfortunately hindsight doesn't get me anywhere now. Does anyone know what might be causing the problem? Is it my own fault, or the application's, or Access's?

