Search Results

Search found 20852 results on 835 pages for 'intellij idea'.


  • Can VMWare Server 2.0 be useful in Production for easing backups?

    - by Keith Sirmons
    Howdy, let's run this idea by the group here. I am thinking about using VMware Server in production to host: a 2008 Domain Controller with DHCP and DNS; a 2008 member server with WSUS, antivirus software, and other "management" utilities; and a second 2008 member server with SQL, IIS, and file shares, for a medium business of 50-100 desktops. The reason I am leaning toward Server vs. ESXi is for backup purposes. Using ESXi, if I want to back up the VMs, I would need a second server in the office with enough storage available to hold a copy of the vmdks. I am wondering if putting this virtual environment on top of a basic 2008 Server install will allow for easier backups to tape and/or to offsite storage using JungleDisk. Can a snapshot be triggered easily via a scheduled job? I know this doesn't necessarily handle file-level restores, but I want to make sure that in a DR situation we can restore production servers quickly. Does this concept hold water? Would a very minimal install of the 2008 host take too many resources away from the actual production machines? This would be a new Dell 410 server with 12 GB RAM, (6) 600 GB 15K drives in RAID 6, and dual Intel Xeon 2.26 GHz processors.
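
    A minimal sketch of what a scheduled snapshot could look like with the vmrun utility that ships with VMware Server 2.0, run from Task Scheduler or cron on the host. The web-service URL, credentials, datastore name "[standard]", and .vmx path are placeholders, not values from the question:

        vmrun -T server -h https://localhost:8333/sdk -u admin -p secret snapshot "[standard] dc01/dc01.vmx" nightly-backup

    The resulting snapshot (or a vmdk copy taken while the VM is quiesced) could then be picked up by the tape/JungleDisk job; whether that is robust enough for DR is exactly the judgement call the question is asking about.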

    Read the article

  • XAMPP - Apache service stops running after a few seconds

    - by Fábio Antunes
    Hello, I have a big problem with my XAMPP server: for some reason the Apache service stops running a few seconds after it has been started, and I have no idea what the problem is; the error logs don't say much about it.

        [Fri May 07 01:09:32 2010] [notice] Digest: generating secret for digest authentication ...
        [Fri May 07 01:09:32 2010] [notice] Digest: done
        [Fri May 07 01:09:33 2010] [notice] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations
        [Fri May 07 01:09:33 2010] [notice] Server built: Nov 11 2009 14:29:03
        [Fri May 07 01:09:33 2010] [crit] (22)Invalid argument: Parent: Failed to create the child process.
        [Fri May 07 01:09:33 2010] [crit] (OS 6)O identificador é inválido. : master_main: create child process failed. Exiting.
        [Fri May 07 01:09:33 2010] [notice] Parent: Forcing termination of child process 36

    ("O identificador é inválido" (pt_PT) = "The identifier is invalid.") Note: no other application is using the Apache port. I have made some changes to the httpd.conf file (added some virtual hosts, enabled Xdebug), but it still worked well for a long time after that. Has this happened to anyone who could tell me what the problem is? Thanks for your time.
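
    One low-risk first step, sketched below and assuming the default C:\xampp install path (adjust to your installation), is to check the configuration syntax after the virtual host and Xdebug changes, then run Apache in single-process debug mode from a console so the child-process error is printed to the screen instead of only to error.log:

        cd C:\xampp\apache\bin
        httpd.exe -t
        httpd.exe -X

    httpd.exe -t validates the config; httpd.exe -X keeps Apache in the foreground as a single process, which often surfaces the module or DLL that is failing to load.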

    Read the article

  • SQL Server 2008: unique problem bringing a DB online...

    - by Nai
    This is the error I am facing:

        TITLE: Microsoft.SqlServer.Smo
        Set offline failed for Database 'Go3D_Retailer'
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf".
        Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
        Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space.
        See the SQL Server errorlog for details.
        ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)

    Background to this error: I've been trying to move my log-shipping destination database to another physical server for analysis purposes. Because I do not have Active Directory set up, I had to hack my process by using the same username/password on both the source and destination servers to get it to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E: drive on my server and have no idea why it's trying to open a file from E:. I have over 100 GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas? I'm running SQL Server 2008 Enterprise Edition on Windows Server 2008 R2 64-bit.
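
    A sketch of one way to repoint the orphaned full-text catalog file while the database is still offline. This assumes sqlcmd and Windows authentication on the local instance; the logical file name ftrow_Go3D_catalog and the D:\SQLData target path are guesses, so check sys.master_files for the real logical name and pick a path that actually exists on the new server:

        sqlcmd -S . -E -Q "SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('Go3D_Retailer');"
        sqlcmd -S . -E -Q "ALTER DATABASE Go3D_Retailer MODIFY FILE (NAME = ftrow_Go3D_catalog, FILENAME = 'D:\SQLData\ftrow_Go3D_catalog.ndf');"
        sqlcmd -S . -E -Q "ALTER DATABASE Go3D_Retailer SET ONLINE;"

    The E:\ path in the error most likely comes from the source server's file layout travelling along with the restored metadata, so pointing that file somewhere that exists locally is the usual way to get the database online.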

    Read the article

  • Cannot join additional domain controllers

    - by Hosm
    Hi all, I had a dead PDC and another, not fully synced, domain controller for my domain. Using the advice in the comments here (link), the so-called secondary domain controller has now seized the domain roles, and I can verify from dsa.msc that it is a domain controller. I set up another server (Windows 2003 SRV) and am about to promote it as an additional domain controller for my domain. When I try to join the new domain controller to the domain, I run into a DNS problem. Here is some more detail:

        DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain DOMNAME.A.B:
        The query was for the SRV record for _ldap._tcp.dc._msdcs.DOMNAME.A.B
        The following domain controllers were identified by the query:
        update.DOMNAME.A.B
        Common causes of this error include:
        - Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
        - Domain controllers registered in DNS are not connected to the network or are not running.
        For information about correcting this problem, click Help.

    It is worth noting that update.DOMNAME.A.B is the current domain controller, to which I'd like to add another controller named pdc.DOMNAME.A.B. The IP address of update.DOMNAME.A.B is 192.168.200.1 and of pdc.DOMNAME.A.B is 192.168.200.100. Querying DNS on both machines returns correct results. Any idea?
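
    A sketch of the checks worth running from the new server before retrying dcpromo (standard Windows 2003 tooling; dcdiag comes from the Support Tools, and the names below simply mirror the placeholders in the question):

        nslookup -type=SRV _ldap._tcp.dc._msdcs.DOMNAME.A.B
        nslookup update.DOMNAME.A.B
        dcdiag /s:update.DOMNAME.A.B /test:dns
        ipconfig /registerdns

    If the SRV lookup succeeds but the A record for update.DOMNAME.A.B resolves to the wrong address (stale records left behind by the dead PDC are a common leftover after seizing roles), fixing or re-registering that record on the DNS server the new machine points at is usually enough.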

    Read the article

  • yum install php-devel amongst other commands returning problems

    - by user3791722
    I run yum install php-devel and it returns this. Typically I'd just run it with --skip-broken, but when I do, it still doesn't do the trick.

        Available: php-common-5.3.3-22.el6.x86_64 (rhel-x86_64-server-6)
            php-common(x86-64) = 5.3.3-22.el6
        Available: php-common-5.3.3-23.el6_4.x86_64 (rhel-x86_64-server-6)
            php-common(x86-64) = 5.3.3-23.el6_4
        Available: php-common-5.3.3-26.el6.x86_64 (rhel-x86_64-server-6)
            php-common(x86-64) = 5.3.3-26.el6
        Available: php54w-common-5.4.29-2.w6.x86_64 (webtatic)
            php-common(x86-64) = 5.4.29-2.w6
        Available: php54w-common-5.4.30-1.w6.x86_64 (webtatic)
            php-common(x86-64) = 5.4.30-1.w6
        Available: php55w-common-5.5.13-2.w6.x86_64 (webtatic)
            php-common(x86-64) = 5.5.13-2.w6
        Installing: php55w-common-5.5.14-1.w6.x86_64 (webtatic)
            php-common(x86-64) = 5.5.14-1.w6
        You could try using --skip-broken to work around the problem

    When run with --skip-broken it returns this at the end:

        Packages skipped because of dependency problems:
            autoconf-2.63-5.1.el6.noarch from rhel-x86_64-server-6
            automake-1.11.1-4.el6.noarch from rhel-x86_64-server-6
            pcre-devel-7.8-6.el6.x86_64 from rhel-x86_64-server-6
            php-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-cli-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-common-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-mysql-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-pdo-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php-soap-5.3.3-27.el6_5.1.x86_64 from rhel-x86_64-server-6
            php55w-cli-5.5.14-1.w6.x86_64 from webtatic
            php55w-common-5.5.14-1.w6.x86_64 from webtatic
            php55w-devel-5.5.14-1.w6.x86_64 from webtatic

    This problem has arisen with a few other similar commands when installing something related to PHP, but I've just done without those packages. I need to install this one for something I'm trying to do. I do remember upgrading to PHP 5.4 and our entire infrastructure coming down because it required PHP 5.3, so I downgraded as quickly as possible to get everything running again; that may contribute to the issue. If you have any idea why this is happening and how I could get the package on the system while remaining on PHP 5.3, please let me know. Thanks.
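
    Since the conflict is between the RHEL php 5.3 packages and the Webtatic php55w packages, a sketch of a way to test that theory is to leave the Webtatic repository out of the transaction (this assumes the repo id is literally "webtatic", as it appears in parentheses in the output; check with yum repolist):

        yum repolist
        yum --disablerepo=webtatic install php-devel

    If that installs cleanly, a more permanent fix along the same lines would be setting enabled=0 or adding an exclude=php55w* line in the Webtatic .repo file under /etc/yum.repos.d/, so the 5.5 packages stop competing with the stock 5.3 ones.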

    Read the article

  • Install Linux Mint on PC without bootable CD

    - by crosenblum
    Unfortunately my PC's CD drive is not bootable; I have such a mixture of SATA and IDE drives that until I have more money to redo my controller setup, I can't boot from any CD. Currently I have a DVD burned with the latest version of Linux Mint, and a USB drive with an old version of Mint. I have a partition ready to install Linux Mint into, but no idea how to install it, since I can only boot to my hard drive. I am totally unable to boot from CD, so that is definitely out. My main partition is WinXP Pro SP3. Is there software I can use to format my Linux partition, so that I can then just copy Mint over to that partition? Or is there a better way to install Linux Mint? I have to do it from within Windows XP, since that's all I can boot right now. I have considered Mint4Win, but that doesn't allow a full installation of Linux Mint. Any ideas?

    Read the article

  • Apache2 - mod_rewrite : RequestHeader and environment variables

    - by Guillaume
    I am trying to get the value of the request parameter "authorization" and store it in the request header "Authorization". The first rewrite rule works fine. In the second rewrite rule, the value of $2 does not seem to be stored in the environment variable; as a consequence, the request header "Authorization" is empty. Any idea? Thanks.

        <VirtualHost *:8010>
            RewriteLog "/var/apache2/logs/rewrite.log"
            RewriteLogLevel 9
            RewriteEngine On
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) http://<ip>:<port>/$1&authorization=@$2@$3 [L,P]
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) - [E=AUTHORIZATION:$2,NE]
            RequestHeader add "Authorization" "%{AUTHORIZATION}e"
        </VirtualHost>

    I need to handle several cases, because sometimes the parameters are in the path and sometimes they are in the query string, depending on the user. This last case fails; the header value for AUTHORIZATION looks empty.

        # if the query string includes the authorization parameter
        RewriteCond %{QUERY_STRING} ^(.*)authorization=@(.*)@(.*)$
        # keep the value of the parameter in the AUTHORIZATION variable and redirect
        RewriteRule ^/(.*) http://<ip>:<port>/ [E=AUTHORIZATION:%2,NE,L,P]
        # add the value of AUTHORIZATION in the header
        RequestHeader add "Authorization" "%{AUTHORIZATION}e"

    Read the article

  • Slow VM on ESXi 4.1

    - by user57432
    We have a 64-bit FreeBSD VM running on ESXi 4.1; the hardware platform is a Dell R710 with 2 x 56xx (Intel 6-core CPUs) and 48 GB RAM. The FreeBSD VM is very slow: when we compile or build something on it, it takes 5 minutes yet reports "build time 18 seconds." There are no VMware Tools installed in the VM. The same VM is installed on another R710 running ESXi 4.0 Dell edition, and there are no problems with that one. Does anyone have any idea what to look for? The VMs on the second server (ESXi 4.1) are clones of the VMs running on the first server (ESXi 4.0 Dell edition). It's not possible for me to move the VM back to the first server, since the file containing the VM is too big. We installed the new ESXi with a datastore using 8 MB blocks, because 1 MB blocks didn't allow for the file size we needed. It looks like the web server on the new ESXi 4.1 works fine, but I haven't really tested it. There are no VMware Tools installed on any of the VMs (FreeBSD). The block size of the second (ESXi 4.1) datastore is 8 MB, and 1 MB on the first (ESXi 4.0).

    Read the article

  • Windows 7 Alt-Tab window disappears to the back when Aero Peek is enabled

    - by nhinkle
    There is a strange bug with Aero Peek and Alt-Tab where the Alt-Tab window itself is sent all the way to the back, and is obscured by whatever window is being previewed in front of it. While it still works for switching windows, it's extremely annoying not to be able to see what other windows are there. One solution I've found is to just disable Aero Peek, but I like the Aero Peek feature when it works, and want it enabled. Some users of Lenovo products have found that uninstalling the "ThinkVantage Communications" VoIP suite fixes it for them, but my laptop is an HP, not a Lenovo. The only VoIP software I have installed is Skype, which I removed to see if it had something to do with VoIP, but that didn't have any effect. I had seen this issue before on my old laptop, but it usually went away after a while, and always after a reboot. On my new computer (HP dm4t) it always occurs, and is driving me nuts. If anybody can actually pinpoint what the problem is and, more importantly, how to fix it, I will be extremely thankful.

    (Screenshot: the window being previewed is in front of the Alt-Tab window.)

    Update: The issue seems to have randomly resolved itself, at least for now. I have no idea what changed, since I haven't made any modifications to the system between when it was broken and when it started working. Attached is another screenshot of it working properly, with the Alt-Tab window in front of everything else. I'd still like to know what causes this if anybody can determine it.

    Update 2: And now it's broken again...

    Read the article

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan:

    - have multiple clusters of machines: one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster
    - control configuration with Puppet - still pretty new to Puppet, but as a developer I like the idea of being able to version-control the configuration of servers
    - have autoscaling enabled on those clusters - obviously the main benefit of the cloud, which makes the much higher cost for any reasonable performance worth it (those Amazon machines are slower than my phone...)
    - deployment controlled by Capistrano, which makes things a lot easier

    So in AWS you get those super nasty public/private machine DNS names... there is no way you can identify machines by those. To ease the problem, it seems AWS wants you to tag everything - so I did. I found a script that makes a CNAME record for each machine from its "ShortName" tag, thanks to the Route 53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster' {} in Puppet. Anyone have a clue how to achieve this? Thanks

    Read the article

  • Can reprepro accept a new version of a package into the repository?

    - by kai
    I have installed a package into my own Debian package repository like so:

        $ sudo reprepro -b /var/packages/ubuntu includedeb maverick my-package_0.8-0_all.deb
        my-package_0.8-0_all.deb: component guessed as 'main'
        Exporting indices...

    I have installed my package on a few machines using apt-get install. I have now added new features to my software and would like to add a new minor version of my package to the repository so that I may update my machines using apt-get upgrade. I try to do this like so:

        $ sudo reprepro -b /var/packages/ubuntu includedeb maverick my-package_0.9-0_all.deb
        my-package_0.9-0_all.deb: component guessed as 'main'
        Skipping inclusion of 'my-package' '1.0-0' in 'maverick|main|i386', as it has already '1.0-0'.
        Skipping inclusion of 'my-package' '1.0-0' in 'maverick|main|amd64', as it has already '1.0-0'.

    It looks like I need to tell reprepro that this is a new version of the same package, but I have no idea how to do this. I have read the reprepro man page several times and searched the net for a couple of hours, but I have not found any answers. Am I missing something? Many thanks.
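
    A sketch of the two reprepro verbs that usually untangle this (the distribution and package names simply follow the question): first list what version the repository actually holds, and if it already contains the version string carried by the new .deb, either remove the old entry or rebuild the package with a bumped version in debian/changelog before running includedeb again:

        sudo reprepro -b /var/packages/ubuntu list maverick my-package
        sudo reprepro -b /var/packages/ubuntu remove maverick my-package
        sudo reprepro -b /var/packages/ubuntu includedeb maverick my-package_0.9-0_all.deb

    The "has already '1.0-0'" message suggests reprepro compares the version recorded inside the .deb's control file rather than the filename, which is why a rebuilt package with an unchanged changelog version gets skipped.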

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0?

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid ARP requests. This is an example of a portion of what was sent to me from the logs (with MAC addresses and IP addresses changed):

        xxxx:xxxx:xxxx/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp])

    I don't see anything in the ARP tables (arp -a) or routing tables (netstat -r), and I don't see anything relating to 0.0.0.0 when snooping the ARP requests. The only place I see any reference to 0.0.0.0 is if I do netstat -a for SCTP:

        SCTP: Local Address      Remote Address      Swind  Send-Q Rwind   Recv-Q StrsI/O  State
              ------------------ ------------------- ------ ------ ------- ------ -------- -----------
              0.0.0.0            0.0.0.0             0      0      102400  0      32/32    CLOSED

    But I'm not really sure what that means, and it doesn't seem like I can disable SCTP. Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there any way to at least block those requests using ipfilter or something else?
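
    A sketch of how the offending frames could be captured directly on the box, assuming the primary interface is e1000g0 (substitute the real interface name from ifconfig -a); Solaris 10's snoop can filter on ARP and shows the sender hardware and protocol addresses, which should confirm whether the box really is sourcing ARPs from 0.0.0.0:

        ifconfig -a
        snoop -d e1000g0 -v arp

    ARP probes with a sender IP of 0.0.0.0 are also what duplicate-address detection looks like (DHCP clients send them before claiming a lease), so it may be worth checking whether any interface on the machine is configured for DHCP before treating this as a fault.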

    Read the article

  • HTTP Error 503. The service is unavailable

    - by user1671639
    I'm struggling to set up the environment in IIS 8. I searched a lot but couldn't find the right solution. I checked the error logs, but got no clear idea from them. From C:\Windows\System32\LogFiles\HTTPERR:

        2013-10-09 09:28:39 192.168.43.205 60172 192.168.43.205 80 HTTP/1.1 GET / 503 2 AppOffline qa.hti.local
        2013-10-09 09:28:39 192.168.43.205 60192 192.168.43.205 80 HTTP/1.1 GET /favicon.ico 503 2 AppOffline qa.hti.local

    Then in Event Viewer, warnings:

        A listener channel for protocol 'http' in worker process '11188' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '7492' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '9088' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '9964' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.
        A listener channel for protocol 'http' in worker process '7716' serving application pool 'qa.hti.local' reported a listener channel failure. The data field contains the error number.

    I don't understand what these warnings mean. And an error:

        Application pool 'qa.hti.local' is being automatically disabled due to a series of failures in the process(es) serving that application pool.

    Note: I learned that 5 consecutive failures lead to an app pool being disabled, and that this limit can be increased. I tried increasing it, but with no success. OS: Windows Server 2012. IIS version: 8. Please share your thoughts.
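
    The 503s are tagged AppOffline and the event log shows the application pool being disabled by rapid-fail protection, so a sketch of how to confirm the pool's state and start it again once the underlying fault is fixed (appcmd.exe ships with IIS; the pool name follows the logs above):

        %windir%\system32\inetsrv\appcmd list apppool "qa.hti.local"
        %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"qa.hti.local"

    Restarting the pool only clears the symptom, though; the listener-channel warnings point at the worker process failing during startup (wrong bitness, a missing .NET version for the pool, or a crash in application startup are common causes), so the application event log entries just before each warning are the ones worth reading.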

    Read the article

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server. It's running a PHP-based web application. The server is sitting behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine sitting in the DMZ. The issue we're having is that when data is posted to the server, it seems to be cut short for some users. It's reproducible for certain users on the same box, but when the same user sends the same data from another PC on the same LAN, it works. The data gets cut to around 1140 bytes, I'm told. Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would be of help. Our firewall is Astaro.

    EDIT: The customer has temporarily set the Ethernet frame size on the server to 500 bytes. This made it work for now! I know some of the customers are using an internet provider that runs PPPoE.
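
    The fact that shrinking the frame size helps, together with the PPPoE users, points at an MTU/fragmentation problem rather than the application. A sketch of a quick test from the Debian server (the client hostname is a placeholder; 1472 = 1500 minus the 28 bytes of IP and ICMP headers, and PPPoE paths typically top out 8 bytes lower):

        ping -M do -s 1472 client.example.com
        ping -M do -s 1464 client.example.com

    If the larger probe fails (with "Frag needed", or silently if ICMP is filtered) while the smaller one succeeds, the fix is usually MSS clamping on the firewall or lowering the server's MTU, rather than anything in PHP.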

    Read the article

  • Page reload needed several times before loading normally

    - by tim peterson
    Sorry my question is so vague; I just have no idea where to start in solving it, and I am quite a novice with servers. Recently my site (served over HTTPS, running on an Amazon EC2 Ubuntu instance with Apache 2.2) has had this issue where I need to load a page several times (3-4) before it will load normally without issue. It will then load normally as long as I keep loading pages regularly (every couple of seconds). It will stall again if I don't load pages for a few minutes. It has nothing to do with my application, because I don't have this problem with the exact same app codebase on the Apache installation on my laptop. The only thing I changed, to my knowledge, is that I installed mod_pagespeed (https://developers.google.com/speed/pagespeed/mod). However, I have since turned it off by setting my pagespeed.conf to mod_pagespeed off. Unfortunately, that didn't solve the problem. I'm wondering, in general, how to troubleshoot this problem. For instance, are there Linux commands to check page-loading performance? Also, it looks like I have lots of new error logs in my /var/log/apache2 directory which I believe weren't there a few months ago. Lots of this:

        error.log          RewriteLog.log.24.gz  ssl_access.log.40.gz
        error.log.1        RewriteLog.log.25.gz  ssl_access.log.41.gz
        error.log.10.gz    RewriteLog.log.26.gz  ssl_access.log.42.gz
        error.log.11.gz    RewriteLog.log.27.gz

    Any thoughts? Thank you, tim
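
    A sketch of how to confirm the module really is out of the request path and to watch what Apache logs at the moment of a stall (standard Debian/Ubuntu Apache tooling; this assumes mod_pagespeed was enabled the usual way via mods-enabled, which the Google package does by default):

        sudo apache2ctl -M | grep -i pagespeed        # is the module still loaded?
        sudo a2dismod pagespeed && sudo service apache2 restart
        sudo tail -f /var/log/apache2/error.log       # reproduce a slow load and watch for errors

    Turning the module off in pagespeed.conf still leaves it loaded into Apache, so fully disabling it is a cleaner test; if the stalls survive that, KeepAlive/SSL settings or DNS resolution on the EC2 instance would be the next things worth timing, for example with curl -w.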

    Read the article

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputation, I won't mention the names. I'll just use:

    - Business I worked for previously - ABC Web Dev
    - Hosting company they used - XYZ Hosting

    I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their clients' data, including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites after pulling them from their local development computers and putting them up on another hosting provider. They ended up losing a lot of clients because of it, and their reputation was ruined. I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace but, although they are a great company, according to Wikipedia they have still had downtime in their past. I thought it might be a good idea to try to run two providers at once, to ensure that if anything happened to one, the websites would still be live because of the other. I know the websites would have to be pulling from one server at all times, but if there's a way to redirect requests to the second server when the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally which will allow for quick recovery if a provider did have any issues; however, I'd like to avoid any downtime at all if possible. So my questions are:

    - Has anyone tried running two providers simultaneously?
    - Would this be considered good practice, or am I going too far?
    - Is there really any way to run two simultaneously where one server acts as a backup?

    Read the article

  • Outlook users connected to Exchange can send email as other users

    - by Sherriffwoody
    We have found an issue on our systems whereby an Outlook user (both 2007 and 2010) connected to our Exchange server (2007) can send emails as other users, using the following steps within Outlook:

    - Click <New Email>.
    - Select the <From> button to show a list of accounts Outlook contains; it also shows the option Select <Other Email Address>.
    - This brings up a small dialog box with another button which, when selected, allows the user to pick an email address from their contacts or from Active Directory.

    In most cases the user can select any email address within Active Directory and send an email as if it were coming from that selected address. It seems not everyone has this ability, and I'm guessing it is something to do with settings in Exchange or AD (version 6) - or is there a group policy that can be implemented to stop users being able to do this? We have no idea what allows this, and I have failed to find anything using Dr Google. No one has set up delegates within Outlook, but it does seem to be something similar. Does anyone know how to lock this down? Thanks in advance

    Read the article

  • Leopard Network Shares and browsing are unreliable

    - by EvilChookie
    I have two Macs running Leopard 10.5.8. One is a 13" MBP connected via WiFi, and the other is a 24" 2008 iMac connected via Ethernet. There are at least another 6-10 machines (Windows and Mac) awake on the network (with shares) at any given time, yet there are plenty of times when I cannot see any devices/shares in the "Shared" section in Finder, nor can I see any computers in "Network" in Finder. Restarting doesn't help. I've restarted all the networking gear in the house to no avail. Our network is a series of gigabit switches connected to a D-Link gaming router. I believe we use OpenDNS, and our provider is Cox. I hate having to use "Go - Connect to Server" to browse to commonly used file shares (by IP). I'd like to know why my shares do not always and consistently appear in Leopard.

    Edit: I ran OnyX this morning and performed the cleaning and maintenance operations (including disk permissions) on both my Macs, and at least one of them has started showing network devices again (the other is still going). No idea how long this will last. Any ideas as to what is causing this issue, and how to prevent it?

    Edit 2: Aaaand there the shares go again. So running OnyX is not a permanent or reliable fix for this issue.

    Edit 3: After a clean reinstall and update, network shares are still unreliable. The SMBClient command mentioned in the comments shows me the information it's supposed to show, but the shares do not appear in the Shared section. They also vanish at random and reappear at random throughout the day.

    Read the article

  • How do I fix the issue causing "incomplete startup packet" log messages when trying to implement replication in PostgreSQL?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using the pg_start_backup/pg_stop_backup strategy from this other blog post. I've read through the docs and the Postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recovery state reached" is normal or the first sign of a problem. The searching I've done on "the database is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The Postgres log on the slave looks like this:

        2013-08-26 13:01:38 CDT LOG: entering standby mode
        2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive
        2013-08-26 13:01:38 CDT LOG: incomplete startup packet
        2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020
        2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0
        cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory
        2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:39 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:40 CDT FATAL: the database system is starting up
        2013-08-26 13:01:41 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:42 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:43 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT FATAL: the database system is starting up
        2013-08-26 13:01:44 CDT LOG: incomplete startup packet
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:27 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up
        2013-08-26 13:03:30 CDT FATAL: the database system is starting up

    Thanks! brad
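
    For what it's worth, a sketch of two quick checks with standard PostgreSQL 9.2 tooling to tell whether streaming is actually established despite the noisy log (the sudo -u postgres form assumes a stock Ubuntu install):

        # on the master: is the standby connected and streaming?
        sudo -u postgres psql -c "SELECT client_addr, state, sent_location FROM pg_stat_replication;"

        # on the slave: a hot standby should answer 't' here
        sudo -u postgres psql -c "SELECT pg_is_in_recovery();"

    If the slave refuses the second query with the same "the database system is starting up" message, that usually means hot_standby is not set to on in the slave's postgresql.conf, so it recovers correctly but never accepts connections; the "incomplete startup packet" lines themselves are typically just a monitoring check or load balancer opening and closing the port, not a replication fault.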

    Read the article

  • Has anyone used the sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets to the Sharedband routers over all available lines, and the Sharedband routers handle all the virtual circuitry and provide the NATing to whatever IP address(es) they've assigned you. It looks like a clever idea, and a good way to provide some resilience over ADSL links. You can even use ADSL links from different ISPs and Sharedband will still bond them for you. But I find myself wondering how well it really works, and whether it's worth it. The Draytek routers can already load-balance (though not bond) up to four ADSL lines, so the Sharedband product really only offers an advantage if you're hosting servers, i.e. you can have one IP address accepting incoming connections through all your (working) ADSL lines. But should you really try to host servers using ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking if anyone is using Sharedband, and if so, what do you think of it? JR

    Read the article

  • QT Creator 64-bit Snow Leopard

    - by quadelirus
    I have a bunch of libraries that I need to link against, which I installed via MacPorts. They are 64-bit libraries. I'm working on an application written with QT Creator and the .pro is set up. I downloaded the QT SDK for Mac OS X, but it is 32-bit, so the compiled code won't link against the 64-bit binaries that I got from MacPorts. OK, so I downloaded the QT SDK source and built it from source using -arch x86_64. Now I have a 64-bit version of the SDK (I think), but it didn't build a QT Creator app. So I need to know one of 4 things:

    1.) I'm guessing that a simple make command will convince the QT SDK to build the Creator for me. If this is true, then what is the command (make creator?). Barring that, I need to know
    2.) the easiest way to get MacPorts to re-download the libraries that I installed, as 32-bit versions (I keep seeing "+universal" mentioned, but I haven't seen it on a command line, and simply calling ports +universal install XYZ doesn't seem to work -- perhaps I need to uninstall and reinstall the package?). Also, is this a stupid idea? Or
    3.) someone who actually has a prebuilt 64-bit QT SDK installer so I don't have to mess with this. It is ridiculous that QT doesn't already have this available, in my opinion -- SL has been out since, what, last August? Or
    4.) -- and this would take the cake -- I don't understand why I can't simply put a "compile-for-64-bit stupid" command directly into the QT pro file and have it build. There isn't really a reason why a compiler compiled in 32 bits couldn't compile to 64 bits, is there?

    Thanks.
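
    On point 2, a sketch of the MacPorts syntax involved (the port name XYZ is just the placeholder from the question; --enforce-variants is the documented way to rebuild an already-installed port with a different variant, though whether +universal is needed at all depends on whether the port already built 64-bit by default on Snow Leopard):

        sudo port install XYZ +universal
        sudo port upgrade --enforce-variants XYZ +universal

    The variant goes after the port name, which is why "ports +universal install XYZ" appears to do nothing.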

    Read the article

  • Should I use /etc/bind/zones/ or /var/cache/bind/?

    - by nbolton
    Each tutorial seems to have a different opinion on this. For my ISC BIND zones, should I use /etc/bind/zones/ or /var/cache/bind/? In the last install I used /var/cache/bind/, but only because I was guided to do so; however, I just spotted a pid file in there on this new Debian install, so I figured that using the "working directory" to store zone files probably wasn't the best idea. It seems that many admins use it so they don't have to type the full path when declaring a new zone. For example:

        file "/etc/bind/zones/db.foobar.com";

    instead of:

        file "db.foobar.com";

    which is obviously easier to type, but is it good or bad practice? Some may also suggest setting the working directory to /etc/bind/zones:

        options {
            // directory "/var/cache/bind";
            directory "/etc/bind/zones";
        }

    ...but something tells me this isn't good practice, since the pid file would be created there, I assume (unless it's just in /var/cache/bind by coincidence). I took a look at the manpage, but it didn't seem to say what the directory option was for. Any ideas exactly what it was designed for?
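
    Whichever directory ends up holding the zone files, a sketch of a quick sanity check after moving them (both tools ship with BIND9 on Debian; the zone name and path just follow the example above):

        named-checkconf /etc/bind/named.conf
        named-checkzone foobar.com /etc/bind/zones/db.foobar.com
        sudo rndc reload

    The usual Debian convention is that /etc/bind holds configuration and read-only master zone data (and, where an AppArmor profile is present, that is what it expects), while /var/cache/bind is the server's writable working directory for slave zone transfers and journal files, which is one reason master zones tend to live under /etc/bind/zones and slave zones under /var/cache/bind.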

    Read the article

  • IIS6 Multiple SSL websites to a single HTTP website?

    - by docflabby
    I'm running an IIS6 server on Windows 2003. All the websites use ASP.NET. I have a number of websites, each running as a separate HTTP site:

        www.domain1.com
        www.domain2.com
        www.domain3.com

    I have a separate HTTPS website, www.secure.com. These websites are all running on the same server. I now wish to integrate the content of www.secure.com into each of the domains in a transparent way, such that each website, despite having its own SSL connection, displays the same site. The complication is that www.secure.com needs to know which website the connection has come from, in order to apply the appropriate branding. The idea behind this is to have only one website, and one location, while keeping the core website brand. https://domain1.com looks a lot better from a marketing point of view (and avoids users getting confused about what our secure website is).

        SSL www.domain1.com/secure - displays www.secure.com (branded domain1)
        SSL www.domain2.com/secure - displays www.secure.com (branded domain2)
        SSL www.domain3.com/secure - displays www.secure.com (branded domain3)

    What would be the best way of achieving this? I'm open to using additional software if necessary. Would a reverse proxy be suitable for this situation?

    Read the article

  • Xen dom0 reports incorrect amount of RAM with dom0_mem set

    - by xen_amnesiac
    I've done a fair bit of searching about this, but have found nothing that answers my question. I have a system with 6 GB of RAM which acts as a Xen server. For reference, it runs Ubuntu 12.04. I've set the kernel parameter dom0_mem=min:512M,max:512M in /etc/default/grub as follows:

        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=min:512M,max:512M"

    I've tried variations of that, with the same result. My question is this: with the above set, the dom0 reports in all applications a RAM amount of 422M. cat /proc/meminfo gives the following:

        $ cat /proc/meminfo
        MemTotal:         432472 kB
        MemFree:           54144 kB
        Buffers:           17640 kB
        Cached:           220104 kB
        SwapCached:        30172 kB
        Active:           136500 kB
        Inactive:         167780 kB
        Active(anon):       6156 kB
        Inactive(anon):    60516 kB
        Active(file):     130344 kB
        Inactive(file):   107264 kB
        Unevictable:          52 kB
        Mlocked:              52 kB
        SwapTotal:       1794044 kB
        SwapFree:        1682012 kB
        Dirty:                 0 kB
        Writeback:             0 kB
        AnonPages:         39572 kB
        Mapped:             8048 kB
        Shmem:               136 kB
        Slab:              44324 kB
        SReclaimable:      22012 kB
        SUnreclaim:        22312 kB
        KernelStack:        1280 kB
        PageTables:         3840 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     2010280 kB
        Committed_AS:     329192 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:      313988 kB
        VmallocChunk:   34359417340 kB
        HardwareCorrupted:     0 kB
        AnonHugePages:         0 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       2048 kB
        DirectMap4k:      524696 kB
        DirectMap2M:           0 kB

    top, htop, free -m, and byobu's RAM monitor all report the same amount. At first I thought this was because the onboard graphics were borrowing some memory, but it persists now that I have switched to a dedicated GPU. Is this normal behavior, or has something gone amiss? It's just about 100 MB of RAM that's "gone", and I have no idea where it went. I understand that it's normal that not all RAM is available for allocation, but does the system really take an amount this large relative to the RAM available?
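
    A sketch of how to cross-check what the hypervisor actually allocated to dom0 against what the dom0 kernel can use (xm is the default toolstack on Ubuntu 12.04's Xen 4.1; substitute xl if that is what's configured). The gap between the two figures is normally the kernel's own reserved memory rather than RAM that has gone missing:

        sudo xm list                  # the Mem column shows the hypervisor's dom0 allocation
        sudo xm info | grep -i mem
        dmesg | grep -i "Memory:"     # what the dom0 kernel reserved for itself at boot

    If xm list shows 512 MB for Domain-0 while MemTotal sits around 422 MB, the difference is the usual reservation for kernel text, initramfs, and per-page bookkeeping that every Linux system subtracts from "installed" RAM; it is simply more visible at small dom0 sizes.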

    Read the article

  • Don't let the mouse wake up displays from standby

    - by progo
    I like to put my displays into powersave/standby mode when I leave the computer for a while. It would be OK if it weren't for the oversensitive mouse. Sometimes the driver reads in some movement that's not visible to the naked eye (the cursor, that is) and it breaks the power save. It would then wait another 10 minutes before going back to standby. My workaround is the following script, bound to C-S-q:

        xlock -startCmd 'xset dpms 2 2 2' -endCmd 'xset dpms 600 1200 1300' -mode blank -echokeys -timeelapsed +usefirst

    By using xset I set the values to 2 seconds each before going to standby. It's not nice, anyway. Sometimes there are cool fortunes that I want to read before typing in the password. I could keep the cursor moving, but it's kludgy. (By the way, xlock's mousemotion option doesn't help -- it just hides the cursor, but the displays fire up nevertheless.) So the question: is there a way to make the displays go to standby and stay there until a keyboard key is pressed? I'm running Gentoo and a recent Xorg, but I hope the answer doesn't have to be distro-specific. Basically the answer can be as simple as: how do I enable/disable the mouse from the command line? I think that would do the job if DPMS doesn't support the idea.
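
    A sketch of the "disable the mouse from the command line" approach using xinput, which is part of standard Xorg tooling; the device name below is a placeholder, so take the real name or id from xinput list:

        xinput list                                                # find the pointer device
        xinput set-prop "Logitech USB Mouse" "Device Enabled" 0   # disable before locking
        xinput set-prop "Logitech USB Mouse" "Device Enabled" 1   # re-enable afterwards

    Wrapping the disable/enable calls into the existing xlock -startCmd/-endCmd hooks alongside the xset calls would keep the current keybinding workflow intact, at the cost of also losing the mouse while locked (which, for this use case, is arguably the point).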

    Read the article
